Appendix B. Understanding the luks_tang_inventory.yml file
Appendix B. Understanding the luks_tang_inventory.yml file B.1. Configuration parameters for disk encryption hc_nodes (required) A list of hyperconverged hosts, identified by the back-end FQDN of each host, and the configuration details of those hosts. Configuration that is specific to a host is defined under that host's back-end FQDN. Configuration that is common to all hosts is defined in the vars: section. blacklist_mpath_devices (optional) By default, Red Hat Virtualization Host enables multipath configuration, which provides unique multipath names and worldwide identifiers for all disks, even when disks do not have an underlying multipath configuration. Include this section if you do not have multipath configuration so that the multipath device names are not used for listed devices. Disks that are not listed here are assumed to have multipath configuration available, and require the path format /dev/mapper/<WWID> instead of /dev/sdx when defined in subsequent sections of the inventory file. On a server with four devices (sda, sdb, sdc, and sdd), the following configuration blacklists only two devices. The path format /dev/mapper/<WWID> is expected for devices not in this list. gluster_infra_luks_devices (required) A list of devices to encrypt and the encryption passphrase to use for each device. devicename The name of the device, in the format /dev/sdx. passphrase The password to use for this device when configuring encryption. After disk encryption with Network-Bound Disk Encryption (NBDE) is configured, a new random key is generated, providing greater security. rootpassphrase (required) The password that you used when you selected Encrypt my data during operating system installation on this host. rootdevice (required) The root device that was encrypted when you selected Encrypt my data during operating system installation on this host. networkinterface (required) The network interface this host uses to reach the NBDE key server. ip_version (required) Whether to use IPv4 or IPv6 networking. Valid values are IPv4 and IPv6. There is no default value. Mixed networks are not supported. ip_config_method (required) Whether to use DHCP or static networking. Valid values are dhcp and static. There is no default value. The static value requires the following additional parameters, which are defined individually for each host: host_ip_addr, host_ip_prefix, and host_net_gateway. gluster_infra_tangservers The address of your NBDE key server or servers, including http://. If your servers use a port other than the default (80), specify a port by appending :<port> to the end of the URL. B.2. Example luks_tang_inventory.yml Dynamically allocated IP addresses Static IP addresses
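Before moving on to the examples, note that the /dev/mapper/<WWID> requirement described under blacklist_mpath_devices can be verified directly on each host. The following is a minimal sketch, not part of the documented procedure; it assumes the device-mapper-multipath tools are installed on the hyperconverged host, and <WWID> is a placeholder for an identifier taken from the multipath output.

# List multipath maps; the identifier at the start of each entry is the WWID
# that appears under /dev/mapper/.
multipath -ll

# Cross-check the device-mapper names that exist, so inventory entries for
# non-blacklisted disks can use /dev/mapper/<WWID> instead of /dev/sdx.
ls -l /dev/mapper/

# Confirm which /dev/sdX paths back a given multipath map before encrypting it.
lsblk /dev/mapper/<WWID>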
[ "hc_nodes: hosts: host1backend.example.com: [configuration specific to this host] host2backend.example.com: host3backend.example.com: host4backend.example.com: host5backend.example.com: host6backend.example.com: vars: [configuration common to all hosts]", "hc_nodes: hosts: host1backend.example.com: blacklist_mpath_devices: - sdb - sdc", "hc_nodes: hosts: host1backend.example.com: gluster_infra_luks_devices: - devicename: /dev/sdb passphrase: Str0ngPa55#", "hc_nodes: hosts: host1backend.example.com: rootpassphrase: h1-Str0ngPa55#", "hc_nodes: hosts: host1backend.example.com: rootdevice: /dev/sda2", "hc_nodes: hosts: host1backend.example.com: networkinterface: ens3s0f0", "hc_nodes: vars: ip_version: IPv4", "hc_nodes: vars: ip_config_method: dhcp", "hc_nodes: hosts: host1backend.example.com : ip_config_method: static host_ip_addr: 192.168.1.101 host_ip_prefix: 24 host_net_gateway: 192.168.1.100 host2backend.example.com : ip_config_method: static host_ip_addr: 192.168.1.102 host_ip_prefix: 24 host_net_gateway: 192.168.1.100 host3backend.example.com : ip_config_method: static host_ip_addr: 192.168.1.102 host_ip_prefix: 24 host_net_gateway: 192.168.1.100", "hc_nodes: vars: gluster_infra_tangservers: - url: http:// key-server1.example.com - url: http:// key-server2.example.com : 80", "hc_nodes: hosts: host1-backend.example.com : blacklist_mpath_devices: - sda - sdb - sdc gluster_infra_luks_devices: - devicename: /dev/sdb passphrase: dev-sdb-encrypt-passphrase - devicename: /dev/sdc passphrase: dev-sdc-encrypt-passphrase rootpassphrase: host1-root-passphrase rootdevice: /dev/sda2 networkinterface: eth0 host2-backend.example.com : blacklist_mpath_devices: - sda - sdb - sdc gluster_infra_luks_devices: - devicename: /dev/sdb passphrase: dev-sdb-encrypt-passphrase - devicename: /dev/sdc passphrase: dev-sdc-encrypt-passphrase rootpassphrase: host2-root-passphrase rootdevice: /dev/sda2 networkinterface: eth0 host3-backend.example.com : blacklist_mpath_devices: - sda - sdb - sdc gluster_infra_luks_devices: - devicename: /dev/sdb passphrase: dev-sdb-encrypt-passphrase - devicename: /dev/sdc passphrase: dev-sdc-encrypt-passphrase rootpassphrase: host3-root-passphrase rootdevice: /dev/sda2 networkinterface: eth0 vars: ip_version: IPv4 ip_config_method: dhcp gluster_infra_tangservers: - url: http:// key-server1.example.com :80 - url: http:// key-server2.example.com :80", "hc_nodes: hosts: host1-backend.example.com : blacklist_mpath_devices: - sda - sdb - sdc gluster_infra_luks_devices: - devicename: /dev/sdb passphrase: dev-sdb-encrypt-passphrase - devicename: /dev/sdc passphrase: dev-sdc-encrypt-passphrase rootpassphrase: host1-root-passphrase rootdevice: /dev/sda2 networkinterface: eth0 host_ip_addr: host1-static-ip host_ip_prefix: network-prefix host_net_gateway: default-network-gateway host2-backend.example.com : blacklist_mpath_devices: - sda - sdb - sdc gluster_infra_luks_devices: - devicename: /dev/sdb passphrase: dev-sdb-encrypt-passphrase - devicename: /dev/sdc passphrase: dev-sdc-encrypt-passphrase rootpassphrase: host2-root-passphrase rootdevice: /dev/sda2 networkinterface: eth0 host_ip_addr: host1-static-ip host_ip_prefix: network-prefix host_net_gateway: default-network-gateway host3-backend.example.com : blacklist_mpath_devices: - sda - sdb - sdc gluster_infra_luks_devices: - devicename: /dev/sdb passphrase: dev-sdb-encrypt-passphrase - devicename: /dev/sdc passphrase: dev-sdc-encrypt-passphrase rootpassphrase: host3-root-passphrase rootdevice: /dev/sda2 networkinterface: eth0 host_ip_addr: 
host1-static-ip host_ip_prefix: network-prefix host_net_gateway: default-network-gateway vars: ip_version: IPv4 ip_config_method: static gluster_infra_tangservers: - url: http:// key-server1.example.com :80 - url: http:// key-server2.example.com :80" ]
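Before using either example inventory, it can help to confirm that each NBDE key server listed under gluster_infra_tangservers actually answers at the configured URL and port, and to avoid leaving the LUKS and root passphrases in plain text. The sketch below is an illustration rather than part of the documented procedure; the key-server hostnames are the placeholder values from the examples, and it assumes curl and ansible-vault are available on the machine that holds the inventory file.

# A Tang server publishes its advertisement at /adv; an HTTP 200 response
# with a JSON payload indicates the server is reachable on the configured port.
curl -sf http://key-server1.example.com:80/adv > /dev/null && echo "key-server1 OK"
curl -sf http://key-server2.example.com:80/adv > /dev/null && echo "key-server2 OK"

# The inventory contains disk and root passphrases, so consider encrypting it
# at rest and supplying the vault password when the deployment playbook runs.
ansible-vault encrypt luks_tang_inventory.yml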
https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/deploying_red_hat_hyperconverged_infrastructure_for_virtualization_on_a_single_node/understanding-the-luks_tang_inventory-yml-file
Chapter 4. Fixed issues
Chapter 4. Fixed issues This release incorporates all of the fixed issues in the community release of Node.js 22 LTS.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_node.js/22/html/release_notes_for_node.js_22/fixed-issues-nodejs
Chapter 36. floating
Chapter 36. floating This chapter describes the commands under the floating command. 36.1. floating ip create Create floating IP Usage: Table 36.1. Positional arguments Value Summary <network> Network to allocate floating ip from (name or id) Table 36.2. Command arguments Value Summary -h, --help Show this help message and exit --subnet <subnet> Subnet on which you want to create the floating ip (name or ID) --port <port> Port to be associated with the floating ip (name or ID) --floating-ip-address <ip-address> Floating ip address --fixed-ip-address <ip-address> Fixed ip address mapped to the floating ip --qos-policy <qos-policy> Attach qos policy to the floating ip (name or id) --description <description> Set floating ip description --project <project> Owner's project (name or id) --dns-domain <dns-domain> Set dns domain for this floating ip --dns-name <dns-name> Set dns name for this floating ip --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --tag <tag> Tag to be added to the floating ip (repeat option to set multiple tags) --no-tag No tags associated with the floating ip Table 36.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 36.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 36.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 36.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 36.2. floating ip delete Delete floating IP(s) Usage: Table 36.7. Positional arguments Value Summary <floating-ip> Floating ip(s) to delete (ip address or id) Table 36.8. Command arguments Value Summary -h, --help Show this help message and exit 36.3. floating ip list List floating IP(s) Usage: Table 36.9. Command arguments Value Summary -h, --help Show this help message and exit --network <network> List floating ip(s) according to given network (name or ID) --port <port> List floating ip(s) according to given port (name or ID) --fixed-ip-address <ip-address> List floating ip(s) according to given fixed ip address --floating-ip-address <ip-address> List floating ip(s) according to given floating ip address --long List additional fields in output --status <status> List floating ip(s) according to given status ( ACTIVE , DOWN ) --project <project> List floating ip(s) according to given project (name or ID) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --router <router> List floating ip(s) according to given router (name or ID) --tags <tag>[,<tag>,... ] List floating ip which have all given tag(s) (comma- separated list of tags) --any-tags <tag>[,<tag>,... ] List floating ip which have any given tag(s) (comma- separated list of tags) --not-tags <tag>[,<tag>,... 
] Exclude floating ip which have all given tag(s) (Comma-separated list of tags) --not-any-tags <tag>[,<tag>,... ] Exclude floating ip which have any given tag(s) (Comma-separated list of tags) Table 36.10. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 36.11. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 36.12. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 36.13. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 36.4. floating ip pool list List pools of floating IP addresses Usage: Table 36.14. Command arguments Value Summary -h, --help Show this help message and exit Table 36.15. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 36.16. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 36.17. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 36.18. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 36.5. floating ip port forwarding create Create floating IP port forwarding Usage: Table 36.19. Positional arguments Value Summary <floating-ip> Floating ip that the port forwarding belongs to (ip address or ID) Table 36.20. 
Command arguments Value Summary -h, --help Show this help message and exit --internal-ip-address <internal-ip-address> The fixed ipv4 address of the network port associated to the floating IP port forwarding --port <port> The name or id of the network port associated to the floating IP port forwarding --internal-protocol-port <port-number> The protocol port number of the network port fixed IPv4 address associated to the floating IP port forwarding --external-protocol-port <port-number> The protocol port number of the port forwarding's floating IP address --protocol <protocol> The protocol used in the floating ip port forwarding, for instance: TCP, UDP --description <description> A text to describe/contextualize the use of the port forwarding configuration Table 36.21. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 36.22. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 36.23. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 36.24. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 36.6. floating ip port forwarding delete Delete floating IP port forwarding Usage: Table 36.25. Positional arguments Value Summary <floating-ip> Floating ip that the port forwarding belongs to (ip address or ID) <port-forwarding-id> The id of the floating ip port forwarding(s) to delete Table 36.26. Command arguments Value Summary -h, --help Show this help message and exit 36.7. floating ip port forwarding list List floating IP port forwarding Usage: Table 36.27. Positional arguments Value Summary <floating-ip> Floating ip that the port forwarding belongs to (ip address or ID) Table 36.28. Command arguments Value Summary -h, --help Show this help message and exit --port <port> Filter the list result by the id or name of the internal network port --external-protocol-port <port-number> Filter the list result by the protocol port number of the floating IP --protocol protocol Filter the list result by the port protocol Table 36.29. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 36.30. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 36.31. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 36.32. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. 
you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 36.8. floating ip port forwarding set Set floating IP Port Forwarding Properties Usage: Table 36.33. Positional arguments Value Summary <floating-ip> Floating ip that the port forwarding belongs to (ip address or ID) <port-forwarding-id> The id of the floating ip port forwarding Table 36.34. Command arguments Value Summary -h, --help Show this help message and exit --port <port> The id of the network port associated to the floating IP port forwarding --internal-ip-address <internal-ip-address> The fixed ipv4 address of the network port associated to the floating IP port forwarding --internal-protocol-port <port-number> The tcp/udp/other protocol port number of the network port fixed IPv4 address associated to the floating IP port forwarding --external-protocol-port <port-number> The tcp/udp/other protocol port number of the port forwarding's floating IP address --protocol <protocol> The ip protocol used in the floating ip port forwarding --description <description> A text to describe/contextualize the use of the port forwarding configuration 36.9. floating ip port forwarding show Display floating IP Port Forwarding details Usage: Table 36.35. Positional arguments Value Summary <floating-ip> Floating ip that the port forwarding belongs to (ip address or ID) <port-forwarding-id> The id of the floating ip port forwarding Table 36.36. Command arguments Value Summary -h, --help Show this help message and exit Table 36.37. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 36.38. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 36.39. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 36.40. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 36.10. floating ip set Set floating IP Properties Usage: Table 36.41. Positional arguments Value Summary <floating-ip> Floating ip to modify (ip address or id) Table 36.42. Command arguments Value Summary -h, --help Show this help message and exit --port <port> Associate the floating ip with port (name or id) --fixed-ip-address <ip-address> Fixed ip of the port (required only if port has multiple IPs) --description <description> Set floating ip description --qos-policy <qos-policy> Attach qos policy to the floating ip (name or id) --no-qos-policy Remove the qos policy attached to the floating ip --tag <tag> Tag to be added to the floating ip (repeat option to set multiple tags) --no-tag Clear tags associated with the floating ip. specify both --tag and --no-tag to overwrite current tags 36.11. floating ip show Display floating IP details Usage: Table 36.43. 
Positional arguments Value Summary <floating-ip> Floating ip to display (ip address or id) Table 36.44. Command arguments Value Summary -h, --help Show this help message and exit Table 36.45. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 36.46. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 36.47. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 36.48. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 36.12. floating ip unset Unset floating IP Properties Usage: Table 36.49. Positional arguments Value Summary <floating-ip> Floating ip to disassociate (ip address or id) Table 36.50. Command arguments Value Summary -h, --help Show this help message and exit --port Disassociate any port associated with the floating ip --qos-policy Remove the qos policy attached to the floating ip --tag <tag> Tag to be removed from the floating ip (repeat option to remove multiple tags) --all-tag Clear all tags associated with the floating ip
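As a quick illustration of how the create, set, unset, and delete subcommands documented above fit together, the following sketch walks a floating IP through a typical lifecycle. It only uses options listed in the tables above; the network name, port name, and IP address are invented placeholders, not output from a real deployment.

# Allocate a floating IP from an external network (36.1).
openstack floating ip create public

# Associate it with an instance port and add a description (36.10).
openstack floating ip set --port my-instance-port --description "web frontend" 203.0.113.10

# Inspect the result (36.11) and list floating IPs on that network (36.3).
openstack floating ip show 203.0.113.10
openstack floating ip list --network public --long

# Disassociate the port and release the address (36.12, 36.2).
openstack floating ip unset --port 203.0.113.10
openstack floating ip delete 203.0.113.10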
[ "openstack floating ip create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--subnet <subnet>] [--port <port>] [--floating-ip-address <ip-address>] [--fixed-ip-address <ip-address>] [--qos-policy <qos-policy>] [--description <description>] [--project <project>] [--dns-domain <dns-domain>] [--dns-name <dns-name>] [--project-domain <project-domain>] [--tag <tag> | --no-tag] <network>", "openstack floating ip delete [-h] <floating-ip> [<floating-ip> ...]", "openstack floating ip list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--network <network>] [--port <port>] [--fixed-ip-address <ip-address>] [--floating-ip-address <ip-address>] [--long] [--status <status>] [--project <project>] [--project-domain <project-domain>] [--router <router>] [--tags <tag>[,<tag>,...]] [--any-tags <tag>[,<tag>,...]] [--not-tags <tag>[,<tag>,...]] [--not-any-tags <tag>[,<tag>,...]]", "openstack floating ip pool list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending]", "openstack floating ip port forwarding create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --internal-ip-address <internal-ip-address> --port <port> --internal-protocol-port <port-number> --external-protocol-port <port-number> --protocol <protocol> [--description <description>] <floating-ip>", "openstack floating ip port forwarding delete [-h] <floating-ip> <port-forwarding-id> [<port-forwarding-id> ...]", "openstack floating ip port forwarding list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--port <port>] [--external-protocol-port <port-number>] [--protocol protocol] <floating-ip>", "openstack floating ip port forwarding set [-h] [--port <port>] [--internal-ip-address <internal-ip-address>] [--internal-protocol-port <port-number>] [--external-protocol-port <port-number>] [--protocol <protocol>] [--description <description>] <floating-ip> <port-forwarding-id>", "openstack floating ip port forwarding show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <floating-ip> <port-forwarding-id>", "openstack floating ip set [-h] [--port <port>] [--fixed-ip-address <ip-address>] [--description <description>] [--qos-policy <qos-policy> | --no-qos-policy] [--tag <tag>] [--no-tag] <floating-ip>", "openstack floating ip show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <floating-ip>", "openstack floating ip unset [-h] [--port] [--qos-policy] [--tag <tag> | --all-tag] <floating-ip>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/command_line_interface_reference/floating
CLI tools
CLI tools OpenShift Dedicated 4 Learning how to use the command-line tools for OpenShift Dedicated Red Hat OpenShift Documentation Team
[ "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "subscription-manager register", "subscription-manager refresh", "subscription-manager list --available --matches '*OpenShift*'", "subscription-manager attach --pool=<pool_id>", "subscription-manager repos --enable=\"rhocp-4-for-rhel-8-x86_64-rpms\"", "yum install openshift-clients", "oc <command>", "brew install openshift-cli", "oc <command>", "oc login -u user1", "Server [https://localhost:8443]: https://openshift.example.com:6443 1 The server uses a certificate signed by an unknown authority. You can bypass the certificate check, but any data you send to the server could be intercepted by others. Use insecure connections? (y/n): y 2 Authentication required for https://openshift.example.com:6443 (openshift) Username: user1 Password: 3 Login successful. You don't have any projects. You can try to create a new project, by running oc new-project <projectname> Welcome! See 'oc help' to get started.", "oc login <cluster_url> --web 1", "Opening login URL in the default browser: https://openshift.example.com Opening in existing browser session.", "Login successful. You don't have any projects. You can try to create a new project, by running oc new-project <projectname>", "oc new-project my-project", "Now using project \"my-project\" on server \"https://openshift.example.com:6443\".", "oc new-app https://github.com/sclorg/cakephp-ex", "--> Found image 40de956 (9 days old) in imagestream \"openshift/php\" under tag \"7.2\" for \"php\" Run 'oc status' to view your app.", "oc get pods -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE cakephp-ex-1-build 0/1 Completed 0 5m45s 10.131.0.10 ip-10-0-141-74.ec2.internal <none> cakephp-ex-1-deploy 0/1 Completed 0 3m44s 10.129.2.9 ip-10-0-147-65.ec2.internal <none> cakephp-ex-1-ktz97 1/1 Running 0 3m33s 10.128.2.11 ip-10-0-168-105.ec2.internal <none>", "oc logs cakephp-ex-1-deploy", "--> Scaling cakephp-ex-1 to 1 --> Success", "oc project", "Using project \"my-project\" on server \"https://openshift.example.com:6443\".", "oc status", "In project my-project on server https://openshift.example.com:6443 svc/cakephp-ex - 172.30.236.80 ports 8080, 8443 dc/cakephp-ex deploys istag/cakephp-ex:latest <- bc/cakephp-ex source builds https://github.com/sclorg/cakephp-ex on openshift/php:7.2 deployment #1 deployed 2 minutes ago - 1 pod 3 infos identified, use 'oc status --suggest' to see details.", "oc api-resources", "NAME SHORTNAMES APIGROUP NAMESPACED KIND bindings true Binding componentstatuses cs false ComponentStatus configmaps cm true ConfigMap", "oc help", "OpenShift Client This client helps you develop, build, deploy, and run your applications on any OpenShift or Kubernetes compatible platform. It also includes the administrative commands for managing a cluster under the 'adm' subcommand. Usage: oc [flags] Basic Commands: login Log in to a server new-project Request a new project new-app Create a new application", "oc create --help", "Create a resource by filename or stdin JSON and YAML formats are accepted. Usage: oc create -f FILENAME [flags]", "oc explain pods", "KIND: Pod VERSION: v1 DESCRIPTION: Pod is a collection of containers that can run on a host. This resource is created by clients and scheduled onto hosts. 
FIELDS: apiVersion <string> APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#resources", "oc logout", "Logged \"user1\" out on \"https://openshift.example.com\"", "oc completion bash > oc_bash_completion", "sudo cp oc_bash_completion /etc/bash_completion.d/", "cat >>~/.zshrc<<EOF autoload -Uz compinit compinit if [ USDcommands[oc] ]; then source <(oc completion zsh) compdef _oc oc fi EOF", "apiVersion: v1 clusters: 1 - cluster: insecure-skip-tls-verify: true server: https://openshift1.example.com:8443 name: openshift1.example.com:8443 - cluster: insecure-skip-tls-verify: true server: https://openshift2.example.com:8443 name: openshift2.example.com:8443 contexts: 2 - context: cluster: openshift1.example.com:8443 namespace: alice-project user: alice/openshift1.example.com:8443 name: alice-project/openshift1.example.com:8443/alice - context: cluster: openshift1.example.com:8443 namespace: joe-project user: alice/openshift1.example.com:8443 name: joe-project/openshift1/alice current-context: joe-project/openshift1.example.com:8443/alice 3 kind: Config preferences: {} users: 4 - name: alice/openshift1.example.com:8443 user: token: xZHd2piv5_9vQrg-SKXRJ2Dsl9SceNJdhNTljEKTb8k", "oc status", "status In project Joe's Project (joe-project) service database (172.30.43.12:5434 -> 3306) database deploys docker.io/openshift/mysql-55-centos7:latest #1 deployed 25 minutes ago - 1 pod service frontend (172.30.159.137:5432 -> 8080) frontend deploys origin-ruby-sample:latest <- builds https://github.com/openshift/ruby-hello-world with joe-project/ruby-20-centos7:latest #1 deployed 22 minutes ago - 2 pods To see more information about a service or deployment, use 'oc describe service <name>' or 'oc describe dc <name>'. 
You can use 'oc get all' to see lists of each of the types described in this example.", "oc project", "Using project \"joe-project\" from context named \"joe-project/openshift1.example.com:8443/alice\" on server \"https://openshift1.example.com:8443\".", "oc project alice-project", "Now using project \"alice-project\" on server \"https://openshift1.example.com:8443\".", "oc login -u system:admin -n default", "oc config set-cluster <cluster_nickname> [--server=<master_ip_or_fqdn>] [--certificate-authority=<path/to/certificate/authority>] [--api-version=<apiversion>] [--insecure-skip-tls-verify=true]", "oc config set-context <context_nickname> [--cluster=<cluster_nickname>] [--user=<user_nickname>] [--namespace=<namespace>]", "oc config use-context <context_nickname>", "oc config set <property_name> <property_value>", "oc config unset <property_name>", "oc config view", "oc config view --config=<specific_filename>", "oc login https://openshift1.example.com --token=ns7yVhuRNpDM9cgzfhhxQ7bM5s7N2ZVrkZepSRf4LC0", "oc config view", "apiVersion: v1 clusters: - cluster: insecure-skip-tls-verify: true server: https://openshift1.example.com name: openshift1-example-com contexts: - context: cluster: openshift1-example-com namespace: default user: alice/openshift1-example-com name: default/openshift1-example-com/alice current-context: default/openshift1-example-com/alice kind: Config preferences: {} users: - name: alice/openshift1.example.com user: token: ns7yVhuRNpDM9cgzfhhxQ7bM5s7N2ZVrkZepSRf4LC0", "oc config set-context `oc config current-context` --namespace=<project_name>", "oc whoami -c", "#!/bin/bash optional argument handling if [[ \"USD1\" == \"version\" ]] then echo \"1.0.0\" exit 0 fi optional argument handling if [[ \"USD1\" == \"config\" ]] then echo USDKUBECONFIG exit 0 fi echo \"I am a plugin named kubectl-foo\"", "chmod +x <plugin_file>", "sudo mv <plugin_file> /usr/local/bin/.", "oc plugin list", "The following compatible plugins are available: /usr/local/bin/<plugin_file>", "oc ns", "Update pod 'foo' with the annotation 'description' and the value 'my frontend' # If the same annotation is set multiple times, only the last value will be applied oc annotate pods foo description='my frontend' # Update a pod identified by type and name in \"pod.json\" oc annotate -f pod.json description='my frontend' # Update pod 'foo' with the annotation 'description' and the value 'my frontend running nginx', overwriting any existing value oc annotate --overwrite pods foo description='my frontend running nginx' # Update all pods in the namespace oc annotate pods --all description='my frontend running nginx' # Update pod 'foo' only if the resource is unchanged from version 1 oc annotate pods foo description='my frontend running nginx' --resource-version=1 # Update pod 'foo' by removing an annotation named 'description' if it exists # Does not require the --overwrite flag oc annotate pods foo description-", "Print the supported API resources oc api-resources # Print the supported API resources with more information oc api-resources -o wide # Print the supported API resources sorted by a column oc api-resources --sort-by=name # Print the supported namespaced resources oc api-resources --namespaced=true # Print the supported non-namespaced resources oc api-resources --namespaced=false # Print the supported API resources with a specific APIGroup oc api-resources --api-group=rbac.authorization.k8s.io", "Print the supported API versions oc api-versions", "Apply the configuration in pod.json to a pod oc apply 
-f ./pod.json # Apply resources from a directory containing kustomization.yaml - e.g. dir/kustomization.yaml oc apply -k dir/ # Apply the JSON passed into stdin to a pod cat pod.json | oc apply -f - # Apply the configuration from all files that end with '.json' oc apply -f '*.json' # Note: --prune is still in Alpha # Apply the configuration in manifest.yaml that matches label app=nginx and delete all other resources that are not in the file and match label app=nginx oc apply --prune -f manifest.yaml -l app=nginx # Apply the configuration in manifest.yaml and delete all the other config maps that are not in the file oc apply --prune -f manifest.yaml --all --prune-allowlist=core/v1/ConfigMap", "Edit the last-applied-configuration annotations by type/name in YAML oc apply edit-last-applied deployment/nginx # Edit the last-applied-configuration annotations by file in JSON oc apply edit-last-applied -f deploy.yaml -o json", "Set the last-applied-configuration of a resource to match the contents of a file oc apply set-last-applied -f deploy.yaml # Execute set-last-applied against each configuration file in a directory oc apply set-last-applied -f path/ # Set the last-applied-configuration of a resource to match the contents of a file; will create the annotation if it does not already exist oc apply set-last-applied -f deploy.yaml --create-annotation=true", "View the last-applied-configuration annotations by type/name in YAML oc apply view-last-applied deployment/nginx # View the last-applied-configuration annotations by file in JSON oc apply view-last-applied -f deploy.yaml -o json", "Get output from running pod mypod; use the 'oc.kubernetes.io/default-container' annotation # for selecting the container to be attached or the first container in the pod will be chosen oc attach mypod # Get output from ruby-container from pod mypod oc attach mypod -c ruby-container # Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod # and sends stdout/stderr from 'bash' back to the client oc attach mypod -c ruby-container -i -t # Get output from the first pod of a replica set named nginx oc attach rs/nginx", "Check to see if I can create pods in any namespace oc auth can-i create pods --all-namespaces # Check to see if I can list deployments in my current namespace oc auth can-i list deployments.apps # Check to see if service account \"foo\" of namespace \"dev\" can list pods # in the namespace \"prod\". # You must be allowed to use impersonation for the global option \"--as\". oc auth can-i list pods --as=system:serviceaccount:dev:foo -n prod # Check to see if I can do everything in my current namespace (\"*\" means all) oc auth can-i '*' '*' # Check to see if I can get the job named \"bar\" in namespace \"foo\" oc auth can-i list jobs.batch/bar -n foo # Check to see if I can read pod logs oc auth can-i get pods --subresource=log # Check to see if I can access the URL /logs/ oc auth can-i get /logs/ # List all allowed actions in namespace \"foo\" oc auth can-i --list --namespace=foo", "Reconcile RBAC resources from a file oc auth reconcile -f my-rbac-rules.yaml", "Get your subject attributes. oc auth whoami # Get your subject attributes in JSON format. 
oc auth whoami -o json", "Auto scale a deployment \"foo\", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used oc autoscale deployment foo --min=2 --max=10 # Auto scale a replication controller \"foo\", with the number of pods between 1 and 5, target CPU utilization at 80% oc autoscale rc foo --max=5 --cpu-percent=80", "Cancel the build with the given name oc cancel-build ruby-build-2 # Cancel the named build and print the build logs oc cancel-build ruby-build-2 --dump-logs # Cancel the named build and create a new one with the same parameters oc cancel-build ruby-build-2 --restart # Cancel multiple builds oc cancel-build ruby-build-1 ruby-build-2 ruby-build-3 # Cancel all builds created from the 'ruby-build' build config that are in the 'new' state oc cancel-build bc/ruby-build --state=new", "Print the address of the control plane and cluster services oc cluster-info", "Dump current cluster state to stdout oc cluster-info dump # Dump current cluster state to /path/to/cluster-state oc cluster-info dump --output-directory=/path/to/cluster-state # Dump all namespaces to stdout oc cluster-info dump --all-namespaces # Dump a set of namespaces to /path/to/cluster-state oc cluster-info dump --namespaces default,kube-system --output-directory=/path/to/cluster-state", "Installing bash completion on macOS using homebrew ## If running Bash 3.2 included with macOS brew install bash-completion ## or, if running Bash 4.1+ brew install bash-completion@2 ## If oc is installed via homebrew, this should start working immediately ## If you've installed via other means, you may need add the completion to your completion directory oc completion bash > USD(brew --prefix)/etc/bash_completion.d/oc # Installing bash completion on Linux ## If bash-completion is not installed on Linux, install the 'bash-completion' package ## via your distribution's package manager. 
## Load the oc completion code for bash into the current shell source <(oc completion bash) ## Write bash completion code to a file and source it from .bash_profile oc completion bash > ~/.kube/completion.bash.inc printf \" # oc shell completion source 'USDHOME/.kube/completion.bash.inc' \" >> USDHOME/.bash_profile source USDHOME/.bash_profile # Load the oc completion code for zsh[1] into the current shell source <(oc completion zsh) # Set the oc completion code for zsh[1] to autoload on startup oc completion zsh > \"USD{fpath[1]}/_oc\" # Load the oc completion code for fish[2] into the current shell oc completion fish | source # To load completions for each session, execute once: oc completion fish > ~/.config/fish/completions/oc.fish # Load the oc completion code for powershell into the current shell oc completion powershell | Out-String | Invoke-Expression # Set oc completion code for powershell to run on startup ## Save completion code to a script and execute in the profile oc completion powershell > USDHOME\\.kube\\completion.ps1 Add-Content USDPROFILE \"USDHOME\\.kube\\completion.ps1\" ## Execute completion code in the profile Add-Content USDPROFILE \"if (Get-Command oc -ErrorAction SilentlyContinue) { oc completion powershell | Out-String | Invoke-Expression }\" ## Add completion code directly to the USDPROFILE script oc completion powershell >> USDPROFILE", "Display the current-context oc config current-context", "Delete the minikube cluster oc config delete-cluster minikube", "Delete the context for the minikube cluster oc config delete-context minikube", "Delete the minikube user oc config delete-user minikube", "List the clusters that oc knows about oc config get-clusters", "List all the contexts in your kubeconfig file oc config get-contexts # Describe one context in your kubeconfig file oc config get-contexts my-context", "List the users that oc knows about oc config get-users", "Generate a new admin kubeconfig oc config new-admin-kubeconfig", "Generate a new kubelet bootstrap kubeconfig oc config new-kubelet-bootstrap-kubeconfig", "Refresh the CA bundle for the current context's cluster oc config refresh-ca-bundle # Refresh the CA bundle for the cluster named e2e in your kubeconfig oc config refresh-ca-bundle e2e # Print the CA bundle from the current OpenShift cluster's API server oc config refresh-ca-bundle --dry-run", "Rename the context 'old-name' to 'new-name' in your kubeconfig file oc config rename-context old-name new-name", "Set the server field on the my-cluster cluster to https://1.2.3.4 oc config set clusters.my-cluster.server https://1.2.3.4 # Set the certificate-authority-data field on the my-cluster cluster oc config set clusters.my-cluster.certificate-authority-data USD(echo \"cert_data_here\" | base64 -i -) # Set the cluster field in the my-context context to my-cluster oc config set contexts.my-context.cluster my-cluster # Set the client-key-data field in the cluster-admin user using --set-raw-bytes option oc config set users.cluster-admin.client-key-data cert_data_here --set-raw-bytes=true", "Set only the server field on the e2e cluster entry without touching other values oc config set-cluster e2e --server=https://1.2.3.4 # Embed certificate authority data for the e2e cluster entry oc config set-cluster e2e --embed-certs --certificate-authority=~/.kube/e2e/kubernetes.ca.crt # Disable cert checking for the e2e cluster entry oc config set-cluster e2e --insecure-skip-tls-verify=true # Set the custom TLS server name to use for validation for the e2e cluster 
entry oc config set-cluster e2e --tls-server-name=my-cluster-name # Set the proxy URL for the e2e cluster entry oc config set-cluster e2e --proxy-url=https://1.2.3.4", "Set the user field on the gce context entry without touching other values oc config set-context gce --user=cluster-admin", "Set only the \"client-key\" field on the \"cluster-admin\" # entry, without touching other values oc config set-credentials cluster-admin --client-key=~/.kube/admin.key # Set basic auth for the \"cluster-admin\" entry oc config set-credentials cluster-admin --username=admin --password=uXFGweU9l35qcif # Embed client certificate data in the \"cluster-admin\" entry oc config set-credentials cluster-admin --client-certificate=~/.kube/admin.crt --embed-certs=true # Enable the Google Compute Platform auth provider for the \"cluster-admin\" entry oc config set-credentials cluster-admin --auth-provider=gcp # Enable the OpenID Connect auth provider for the \"cluster-admin\" entry with additional arguments oc config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-id=foo --auth-provider-arg=client-secret=bar # Remove the \"client-secret\" config value for the OpenID Connect auth provider for the \"cluster-admin\" entry oc config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-secret- # Enable new exec auth plugin for the \"cluster-admin\" entry oc config set-credentials cluster-admin --exec-command=/path/to/the/executable --exec-api-version=client.authentication.k8s.io/v1beta1 # Enable new exec auth plugin for the \"cluster-admin\" entry with interactive mode oc config set-credentials cluster-admin --exec-command=/path/to/the/executable --exec-api-version=client.authentication.k8s.io/v1beta1 --exec-interactive-mode=Never # Define new exec auth plugin arguments for the \"cluster-admin\" entry oc config set-credentials cluster-admin --exec-arg=arg1 --exec-arg=arg2 # Create or update exec auth plugin environment variables for the \"cluster-admin\" entry oc config set-credentials cluster-admin --exec-env=key1=val1 --exec-env=key2=val2 # Remove exec auth plugin environment variables for the \"cluster-admin\" entry oc config set-credentials cluster-admin --exec-env=var-to-remove-", "Unset the current-context oc config unset current-context # Unset namespace in foo context oc config unset contexts.foo.namespace", "Use the context for the minikube cluster oc config use-context minikube", "Show merged kubeconfig settings oc config view # Show merged kubeconfig settings, raw certificate data, and exposed secrets oc config view --raw # Get the password for the e2e user oc config view -o jsonpath='{.users[?(@.name == \"e2e\")].user.password}'", "!!!Important Note!!! # Requires that the 'tar' binary is present in your container # image. If 'tar' is not present, 'oc cp' will fail. # # For advanced use cases, such as symlinks, wildcard expansion or # file mode preservation, consider using 'oc exec'. 
# Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace> tar cf - /tmp/foo | oc exec -i -n <some-namespace> <some-pod> -- tar xf - -C /tmp/bar # Copy /tmp/foo from a remote pod to /tmp/bar locally oc exec -n <some-namespace> <some-pod> -- tar cf - /tmp/foo | tar xf - -C /tmp/bar # Copy /tmp/foo_dir local directory to /tmp/bar_dir in a remote pod in the default namespace oc cp /tmp/foo_dir <some-pod>:/tmp/bar_dir # Copy /tmp/foo local file to /tmp/bar in a remote pod in a specific container oc cp /tmp/foo <some-pod>:/tmp/bar -c <specific-container> # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace> oc cp /tmp/foo <some-namespace>/<some-pod>:/tmp/bar # Copy /tmp/foo from a remote pod to /tmp/bar locally oc cp <some-namespace>/<some-pod>:/tmp/foo /tmp/bar", "Create a pod using the data in pod.json oc create -f ./pod.json # Create a pod based on the JSON passed into stdin cat pod.json | oc create -f - # Edit the data in registry.yaml in JSON then create the resource using the edited data oc create -f registry.yaml --edit -o json", "Create a new build oc create build myapp", "Create a cluster resource quota limited to 10 pods oc create clusterresourcequota limit-bob --project-annotation-selector=openshift.io/requester=user-bob --hard=pods=10", "Create a cluster role named \"pod-reader\" that allows user to perform \"get\", \"watch\" and \"list\" on pods oc create clusterrole pod-reader --verb=get,list,watch --resource=pods # Create a cluster role named \"pod-reader\" with ResourceName specified oc create clusterrole pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod # Create a cluster role named \"foo\" with API Group specified oc create clusterrole foo --verb=get,list,watch --resource=rs.apps # Create a cluster role named \"foo\" with SubResource specified oc create clusterrole foo --verb=get,list,watch --resource=pods,pods/status # Create a cluster role name \"foo\" with NonResourceURL specified oc create clusterrole \"foo\" --verb=get --non-resource-url=/logs/* # Create a cluster role name \"monitoring\" with AggregationRule specified oc create clusterrole monitoring --aggregation-rule=\"rbac.example.com/aggregate-to-monitoring=true\"", "Create a cluster role binding for user1, user2, and group1 using the cluster-admin cluster role oc create clusterrolebinding cluster-admin --clusterrole=cluster-admin --user=user1 --user=user2 --group=group1", "Create a new config map named my-config based on folder bar oc create configmap my-config --from-file=path/to/bar # Create a new config map named my-config with specified keys instead of file basenames on disk oc create configmap my-config --from-file=key1=/path/to/bar/file1.txt --from-file=key2=/path/to/bar/file2.txt # Create a new config map named my-config with key1=config1 and key2=config2 oc create configmap my-config --from-literal=key1=config1 --from-literal=key2=config2 # Create a new config map named my-config from the key=value pairs in the file oc create configmap my-config --from-file=path/to/bar # Create a new config map named my-config from an env file oc create configmap my-config --from-env-file=path/to/foo.env --from-env-file=path/to/bar.env", "Create a cron job oc create cronjob my-job --image=busybox --schedule=\"*/1 * * * *\" # Create a cron job with a command oc create cronjob my-job --image=busybox --schedule=\"*/1 * * * *\" -- date", "Create a deployment named my-dep that runs the busybox image oc create deployment my-dep 
--image=busybox # Create a deployment with a command oc create deployment my-dep --image=busybox -- date # Create a deployment named my-dep that runs the nginx image with 3 replicas oc create deployment my-dep --image=nginx --replicas=3 # Create a deployment named my-dep that runs the busybox image and expose port 5701 oc create deployment my-dep --image=busybox --port=5701 # Create a deployment named my-dep that runs multiple containers oc create deployment my-dep --image=busybox:latest --image=ubuntu:latest --image=nginx", "Create an nginx deployment config named my-nginx oc create deploymentconfig my-nginx --image=nginx", "Create an identity with identity provider \"acme_ldap\" and the identity provider username \"adamjones\" oc create identity acme_ldap:adamjones", "Create a new image stream oc create imagestream mysql", "Create a new image stream tag based on an image in a remote registry oc create imagestreamtag mysql:latest --from-image=myregistry.local/mysql/mysql:5.0", "Create a single ingress called 'simple' that directs requests to foo.com/bar to svc # svc1:8080 with a TLS secret \"my-cert\" oc create ingress simple --rule=\"foo.com/bar=svc1:8080,tls=my-cert\" # Create a catch all ingress of \"/path\" pointing to service svc:port and Ingress Class as \"otheringress\" oc create ingress catch-all --class=otheringress --rule=\"/path=svc:port\" # Create an ingress with two annotations: ingress.annotation1 and ingress.annotations2 oc create ingress annotated --class=default --rule=\"foo.com/bar=svc:port\" --annotation ingress.annotation1=foo --annotation ingress.annotation2=bla # Create an ingress with the same host and multiple paths oc create ingress multipath --class=default --rule=\"foo.com/=svc:port\" --rule=\"foo.com/admin/=svcadmin:portadmin\" # Create an ingress with multiple hosts and the pathType as Prefix oc create ingress ingress1 --class=default --rule=\"foo.com/path*=svc:8080\" --rule=\"bar.com/admin*=svc2:http\" # Create an ingress with TLS enabled using the default ingress certificate and different path types oc create ingress ingtls --class=default --rule=\"foo.com/=svc:https,tls\" --rule=\"foo.com/path/subpath*=othersvc:8080\" # Create an ingress with TLS enabled using a specific secret and pathType as Prefix oc create ingress ingsecret --class=default --rule=\"foo.com/*=svc:8080,tls=secret1\" # Create an ingress with a default backend oc create ingress ingdefault --class=default --default-backend=defaultsvc:http --rule=\"foo.com/*=svc:8080,tls=secret1\"", "Create a job oc create job my-job --image=busybox # Create a job with a command oc create job my-job --image=busybox -- date # Create a job from a cron job named \"a-cronjob\" oc create job test-job --from=cronjob/a-cronjob", "Create a new namespace named my-namespace oc create namespace my-namespace", "Create a pod disruption budget named my-pdb that will select all pods with the app=rails label # and require at least one of them being available at any point in time oc create poddisruptionbudget my-pdb --selector=app=rails --min-available=1 # Create a pod disruption budget named my-pdb that will select all pods with the app=nginx label # and require at least half of the pods selected to be available at any point in time oc create pdb my-pdb --selector=app=nginx --min-available=50%", "Create a priority class named high-priority oc create priorityclass high-priority --value=1000 --description=\"high priority\" # Create a priority class named default-priority that is considered as the global default priority oc 
create priorityclass default-priority --value=1000 --global-default=true --description=\"default priority\" # Create a priority class named high-priority that cannot preempt pods with lower priority oc create priorityclass high-priority --value=1000 --description=\"high priority\" --preemption-policy=\"Never\"", "Create a new resource quota named my-quota oc create quota my-quota --hard=cpu=1,memory=1G,pods=2,services=3,replicationcontrollers=2,resourcequotas=1,secrets=5,persistentvolumeclaims=10 # Create a new resource quota named best-effort oc create quota best-effort --hard=pods=100 --scopes=BestEffort", "Create a role named \"pod-reader\" that allows user to perform \"get\", \"watch\" and \"list\" on pods oc create role pod-reader --verb=get --verb=list --verb=watch --resource=pods # Create a role named \"pod-reader\" with ResourceName specified oc create role pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod # Create a role named \"foo\" with API Group specified oc create role foo --verb=get,list,watch --resource=rs.apps # Create a role named \"foo\" with SubResource specified oc create role foo --verb=get,list,watch --resource=pods,pods/status", "Create a role binding for user1, user2, and group1 using the admin cluster role oc create rolebinding admin --clusterrole=admin --user=user1 --user=user2 --group=group1 # Create a role binding for serviceaccount monitoring:sa-dev using the admin role oc create rolebinding admin-binding --role=admin --serviceaccount=monitoring:sa-dev", "Create an edge route named \"my-route\" that exposes the frontend service oc create route edge my-route --service=frontend # Create an edge route that exposes the frontend service and specify a path # If the route name is omitted, the service name will be used oc create route edge --service=frontend --path /assets", "Create a passthrough route named \"my-route\" that exposes the frontend service oc create route passthrough my-route --service=frontend # Create a passthrough route that exposes the frontend service and specify # a host name. 
If the route name is omitted, the service name will be used oc create route passthrough --service=frontend --hostname=www.example.com", "Create a route named \"my-route\" that exposes the frontend service oc create route reencrypt my-route --service=frontend --dest-ca-cert cert.cert # Create a reencrypt route that exposes the frontend service, letting the # route name default to the service name and the destination CA certificate # default to the service CA oc create route reencrypt --service=frontend", "If you do not already have a .dockercfg file, create a dockercfg secret directly oc create secret docker-registry my-secret --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL # Create a new secret named my-secret from ~/.docker/config.json oc create secret docker-registry my-secret --from-file=.dockerconfigjson=path/to/.docker/config.json", "Create a new secret named my-secret with keys for each file in folder bar oc create secret generic my-secret --from-file=path/to/bar # Create a new secret named my-secret with specified keys instead of names on disk oc create secret generic my-secret --from-file=ssh-privatekey=path/to/id_rsa --from-file=ssh-publickey=path/to/id_rsa.pub # Create a new secret named my-secret with key1=supersecret and key2=topsecret oc create secret generic my-secret --from-literal=key1=supersecret --from-literal=key2=topsecret # Create a new secret named my-secret using a combination of a file and a literal oc create secret generic my-secret --from-file=ssh-privatekey=path/to/id_rsa --from-literal=passphrase=topsecret # Create a new secret named my-secret from env files oc create secret generic my-secret --from-env-file=path/to/foo.env --from-env-file=path/to/bar.env", "Create a new TLS secret named tls-secret with the given key pair oc create secret tls tls-secret --cert=path/to/tls.crt --key=path/to/tls.key", "Create a new ClusterIP service named my-cs oc create service clusterip my-cs --tcp=5678:8080 # Create a new ClusterIP service named my-cs (in headless mode) oc create service clusterip my-cs --clusterip=\"None\"", "Create a new ExternalName service named my-ns oc create service externalname my-ns --external-name bar.com", "Create a new LoadBalancer service named my-lbs oc create service loadbalancer my-lbs --tcp=5678:8080", "Create a new NodePort service named my-ns oc create service nodeport my-ns --tcp=5678:8080", "Create a new service account named my-service-account oc create serviceaccount my-service-account", "Request a token to authenticate to the kube-apiserver as the service account \"myapp\" in the current namespace oc create token myapp # Request a token for a service account in a custom namespace oc create token myapp --namespace myns # Request a token with a custom expiration oc create token myapp --duration 10m # Request a token with a custom audience oc create token myapp --audience https://example.com # Request a token bound to an instance of a Secret object oc create token myapp --bound-object-kind Secret --bound-object-name mysecret # Request a token bound to an instance of a Secret object with a specific UID oc create token myapp --bound-object-kind Secret --bound-object-name mysecret --bound-object-uid 0d4691ed-659b-4935-a832-355f77ee47cc", "Create a user with the username \"ajones\" and the display name \"Adam Jones\" oc create user ajones --full-name=\"Adam Jones\"", "Map the identity \"acme_ldap:adamjones\" to the user \"ajones\" oc create useridentitymapping 
acme_ldap:adamjones ajones", "Start a shell session into a pod using the OpenShift tools image oc debug # Debug a currently running deployment by creating a new pod oc debug deploy/test # Debug a node as an administrator oc debug node/master-1 # Debug a Windows node # Note: the chosen image must match the Windows Server version (2019, 2022) of the node oc debug node/win-worker-1 --image=mcr.microsoft.com/powershell:lts-nanoserver-ltsc2022 # Launch a shell in a pod using the provided image stream tag oc debug istag/mysql:latest -n openshift # Test running a job as a non-root user oc debug job/test --as-user=1000000 # Debug a specific failing container by running the env command in the 'second' container oc debug daemonset/test -c second -- /bin/env # See the pod that would be created to debug oc debug mypod-9xbc -o yaml # Debug a resource but launch the debug pod in another namespace # Note: Not all resources can be debugged using --to-namespace without modification. For example, # volumes and service accounts are namespace-dependent. Add '-o yaml' to output the debug pod definition # to disk. If necessary, edit the definition then run 'oc debug -f -' or run without --to-namespace oc debug mypod-9xbc --to-namespace testns", "Delete a pod using the type and name specified in pod.json oc delete -f ./pod.json # Delete resources from a directory containing kustomization.yaml - e.g. dir/kustomization.yaml oc delete -k dir # Delete resources from all files that end with '.json' oc delete -f '*.json' # Delete a pod based on the type and name in the JSON passed into stdin cat pod.json | oc delete -f - # Delete pods and services with same names \"baz\" and \"foo\" oc delete pod,service baz foo # Delete pods and services with label name=myLabel oc delete pods,services -l name=myLabel # Delete a pod with minimal delay oc delete pod foo --now # Force delete a pod on a dead node oc delete pod foo --force # Delete all pods oc delete pods --all", "Describe a node oc describe nodes kubernetes-node-emt8.c.myproject.internal # Describe a pod oc describe pods/nginx # Describe a pod identified by type and name in \"pod.json\" oc describe -f pod.json # Describe all pods oc describe pods # Describe pods by label name=myLabel oc describe pods -l name=myLabel # Describe all pods managed by the 'frontend' replication controller # (rc-created pods get the name of the rc as a prefix in the pod name) oc describe pods frontend", "Diff resources included in pod.json oc diff -f pod.json # Diff file read from stdin cat service.yaml | oc diff -f -", "Edit the service named 'registry' oc edit svc/registry # Use an alternative editor KUBE_EDITOR=\"nano\" oc edit svc/registry # Edit the job 'myjob' in JSON using the v1 API format oc edit job.v1.batch/myjob -o json # Edit the deployment 'mydeployment' in YAML and save the modified config in its annotation oc edit deployment/mydeployment -o yaml --save-config # Edit the 'status' subresource for the 'mydeployment' deployment oc edit deployment mydeployment --subresource='status'", "List recent events in the default namespace oc events # List recent events in all namespaces oc events --all-namespaces # List recent events for the specified pod, then wait for more events and list them as they arrive oc events --for pod/web-pod-13je7 --watch # List recent events in YAML format oc events -oyaml # List recent only events of type 'Warning' or 'Normal' oc events --types=Warning,Normal", "Get output from running the 'date' command from pod mypod, using the first container by default oc 
exec mypod -- date # Get output from running the 'date' command in ruby-container from pod mypod oc exec mypod -c ruby-container -- date # Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod # and sends stdout/stderr from 'bash' back to the client oc exec mypod -c ruby-container -i -t -- bash -il # List contents of /usr from the first container of pod mypod and sort by modification time # If the command you want to execute in the pod has any flags in common (e.g. -i), # you must use two dashes (--) to separate your command's flags/arguments # Also note, do not surround your command and its flags/arguments with quotes # unless that is how you would execute it normally (i.e., do ls -t /usr, not \"ls -t /usr\") oc exec mypod -i -t -- ls -t /usr # Get output from running 'date' command from the first pod of the deployment mydeployment, using the first container by default oc exec deploy/mydeployment -- date # Get output from running 'date' command from the first pod of the service myservice, using the first container by default oc exec svc/myservice -- date", "Get the documentation of the resource and its fields oc explain pods # Get all the fields in the resource oc explain pods --recursive # Get the explanation for deployment in supported api versions oc explain deployments --api-version=apps/v1 # Get the documentation of a specific field of a resource oc explain pods.spec.containers # Get the documentation of resources in different format oc explain deployment --output=plaintext-openapiv2", "Create a route based on service nginx. The new route will reuse nginx's labels oc expose service nginx # Create a route and specify your own label and route name oc expose service nginx -l name=myroute --name=fromdowntown # Create a route and specify a host name oc expose service nginx --hostname=www.example.com # Create a route with a wildcard oc expose service nginx --hostname=x.example.com --wildcard-policy=Subdomain # This would be equivalent to *.example.com. NOTE: only hosts are matched by the wildcard; subdomains would not be included # Expose a deployment configuration as a service and use the specified port oc expose dc ruby-hello-world --port=8080 # Expose a service as a route in the specified path oc expose service nginx --path=/nginx", "Extract the secret \"test\" to the current directory oc extract secret/test # Extract the config map \"nginx\" to the /tmp directory oc extract configmap/nginx --to=/tmp # Extract the config map \"nginx\" to STDOUT oc extract configmap/nginx --to=- # Extract only the key \"nginx.conf\" from config map \"nginx\" to the /tmp directory oc extract configmap/nginx --to=/tmp --keys=nginx.conf", "List all pods in ps output format oc get pods # List all pods in ps output format with more information (such as node name) oc get pods -o wide # List a single replication controller with specified NAME in ps output format oc get replicationcontroller web # List deployments in JSON output format, in the \"v1\" version of the \"apps\" API group oc get deployments.v1.apps -o json # List a single pod in JSON output format oc get -o json pod web-pod-13je7 # List a pod identified by type and name specified in \"pod.yaml\" in JSON output format oc get -f pod.yaml -o json # List resources from a directory with kustomization.yaml - e.g. 
dir/kustomization.yaml oc get -k dir/ # Return only the phase value of the specified pod oc get -o template pod/web-pod-13je7 --template={{.status.phase}} # List resource information in custom columns oc get pod test-pod -o custom-columns=CONTAINER:.spec.containers[0].name,IMAGE:.spec.containers[0].image # List all replication controllers and services together in ps output format oc get rc,services # List one or more resources by their type and names oc get rc/web service/frontend pods/web-pod-13je7 # List the 'status' subresource for a single pod oc get pod web-pod-13je7 --subresource status", "Starts an auth code flow to the issuer URL with the client ID and the given extra scopes oc get-token --client-id=client-id --issuer-url=test.issuer.url --extra-scopes=email,profile # Starts an auth code flow to the issuer URL with a different callback address oc get-token --client-id=client-id --issuer-url=test.issuer.url --callback-address=127.0.0.1:8343", "Idle the scalable controllers associated with the services listed in to-idle.txt $ oc idle --resource-names-file to-idle.txt", "Remove the entrypoint on the mysql:latest image oc image append --from mysql:latest --to myregistry.com/myimage:latest --image '{\"Entrypoint\":null}' # Add a new layer to the image oc image append --from mysql:latest --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to the image and store the result on disk # This results in $(pwd)/v2/mysql/blobs,manifests oc image append --from mysql:latest --to file://mysql:local layer.tar.gz # Add a new layer to the image and store the result on disk in a designated directory # This will result in $(pwd)/mysql-local/v2/mysql/blobs,manifests oc image append --from mysql:latest --to file://mysql:local --dir mysql-local layer.tar.gz # Add a new layer to an image that is stored on disk (~/mysql-local/v2/image exists) oc image append --from-dir ~/mysql-local --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to an image that was mirrored to the current directory on disk ($(pwd)/v2/image exists) oc image append --from-dir v2 --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to a multi-architecture image for an os/arch that is different from the system's os/arch # Note: The first image in the manifest list that matches the filter will be returned when --keep-manifest-list is not specified oc image append --from docker.io/library/busybox:latest --filter-by-os=linux/s390x --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to a multi-architecture image for all the os/arch manifests when keep-manifest-list is specified oc image append --from docker.io/library/busybox:latest --keep-manifest-list --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to a multi-architecture image for all the os/arch manifests that is specified by the filter, while preserving the manifestlist oc image append --from docker.io/library/busybox:latest --filter-by-os=linux/s390x --keep-manifest-list --to myregistry.com/myimage:latest layer.tar.gz", "Extract the busybox image into the current directory oc image extract docker.io/library/busybox:latest # Extract the busybox image into a designated directory (must exist) oc image extract docker.io/library/busybox:latest --path /:/tmp/busybox # Extract the busybox image into the current directory for linux/s390x platform # Note: Wildcard filter is not supported with extract; pass a single os/arch to extract oc image extract docker.io/library/busybox:latest --filter-by-os=linux/s390x # 
Extract a single file from the image into the current directory oc image extract docker.io/library/centos:7 --path /bin/bash:. # Extract all .repo files from the image's /etc/yum.repos.d/ folder into the current directory oc image extract docker.io/library/centos:7 --path /etc/yum.repos.d/*.repo:. # Extract all .repo files from the image's /etc/yum.repos.d/ folder into a designated directory (must exist) # This results in /tmp/yum.repos.d/*.repo on local system oc image extract docker.io/library/centos:7 --path /etc/yum.repos.d/*.repo:/tmp/yum.repos.d # Extract an image stored on disk into the current directory ($(pwd)/v2/busybox/blobs,manifests exists) # --confirm is required because the current directory is not empty oc image extract file://busybox:local --confirm # Extract an image stored on disk in a directory other than $(pwd)/v2 into the current directory # --confirm is required because the current directory is not empty ($(pwd)/busybox-mirror-dir/v2/busybox exists) oc image extract file://busybox:local --dir busybox-mirror-dir --confirm # Extract an image stored on disk in a directory other than $(pwd)/v2 into a designated directory (must exist) oc image extract file://busybox:local --dir busybox-mirror-dir --path /:/tmp/busybox # Extract the last layer in the image oc image extract docker.io/library/centos:7[-1] # Extract the first three layers of the image oc image extract docker.io/library/centos:7[:3] # Extract the last three layers of the image oc image extract docker.io/library/centos:7[-3:]", "Show information about an image oc image info quay.io/openshift/cli:latest # Show information about images matching a wildcard oc image info quay.io/openshift/cli:4.* # Show information about a file mirrored to disk under DIR oc image info --dir=DIR file://library/busybox:latest # Select which image from a multi-OS image to show oc image info library/busybox:latest --filter-by-os=linux/arm64", "Copy image to another tag oc image mirror myregistry.com/myimage:latest myregistry.com/myimage:stable # Copy image to another registry oc image mirror myregistry.com/myimage:latest docker.io/myrepository/myimage:stable # Copy all tags starting with mysql to the destination repository oc image mirror myregistry.com/myimage:mysql* docker.io/myrepository/myimage # Copy image to disk, creating a directory structure that can be served as a registry oc image mirror myregistry.com/myimage:latest file://myrepository/myimage:latest # Copy image to S3 (pull from <bucket>.s3.amazonaws.com/image:latest) oc image mirror myregistry.com/myimage:latest s3://s3.amazonaws.com/<region>/<bucket>/image:latest # Copy image to S3 without setting a tag (pull via @<digest>) oc image mirror myregistry.com/myimage:latest s3://s3.amazonaws.com/<region>/<bucket>/image # Copy image to multiple locations oc image mirror myregistry.com/myimage:latest docker.io/myrepository/myimage:stable docker.io/myrepository/myimage:dev # Copy multiple images oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test myregistry.com/myimage:new=myregistry.com/other:target # Copy manifest list of a multi-architecture image, even if only a single image is found oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test --keep-manifest-list=true # Copy specific os/arch manifest of a multi-architecture image # Run 'oc image info myregistry.com/myimage:latest' to see available os/arch for multi-arch images # Note that with multi-arch images, this results in a new manifest list digest that includes only the filtered 
manifests oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test --filter-by-os=os/arch # Copy all os/arch manifests of a multi-architecture image # Run 'oc image info myregistry.com/myimage:latest' to see list of os/arch manifests that will be mirrored oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test --keep-manifest-list=true # Note the above command is equivalent to oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test --filter-by-os=.* # Copy specific os/arch manifest of a multi-architecture image # Run 'oc image info myregistry.com/myimage:latest' to see available os/arch for multi-arch images # Note that the target registry may reject a manifest list if the platform specific images do not all exist # You must use a registry with sparse registry support enabled oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test --filter-by-os=linux/386 --keep-manifest-list=true", "Import tag latest into a new image stream oc import-image mystream --from=registry.io/repo/image:latest --confirm # Update imported data for tag latest in an already existing image stream oc import-image mystream # Update imported data for tag stable in an already existing image stream oc import-image mystream:stable # Update imported data for all tags in an existing image stream oc import-image mystream --all # Update imported data for a tag that points to a manifest list to include the full manifest list oc import-image mystream --import-mode=PreserveOriginal # Import all tags into a new image stream oc import-image mystream --from=registry.io/repo/image --all --confirm # Import all tags into a new image stream using a custom timeout oc --request-timeout=5m import-image mystream --from=registry.io/repo/image --all --confirm", "Build the current working directory oc kustomize # Build some shared configuration directory oc kustomize /home/config/production # Build from github oc kustomize https://github.com/kubernetes-sigs/kustomize.git/examples/helloWorld?ref=v1.0.6", "Update pod 'foo' with the label 'unhealthy' and the value 'true' oc label pods foo unhealthy=true # Update pod 'foo' with the label 'status' and the value 'unhealthy', overwriting any existing value oc label --overwrite pods foo status=unhealthy # Update all pods in the namespace oc label pods --all status=unhealthy # Update a pod identified by the type and name in \"pod.json\" oc label -f pod.json status=unhealthy # Update pod 'foo' only if the resource is unchanged from version 1 oc label pods foo status=unhealthy --resource-version=1 # Update pod 'foo' by removing a label named 'bar' if it exists # Does not require the --overwrite flag oc label pods foo bar-", "Log in interactively oc login --username=myuser # Log in to the given server with the given certificate authority file oc login localhost:8443 --certificate-authority=/path/to/cert.crt # Log in to the given server with the given credentials (will not prompt interactively) oc login localhost:8443 --username=myuser --password=mypass # Log in to the given server through a browser oc login localhost:8443 --web --callback-port 8280 # Log in to the external OIDC issuer through Auth Code + PKCE by starting a local server listening on port 8080 oc login localhost:8443 --exec-plugin=oc-oidc --client-id=client-id --extra-scopes=email,profile --callback-port=8080", "Log out oc logout", "Start streaming the logs of the most recent build of the openldap build config oc logs -f bc/openldap # Start streaming the logs of the latest 
deployment of the mysql deployment config oc logs -f dc/mysql # Get the logs of the first deployment for the mysql deployment config. Note that logs # from older deployments may not exist either because the deployment was successful # or due to deployment pruning or manual deletion of the deployment oc logs --version=1 dc/mysql # Return a snapshot of ruby-container logs from pod backend oc logs backend -c ruby-container # Start streaming of ruby-container logs from pod backend oc logs -f pod/backend -c ruby-container", "List all local templates and image streams that can be used to create an app oc new-app --list # Create an application based on the source code in the current git repository (with a public remote) and a container image oc new-app . --image=registry/repo/langimage # Create an application myapp with Docker based build strategy expecting binary input oc new-app --strategy=docker --binary --name myapp # Create a Ruby application based on the provided [image]~[source code] combination oc new-app centos/ruby-25-centos7~https://github.com/sclorg/ruby-ex.git # Use the public container registry MySQL image to create an app. Generated artifacts will be labeled with db=mysql oc new-app mysql MYSQL_USER=user MYSQL_PASSWORD=pass MYSQL_DATABASE=testdb -l db=mysql # Use a MySQL image in a private registry to create an app and override application artifacts' names oc new-app --image=myregistry.com/mycompany/mysql --name=private # Use an image with the full manifest list to create an app and override application artifacts' names oc new-app --image=myregistry.com/mycompany/image --name=private --import-mode=PreserveOriginal # Create an application from a remote repository using its beta4 branch oc new-app https://github.com/openshift/ruby-hello-world#beta4 # Create an application based on a stored template, explicitly setting a parameter value oc new-app --template=ruby-helloworld-sample --param=MYSQL_USER=admin # Create an application from a remote repository and specify a context directory oc new-app https://github.com/youruser/yourgitrepo --context-dir=src/build # Create an application from a remote private repository and specify which existing secret to use oc new-app https://github.com/youruser/yourgitrepo --source-secret=yoursecret # Create an application based on a template file, explicitly setting a parameter value oc new-app --file=./example/myapp/template.json --param=MYSQL_USER=admin # Search all templates, image streams, and container images for the ones that match \"ruby\" oc new-app --search ruby # Search for \"ruby\", but only in stored templates (--template, --image-stream and --image # can be used to filter search results) oc new-app --search --template=ruby # Search for \"ruby\" in stored templates and print the output as YAML oc new-app --search --template=ruby --output=yaml", "Create a build config based on the source code in the current git repository (with a public # remote) and a container image oc new-build . 
--image=repo/langimage # Create a NodeJS build config based on the provided [image]~[source code] combination oc new-build centos/nodejs-8-centos7~https://github.com/sclorg/nodejs-ex.git # Create a build config from a remote repository using its beta2 branch oc new-build https://github.com/openshift/ruby-hello-world#beta2 # Create a build config using a Dockerfile specified as an argument oc new-build -D $'FROM centos:7\\nRUN yum install -y httpd' # Create a build config from a remote repository and add custom environment variables oc new-build https://github.com/openshift/ruby-hello-world -e RACK_ENV=development # Create a build config from a remote private repository and specify which existing secret to use oc new-build https://github.com/youruser/yourgitrepo --source-secret=yoursecret # Create a build config using an image with the full manifest list to create an app and override application artifacts' names oc new-build --image=myregistry.com/mycompany/image --name=private --import-mode=PreserveOriginal # Create a build config from a remote repository and inject the npmrc into a build oc new-build https://github.com/openshift/ruby-hello-world --build-secret npmrc:.npmrc # Create a build config from a remote repository and inject environment data into a build oc new-build https://github.com/openshift/ruby-hello-world --build-config-map env:config # Create a build config that gets its input from a remote repository and another container image oc new-build https://github.com/openshift/ruby-hello-world --source-image=openshift/jenkins-1-centos7 --source-image-path=/var/lib/jenkins:tmp", "Create a new project with minimal information oc new-project web-team-dev # Create a new project with a display name and description oc new-project web-team-dev --display-name=\"Web Team Development\" --description=\"Development project for the web team.\"", "Observe changes to services oc observe services # Observe changes to services, including the clusterIP and invoke a script for each oc observe services --template '{ .spec.clusterIP }' -- register_dns.sh # Observe changes to services filtered by a label selector oc observe services -l regist-dns=true --template '{ .spec.clusterIP }' -- register_dns.sh", "Partially update a node using a strategic merge patch, specifying the patch as JSON oc patch node k8s-node-1 -p '{\"spec\":{\"unschedulable\":true}}' # Partially update a node using a strategic merge patch, specifying the patch as YAML oc patch node k8s-node-1 -p $'spec:\\n unschedulable: true' # Partially update a node identified by the type and name specified in \"node.json\" using strategic merge patch oc patch -f node.json -p '{\"spec\":{\"unschedulable\":true}}' # Update a container's image; spec.containers[*].name is required because it's a merge key oc patch pod valid-pod -p '{\"spec\":{\"containers\":[{\"name\":\"kubernetes-serve-hostname\",\"image\":\"new image\"}]}}' # Update a container's image using a JSON patch with positional arrays oc patch pod valid-pod --type='json' -p='[{\"op\": \"replace\", \"path\": \"/spec/containers/0/image\", \"value\":\"new image\"}]' # Update a deployment's replicas through the 'scale' subresource using a merge patch oc patch deployment nginx-deployment --subresource='scale' --type='merge' -p '{\"spec\":{\"replicas\":2}}'", "List all available plugins oc plugin list", "Add the 'view' role to user1 for the current project oc policy add-role-to-user view user1 # Add the 'edit' role to serviceaccount1 for the current project oc policy add-role-to-user edit -z 
serviceaccount1", "Check whether service accounts sa1 and sa2 can admit a pod with a template pod spec specified in my_resource.yaml # Service Account specified in myresource.yaml file is ignored oc policy scc-review -z sa1,sa2 -f my_resource.yaml # Check whether service accounts system:serviceaccount:bob:default can admit a pod with a template pod spec specified in my_resource.yaml oc policy scc-review -z system:serviceaccount:bob:default -f my_resource.yaml # Check whether the service account specified in my_resource_with_sa.yaml can admit the pod oc policy scc-review -f my_resource_with_sa.yaml # Check whether the default service account can admit the pod; default is taken since no service account is defined in myresource_with_no_sa.yaml oc policy scc-review -f myresource_with_no_sa.yaml", "Check whether user bob can create a pod specified in myresource.yaml oc policy scc-subject-review -u bob -f myresource.yaml # Check whether user bob who belongs to projectAdmin group can create a pod specified in myresource.yaml oc policy scc-subject-review -u bob -g projectAdmin -f myresource.yaml # Check whether a service account specified in the pod template spec in myresourcewithsa.yaml can create the pod oc policy scc-subject-review -f myresourcewithsa.yaml", "Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in the pod oc port-forward pod/mypod 5000 6000 # Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in a pod selected by the deployment oc port-forward deployment/mydeployment 5000 6000 # Listen on port 8443 locally, forwarding to the targetPort of the service's port named \"https\" in a pod selected by the service oc port-forward service/myservice 8443:https # Listen on port 8888 locally, forwarding to 5000 in the pod oc port-forward pod/mypod 8888:5000 # Listen on port 8888 on all addresses, forwarding to 5000 in the pod oc port-forward --address 0.0.0.0 pod/mypod 8888:5000 # Listen on port 8888 on localhost and selected IP, forwarding to 5000 in the pod oc port-forward --address localhost,10.19.21.23 pod/mypod 8888:5000 # Listen on a random port locally, forwarding to 5000 in the pod oc port-forward pod/mypod :5000", "Convert the template.json file into a resource list and pass to create oc process -f template.json | oc create -f - # Process a file locally instead of contacting the server oc process -f template.json --local -o yaml # Process template while passing a user-defined label oc process -f template.json -l name=mytemplate # Convert a stored template into a resource list oc process foo # Convert a stored template into a resource list by setting/overriding parameter values oc process foo PARM1=VALUE1 PARM2=VALUE2 # Convert a template stored in different namespace into a resource list oc process openshift//foo # Convert template.json into a resource list cat template.json | oc process -f -", "Switch to the 'myapp' project oc project myapp # Display the project currently in use oc project", "List all projects oc projects", "To proxy all of the Kubernetes API and nothing else oc proxy --api-prefix=/ # To proxy only part of the Kubernetes API and also some static files # You can get pods info with 'curl localhost:8001/api/v1/pods' oc proxy --www=/my/files --www-prefix=/static/ --api-prefix=/api/ # To proxy the entire Kubernetes API at a different root # You can get pods info with 'curl localhost:8001/custom/api/v1/pods' oc proxy --api-prefix=/custom/ # Run a proxy to the Kubernetes API server on port 8011, serving static 
content from ./local/www/ oc proxy --port=8011 --www=./local/www/ # Run a proxy to the Kubernetes API server on an arbitrary local port # The chosen port for the server will be output to stdout oc proxy --port=0 # Run a proxy to the Kubernetes API server, changing the API prefix to k8s-api # This makes e.g. the pods API available at localhost:8001/k8s-api/v1/pods/ oc proxy --api-prefix=/k8s-api", "Log in to the integrated registry oc registry login # Log in to different registry using BASIC auth credentials oc registry login --registry quay.io/myregistry --auth-basic=USER:PASS", "Replace a pod using the data in pod.json oc replace -f ./pod.json # Replace a pod based on the JSON passed into stdin cat pod.json | oc replace -f - # Update a single-container pod's image version (tag) to v4 oc get pod mypod -o yaml | sed 's/\\(image: myimage\\):.*USD/\\1:v4/' | oc replace -f - # Force replace, delete and then re-create the resource oc replace --force -f ./pod.json", "Perform a rollback to the last successfully completed deployment for a deployment config oc rollback frontend # See what a rollback to version 3 will look like, but do not perform the rollback oc rollback frontend --to-version=3 --dry-run # Perform a rollback to a specific deployment oc rollback frontend-2 # Perform the rollback manually by piping the JSON of the new config back to oc oc rollback frontend -o json | oc replace dc/frontend -f - # Print the updated deployment configuration in JSON format instead of performing the rollback oc rollback frontend -o json", "Cancel the in-progress deployment based on 'nginx' oc rollout cancel dc/nginx", "View the rollout history of a deployment oc rollout history dc/nginx # View the details of deployment revision 3 oc rollout history dc/nginx --revision=3", "Start a new rollout based on the latest images defined in the image change triggers oc rollout latest dc/nginx # Print the rolled out deployment config oc rollout latest dc/nginx -o json", "Mark the nginx deployment as paused. Any current state of # the deployment will continue its function, new updates to the deployment will not # have an effect as long as the deployment is paused oc rollout pause dc/nginx", "Restart all deployments in test-namespace namespace oc rollout restart deployment -n test-namespace # Restart a deployment oc rollout restart deployment/nginx # Restart a daemon set oc rollout restart daemonset/abc # Restart deployments with the app=nginx label oc rollout restart deployment --selector=app=nginx", "Resume an already paused deployment oc rollout resume dc/nginx", "Retry the latest failed deployment based on 'frontend' # The deployer pod and any hook pods are deleted for the latest failed deployment oc rollout retry dc/frontend", "Watch the status of the latest rollout oc rollout status dc/nginx", "Roll back to the previous deployment oc rollout undo dc/nginx # Roll back to deployment revision 3. 
The replication controller for that version must exist oc rollout undo dc/nginx --to-revision=3", "Open a shell session on the first container in pod 'foo' oc rsh foo # Open a shell session on the first container in pod 'foo' and namespace 'bar' # (Note that oc client specific arguments must come before the resource name and its arguments) oc rsh -n bar foo # Run the command 'cat /etc/resolv.conf' inside pod 'foo' oc rsh foo cat /etc/resolv.conf # See the configuration of your internal registry oc rsh dc/docker-registry cat config.yml # Open a shell session on the container named 'index' inside a pod of your job oc rsh -c index job/scheduled", "Synchronize a local directory with a pod directory oc rsync ./local/dir/ POD:/remote/dir # Synchronize a pod directory with a local directory oc rsync POD:/remote/dir/ ./local/dir", "Start a nginx pod oc run nginx --image=nginx # Start a hazelcast pod and let the container expose port 5701 oc run hazelcast --image=hazelcast/hazelcast --port=5701 # Start a hazelcast pod and set environment variables \"DNS_DOMAIN=cluster\" and \"POD_NAMESPACE=default\" in the container oc run hazelcast --image=hazelcast/hazelcast --env=\"DNS_DOMAIN=cluster\" --env=\"POD_NAMESPACE=default\" # Start a hazelcast pod and set labels \"app=hazelcast\" and \"env=prod\" in the container oc run hazelcast --image=hazelcast/hazelcast --labels=\"app=hazelcast,env=prod\" # Dry run; print the corresponding API objects without creating them oc run nginx --image=nginx --dry-run=client # Start a nginx pod, but overload the spec with a partial set of values parsed from JSON oc run nginx --image=nginx --overrides='{ \"apiVersion\": \"v1\", \"spec\": { ... } }' # Start a busybox pod and keep it in the foreground, don't restart it if it exits oc run -i -t busybox --image=busybox --restart=Never # Start the nginx pod using the default command, but use custom arguments (arg1 .. argN) for that command oc run nginx --image=nginx -- <arg1> <arg2> ... <argN> # Start the nginx pod using a different command and custom arguments oc run nginx --image=nginx --command -- <cmd> <arg1> ... 
<argN>", "Scale a replica set named 'foo' to 3 oc scale --replicas=3 rs/foo # Scale a resource identified by type and name specified in \"foo.yaml\" to 3 oc scale --replicas=3 -f foo.yaml # If the deployment named mysql's current size is 2, scale mysql to 3 oc scale --current-replicas=2 --replicas=3 deployment/mysql # Scale multiple replication controllers oc scale --replicas=5 rc/example1 rc/example2 rc/example3 # Scale stateful set named 'web' to 3 oc scale --replicas=3 statefulset/web", "Add an image pull secret to a service account to automatically use it for pulling pod images oc secrets link serviceaccount-name pull-secret --for=pull # Add an image pull secret to a service account to automatically use it for both pulling and pushing build images oc secrets link builder builder-image-secret --for=pull,mount", "Unlink a secret currently associated with a service account oc secrets unlink serviceaccount-name secret-name another-secret-name", "Clear post-commit hook on a build config oc set build-hook bc/mybuild --post-commit --remove # Set the post-commit hook to execute a test suite using a new entrypoint oc set build-hook bc/mybuild --post-commit --command -- /bin/bash -c /var/lib/test-image.sh # Set the post-commit hook to execute a shell script oc set build-hook bc/mybuild --post-commit --script=\"/var/lib/test-image.sh param1 param2 && /var/lib/done.sh\"", "Clear the push secret on a build config oc set build-secret --push --remove bc/mybuild # Set the pull secret on a build config oc set build-secret --pull bc/mybuild mysecret # Set the push and pull secret on a build config oc set build-secret --push --pull bc/mybuild mysecret # Set the source secret on a set of build configs matching a selector oc set build-secret --source -l app=myapp gitsecret", "Set the 'password' key of a secret oc set data secret/foo password=this_is_secret # Remove the 'password' key from a secret oc set data secret/foo password- # Update the 'haproxy.conf' key of a config map from a file on disk oc set data configmap/bar --from-file=../haproxy.conf # Update a secret with the contents of a directory, one key per file oc set data secret/foo --from-file=secret-dir", "Clear pre and post hooks on a deployment config oc set deployment-hook dc/myapp --remove --pre --post # Set the pre deployment hook to execute a db migration command for an application # using the data volume from the application oc set deployment-hook dc/myapp --pre --volumes=data -- /var/lib/migrate-db.sh # Set a mid deployment hook along with additional environment variables oc set deployment-hook dc/myapp --mid --volumes=data -e VAR1=value1 -e VAR2=value2 -- /var/lib/prepare-deploy.sh", "Update deployment config 'myapp' with a new environment variable oc set env dc/myapp STORAGE_DIR=/local # List the environment variables defined on a build config 'sample-build' oc set env bc/sample-build --list # List the environment variables defined on all pods oc set env pods --all --list # Output modified build config in YAML oc set env bc/sample-build STORAGE_DIR=/data -o yaml # Update all containers in all replication controllers in the project to have ENV=prod oc set env rc --all ENV=prod # Import environment from a secret oc set env --from=secret/mysecret dc/myapp # Import environment from a config map with a prefix oc set env --from=configmap/myconfigmap --prefix=MYSQL_ dc/myapp # Remove the environment variable ENV from container 'c1' in all deployment configs oc set env dc --all --containers=\"c1\" ENV- # Remove the environment variable ENV from 
a deployment config definition on disk and # update the deployment config on the server oc set env -f dc.json ENV- # Set some of the local shell environment into a deployment config on the server oc set env | grep RAILS_ | oc env -e - dc/myapp", "Set a deployment config's nginx container image to 'nginx:1.9.1', and its busybox container image to 'busybox'. oc set image dc/nginx busybox=busybox nginx=nginx:1.9.1 # Set a deployment config's app container image to the image referenced by the imagestream tag 'openshift/ruby:2.3'. oc set image dc/myapp app=openshift/ruby:2.3 --source=imagestreamtag # Update all deployments' and rc's nginx container's image to 'nginx:1.9.1' oc set image deployments,rc nginx=nginx:1.9.1 --all # Update image of all containers of daemonset abc to 'nginx:1.9.1' oc set image daemonset abc *=nginx:1.9.1 # Print result (in YAML format) of updating nginx container image from local file, without hitting the server oc set image -f path/to/file.yaml nginx=nginx:1.9.1 --local -o yaml", "Print all of the image streams and whether they resolve local names oc set image-lookup # Use local name lookup on image stream mysql oc set image-lookup mysql # Force a deployment to use local name lookup oc set image-lookup deploy/mysql # Show the current status of the deployment lookup oc set image-lookup deploy/mysql --list # Disable local name lookup on image stream mysql oc set image-lookup mysql --enabled=false # Set local name lookup on all image streams oc set image-lookup --all", "Clear both readiness and liveness probes off all containers oc set probe dc/myapp --remove --readiness --liveness # Set an exec action as a liveness probe to run 'echo ok' oc set probe dc/myapp --liveness -- echo ok # Set a readiness probe to try to open a TCP socket on 3306 oc set probe rc/mysql --readiness --open-tcp=3306 # Set an HTTP startup probe for port 8080 and path /healthz over HTTP on the pod IP oc set probe dc/webapp --startup --get-url=http://:8080/healthz # Set an HTTP readiness probe for port 8080 and path /healthz over HTTP on the pod IP oc set probe dc/webapp --readiness --get-url=http://:8080/healthz # Set an HTTP readiness probe over HTTPS on 127.0.0.1 for a hostNetwork pod oc set probe dc/router --readiness --get-url=https://127.0.0.1:1936/stats # Set only the initial-delay-seconds field on all deployments oc set probe dc --all --readiness --initial-delay-seconds=30", "Set a deployment's nginx container CPU limits to \"200m\" and memory to \"512Mi\" oc set resources deployment nginx -c=nginx --limits=cpu=200m,memory=512Mi # Set the resource request and limits for all containers in nginx oc set resources deployment nginx --limits=cpu=200m,memory=512Mi --requests=cpu=100m,memory=256Mi # Remove the resource requests for resources on containers in nginx oc set resources deployment nginx --limits=cpu=0,memory=0 --requests=cpu=0,memory=0 # Print the result (in YAML format) of updating nginx container limits locally, without hitting the server oc set resources -f path/to/file.yaml --limits=cpu=200m,memory=512Mi --local -o yaml", "Print the backends on the route 'web' oc set route-backends web # Set two backend services on route 'web' with 2/3rds of traffic going to 'a' oc set route-backends web a=2 b=1 # Increase the traffic percentage going to b by 10%% relative to a oc set route-backends web --adjust b=+10%% # Set traffic percentage going to b to 10%% of the traffic going to a oc set route-backends web --adjust b=10%% # Set weight of b to 10 oc set route-backends web --adjust b=10 # Set the 
weight to all backends to zero oc set route-backends web --zero", "Set the labels and selector before creating a deployment/service pair. oc create service clusterip my-svc --clusterip=\"None\" -o yaml --dry-run | oc set selector --local -f - 'environment=qa' -o yaml | oc create -f - oc create deployment my-dep -o yaml --dry-run | oc label --local -f - environment=qa -o yaml | oc create -f -", "Set deployment nginx-deployment's service account to serviceaccount1 oc set serviceaccount deployment nginx-deployment serviceaccount1 # Print the result (in YAML format) of updated nginx deployment with service account from a local file, without hitting the API server oc set sa -f nginx-deployment.yaml serviceaccount1 --local --dry-run -o yaml", "Update a cluster role binding for serviceaccount1 oc set subject clusterrolebinding admin --serviceaccount=namespace:serviceaccount1 # Update a role binding for user1, user2, and group1 oc set subject rolebinding admin --user=user1 --user=user2 --group=group1 # Print the result (in YAML format) of updating role binding subjects locally, without hitting the server oc create rolebinding admin --role=admin --user=admin -o yaml --dry-run | oc set subject --local -f - --user=foo -o yaml", "Print the triggers on the deployment config 'myapp' oc set triggers dc/myapp # Set all triggers to manual oc set triggers dc/myapp --manual # Enable all automatic triggers oc set triggers dc/myapp --auto # Reset the GitHub webhook on a build to a new, generated secret oc set triggers bc/webapp --from-github oc set triggers bc/webapp --from-webhook # Remove all triggers oc set triggers bc/webapp --remove-all # Stop triggering on config change oc set triggers dc/myapp --from-config --remove # Add an image trigger to a build config oc set triggers bc/webapp --from-image=namespace1/image:latest # Add an image trigger to a stateful set on the main container oc set triggers statefulset/db --from-image=namespace1/image:latest -c main", "List volumes defined on all deployment configs in the current project oc set volume dc --all # Add a new empty dir volume to deployment config (dc) 'myapp' mounted under # /var/lib/myapp oc set volume dc/myapp --add --mount-path=/var/lib/myapp # Use an existing persistent volume claim (PVC) to overwrite an existing volume 'v1' oc set volume dc/myapp --add --name=v1 -t pvc --claim-name=pvc1 --overwrite # Remove volume 'v1' from deployment config 'myapp' oc set volume dc/myapp --remove --name=v1 # Create a new persistent volume claim that overwrites an existing volume 'v1' oc set volume dc/myapp --add --name=v1 -t pvc --claim-size=1G --overwrite # Change the mount point for volume 'v1' to /data oc set volume dc/myapp --add --name=v1 -m /data --overwrite # Modify the deployment config by removing volume mount \"v1\" from container \"c1\" # (and by removing the volume \"v1\" if no other containers have volume mounts that reference it) oc set volume dc/myapp --remove --name=v1 --containers=c1 # Add new volume based on a more complex volume source (AWS EBS, GCE PD, # Ceph, Gluster, NFS, ISCSI, ...) 
oc set volume dc/myapp --add -m /data --source=<json-string>", "Starts build from build config \"hello-world\" oc start-build hello-world # Starts build from a previous build \"hello-world-1\" oc start-build --from-build=hello-world-1 # Use the contents of a directory as build input oc start-build hello-world --from-dir=src/ # Send the contents of a Git repository to the server from tag 'v2' oc start-build hello-world --from-repo=../hello-world --commit=v2 # Start a new build for build config \"hello-world\" and watch the logs until the build # completes or fails oc start-build hello-world --follow # Start a new build for build config \"hello-world\" and wait until the build completes. It # exits with a non-zero return code if the build fails oc start-build hello-world --wait", "See an overview of the current project oc status # Export the overview of the current project in an svg file oc status -o dot | dot -T svg -o project.svg # See an overview of the current project including details for any identified issues oc status --suggest", "Tag the current image for the image stream 'openshift/ruby' and tag '2.0' into the image stream 'yourproject/ruby' with tag 'tip' oc tag openshift/ruby:2.0 yourproject/ruby:tip # Tag a specific image oc tag openshift/ruby@sha256:6b646fa6bf5e5e4c7fa41056c27910e679c03ebe7f93e361e6515a9da7e258cc yourproject/ruby:tip # Tag an external container image oc tag --source=docker openshift/origin-control-plane:latest yourproject/ruby:tip # Tag an external container image and request pullthrough for it oc tag --source=docker openshift/origin-control-plane:latest yourproject/ruby:tip --reference-policy=local # Tag an external container image and include the full manifest list oc tag --source=docker openshift/origin-control-plane:latest yourproject/ruby:tip --import-mode=PreserveOriginal # Remove the specified spec tag from an image stream oc tag openshift/origin-control-plane:latest -d", "Print the OpenShift client, kube-apiserver, and openshift-apiserver version information for the current context oc version # Print the OpenShift client, kube-apiserver, and openshift-apiserver version numbers for the current context in JSON format oc version --output json # Print the OpenShift client version information for the current context oc version --client", "Wait for the pod \"busybox1\" to contain the status condition of type \"Ready\" oc wait --for=condition=Ready pod/busybox1 # The default value of status condition is true; you can wait for other targets after an equal delimiter (compared after Unicode simple case folding, which is a more general form of case-insensitivity) oc wait --for=condition=Ready=false pod/busybox1 # Wait for the pod \"busybox1\" to contain the status phase to be \"Running\" oc wait --for=jsonpath='{.status.phase}'=Running pod/busybox1 # Wait for pod \"busybox1\" to be Ready oc wait --for='jsonpath={.status.conditions[?(@.type==\"Ready\")].status}=True' pod/busybox1 # Wait for the service \"loadbalancer\" to have ingress. 
oc wait --for=jsonpath='{.status.loadBalancer.ingress}' service/loadbalancer # Wait for the pod \"busybox1\" to be deleted, with a timeout of 60s, after having issued the \"delete\" command oc delete pod/busybox1 oc wait --for=delete pod/busybox1 --timeout=60s", "Display the currently authenticated user oc whoami", "Build the dependency tree for the 'latest' tag in <image-stream> oc adm build-chain <image-stream> # Build the dependency tree for the 'v2' tag in dot format and visualize it via the dot utility oc adm build-chain <image-stream>:v2 -o dot | dot -T svg -o deps.svg # Build the dependency tree across all namespaces for the specified image stream tag found in the 'test' namespace oc adm build-chain <image-stream> -n test --all", "Mirror an operator-registry image and its contents to a registry oc adm catalog mirror quay.io/my/image:latest myregistry.com # Mirror an operator-registry image and its contents to a particular namespace in a registry oc adm catalog mirror quay.io/my/image:latest myregistry.com/my-namespace # Mirror to an airgapped registry by first mirroring to files oc adm catalog mirror quay.io/my/image:latest file:///local/index oc adm catalog mirror file:///local/index/my/image:latest my-airgapped-registry.com # Configure a cluster to use a mirrored registry oc apply -f manifests/imageDigestMirrorSet.yaml # Edit the mirroring mappings and mirror with \"oc image mirror\" manually oc adm catalog mirror --manifests-only quay.io/my/image:latest myregistry.com oc image mirror -f manifests/mapping.txt # Delete all ImageDigestMirrorSets generated by oc adm catalog mirror oc delete imagedigestmirrorset -l operators.openshift.org/catalog=true", "Approve CSR 'csr-sqgzp' oc adm certificate approve csr-sqgzp", "Deny CSR 'csr-sqgzp' oc adm certificate deny csr-sqgzp", "Copy a new bootstrap kubeconfig file to node-0 oc adm copy-to-node --copy=new-bootstrap-kubeconfig=/etc/kubernetes/kubeconfig node/node-0", "Mark node \"foo\" as unschedulable oc adm cordon foo", "Output a bootstrap project template in YAML format to stdout oc adm create-bootstrap-project-template -o yaml", "Output a template for the error page to stdout oc adm create-error-template", "Output a template for the login page to stdout oc adm create-login-template", "Output a template for the provider selection page to stdout oc adm create-provider-selection-template", "Drain node \"foo\", even if there are pods not managed by a replication controller, replica set, job, daemon set, or stateful set on it oc adm drain foo --force # As above, but abort if there are pods not managed by a replication controller, replica set, job, daemon set, or stateful set, and use a grace period of 15 minutes oc adm drain foo --grace-period=900", "Add user1 and user2 to my-group oc adm groups add-users my-group user1 user2", "Add a group with no users oc adm groups new my-group # Add a group with two users oc adm groups new my-group user1 user2 # Add a group with one user and shorter output oc adm groups new my-group user1 -o name", "Prune all orphaned groups oc adm groups prune --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups except the ones from the denylist file oc adm groups prune --blacklist=/path/to/denylist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in an allowlist file oc adm groups prune --whitelist=/path/to/allowlist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a 
list of specific groups specified in a list oc adm groups prune groups/group_name groups/other_name --sync-config=/path/to/ldap-sync-config.yaml --confirm", "Remove user1 and user2 from my-group oc adm groups remove-users my-group user1 user2", "Sync all groups with an LDAP server oc adm groups sync --sync-config=/path/to/ldap-sync-config.yaml --confirm # Sync all groups except the ones from the blacklist file with an LDAP server oc adm groups sync --blacklist=/path/to/blacklist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Sync specific groups specified in an allowlist file with an LDAP server oc adm groups sync --whitelist=/path/to/allowlist.txt --sync-config=/path/to/sync-config.yaml --confirm # Sync all OpenShift groups that have been synced previously with an LDAP server oc adm groups sync --type=openshift --sync-config=/path/to/ldap-sync-config.yaml --confirm # Sync specific OpenShift groups if they have been synced previously with an LDAP server oc adm groups sync groups/group1 groups/group2 groups/group3 --sync-config=/path/to/sync-config.yaml --confirm", "Collect debugging data for the \"openshift-apiserver\" clusteroperator oc adm inspect clusteroperator/openshift-apiserver # Collect debugging data for the \"openshift-apiserver\" and \"kube-apiserver\" clusteroperators oc adm inspect clusteroperator/openshift-apiserver clusteroperator/kube-apiserver # Collect debugging data for all clusteroperators oc adm inspect clusteroperator # Collect debugging data for all clusteroperators and clusterversions oc adm inspect clusteroperators,clusterversions", "Update the imagecontentsourcepolicy.yaml file to a new imagedigestmirrorset file under the mydir directory oc adm migrate icsp imagecontentsourcepolicy.yaml --dest-dir mydir", "Perform a dry-run of updating all objects oc adm migrate template-instances # To actually perform the update, the confirm flag must be appended oc adm migrate template-instances --confirm", "Gather information using the default plug-in image and command, writing into ./must-gather.local.<rand> oc adm must-gather # Gather information with a specific local folder to copy to oc adm must-gather --dest-dir=/local/directory # Gather audit information oc adm must-gather -- /usr/bin/gather_audit_logs # Gather information using multiple plug-in images oc adm must-gather --image=quay.io/kubevirt/must-gather --image=quay.io/openshift/origin-must-gather # Gather information using a specific image stream plug-in oc adm must-gather --image-stream=openshift/must-gather:latest # Gather information using a specific image, command, and pod directory oc adm must-gather --image=my/image:tag --source-dir=/pod/directory -- myspecial-command.sh", "Create a new project using a node selector oc adm new-project myproject --node-selector='type=user-node,region=east'", "Create the ISO image and download it in the current folder oc adm node-image create # Use a different assets folder oc adm node-image create --dir=/tmp/assets # Specify a custom image name oc adm node-image create -o=my-node.iso # Create an ISO to add a single node without using the configuration file oc adm node-image create --mac-address=00:d8:e7:c7:4b:bb # Create an ISO to add a single node with a root device hint and without # using the configuration file oc adm node-image create --mac-address=00:d8:e7:c7:4b:bb --root-device-hint=deviceName:/dev/sda", "Monitor a single node being added to a cluster oc adm node-image monitor --ip-addresses 192.168.111.83 # Monitor multiple nodes being added to a cluster by 
separating each IP address with a comma oc adm node-image monitor --ip-addresses 192.168.111.83,192.168.111.84", "Show kubelet logs from all control plane nodes oc adm node-logs --role master -u kubelet # See what logs are available in control plane nodes in /var/log oc adm node-logs --role master --path=/ # Display cron log file from all control plane nodes oc adm node-logs --role master --path=cron", "Watch platform certificates oc adm ocp-certificates monitor-certificates", "Regenerate a leaf certificate contained in a particular secret oc adm ocp-certificates regenerate-leaf -n openshift-config-managed secret/kube-controller-manager-client-cert-key", "Regenerate the MCO certs without modifying user-data secrets oc adm ocp-certificates regenerate-machine-config-server-serving-cert --update-ignition=false # Update the user-data secrets to use new MCS certs oc adm ocp-certificates update-ignition-ca-bundle-for-machine-config-server", "Regenerate the signing certificate contained in a particular secret oc adm ocp-certificates regenerate-top-level -n openshift-kube-apiserver-operator secret/loadbalancer-serving-signer-key", "Remove a trust bundle contained in a particular config map oc adm ocp-certificates remove-old-trust -n openshift-config-managed configmaps/kube-apiserver-aggregator-client-ca --created-before 2023-06-05T14:44:06Z # Remove only CA certificates created before a certain date from all trust bundles oc adm ocp-certificates remove-old-trust configmaps -A --all --created-before 2023-06-05T14:44:06Z", "Regenerate the MCO certs without modifying user-data secrets oc adm ocp-certificates regenerate-machine-config-server-serving-cert --update-ignition=false # Update the user-data secrets to use new MCS certs oc adm ocp-certificates update-ignition-ca-bundle-for-machine-config-server", "Provide isolation for project p1 oc adm pod-network isolate-projects <p1> # Allow all projects with label name=top-secret to have their own isolated project network oc adm pod-network isolate-projects --selector='name=top-secret'", "Allow project p2 to use project p1 network oc adm pod-network join-projects --to=<p1> <p2> # Allow all projects with label name=top-secret to use project p1 network oc adm pod-network join-projects --to=<p1> --selector='name=top-secret'", "Allow project p1 to access all pods in the cluster and vice versa oc adm pod-network make-projects-global <p1> # Allow all projects with label name=share to access all pods in the cluster and vice versa oc adm pod-network make-projects-global --selector='name=share'", "Add the 'cluster-admin' cluster role to the 'cluster-admins' group oc adm policy add-cluster-role-to-group cluster-admin cluster-admins", "Add the 'system:build-strategy-docker' cluster role to the 'devuser' user oc adm policy add-cluster-role-to-user system:build-strategy-docker devuser", "Add the 'view' role to user1 for the current project oc adm policy add-role-to-user view user1 # Add the 'edit' role to serviceaccount1 for the current project oc adm policy add-role-to-user edit -z serviceaccount1", "Add the 'restricted' security context constraint to group1 and group2 oc adm policy add-scc-to-group restricted group1 group2", "Add the 'restricted' security context constraint to user1 and user2 oc adm policy add-scc-to-user restricted user1 user2 # Add the 'privileged' security context constraint to serviceaccount1 in the current namespace oc adm policy add-scc-to-user privileged -z serviceaccount1", "Remove the 'cluster-admin' cluster role from the 'cluster-admins' 
group oc adm policy remove-cluster-role-from-group cluster-admin cluster-admins", "Remove the 'system:build-strategy-docker' cluster role from the 'devuser' user oc adm policy remove-cluster-role-from-user system:build-strategy-docker devuser", "Check whether service accounts sa1 and sa2 can admit a pod with a template pod spec specified in my_resource.yaml # Service Account specified in myresource.yaml file is ignored oc adm policy scc-review -z sa1,sa2 -f my_resource.yaml # Check whether service accounts system:serviceaccount:bob:default can admit a pod with a template pod spec specified in my_resource.yaml oc adm policy scc-review -z system:serviceaccount:bob:default -f my_resource.yaml # Check whether the service account specified in my_resource_with_sa.yaml can admit the pod oc adm policy scc-review -f my_resource_with_sa.yaml # Check whether the default service account can admit the pod; default is taken since no service account is defined in myresource_with_no_sa.yaml oc adm policy scc-review -f myresource_with_no_sa.yaml", "Check whether user bob can create a pod specified in myresource.yaml oc adm policy scc-subject-review -u bob -f myresource.yaml # Check whether user bob who belongs to projectAdmin group can create a pod specified in myresource.yaml oc adm policy scc-subject-review -u bob -g projectAdmin -f myresource.yaml # Check whether a service account specified in the pod template spec in myresourcewithsa.yaml can create the pod oc adm policy scc-subject-review -f myresourcewithsa.yaml", "Dry run deleting older completed and failed builds and also including # all builds whose associated build config no longer exists oc adm prune builds --orphans # To actually perform the prune operation, the confirm flag must be appended oc adm prune builds --orphans --confirm", "Dry run deleting all but the last complete deployment for every deployment config oc adm prune deployments --keep-complete=1 # To actually perform the prune operation, the confirm flag must be appended oc adm prune deployments --keep-complete=1 --confirm", "Prune all orphaned groups oc adm prune groups --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups except the ones from the denylist file oc adm prune groups --blacklist=/path/to/denylist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in an allowlist file oc adm prune groups --whitelist=/path/to/allowlist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in a list oc adm prune groups groups/group_name groups/other_name --sync-config=/path/to/ldap-sync-config.yaml --confirm", "See what the prune command would delete if only images and their referrers were more than an hour old # and obsoleted by 3 newer revisions under the same tag were considered oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m # To actually perform the prune operation, the confirm flag must be appended oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m --confirm # See what the prune command would delete if we are interested in removing images # exceeding currently set limit ranges ('openshift.io/Image') oc adm prune images --prune-over-size-limit # To actually perform the prune operation, the confirm flag must be appended oc adm prune images --prune-over-size-limit --confirm # Force the insecure HTTP protocol with the particular registry host name oc adm prune images 
--registry-url=http://registry.example.org --confirm # Force a secure connection with a custom certificate authority to the particular registry host name oc adm prune images --registry-url=registry.example.org --certificate-authority=/path/to/custom/ca.crt --confirm", "See what the prune command would delete if run with no options oc adm prune renderedmachineconfigs # To actually perform the prune operation, the confirm flag must be appended oc adm prune renderedmachineconfigs --confirm # See what the prune command would delete if run on the worker MachineConfigPool oc adm prune renderedmachineconfigs --pool-name=worker # Prunes 10 oldest rendered MachineConfigs in the cluster oc adm prune renderedmachineconfigs --count=10 --confirm # Prunes 10 oldest rendered MachineConfigs in the cluster for the worker MachineConfigPool oc adm prune renderedmachineconfigs --count=10 --pool-name=worker --confirm", "List all rendered MachineConfigs for the worker MachineConfigPool in the cluster oc adm prune renderedmachineconfigs list --pool-name=worker # List all rendered MachineConfigs in use by the cluster's MachineConfigPools oc adm prune renderedmachineconfigs list --in-use", "Reboot all MachineConfigPools oc adm reboot-machine-config-pool mcp/worker mcp/master # Reboot all MachineConfigPools that inherit from worker. This include all custom MachineConfigPools and infra. oc adm reboot-machine-config-pool mcp/worker # Reboot masters oc adm reboot-machine-config-pool mcp/master", "Use git to check out the source code for the current cluster release to DIR oc adm release extract --git=DIR # Extract cloud credential requests for AWS oc adm release extract --credentials-requests --cloud=aws # Use git to check out the source code for the current cluster release to DIR from linux/s390x image # Note: Wildcard filter is not supported; pass a single os/arch to extract oc adm release extract --git=DIR quay.io/openshift-release-dev/ocp-release:4.11.2 --filter-by-os=linux/s390x", "Show information about the cluster's current release oc adm release info # Show the source code that comprises a release oc adm release info 4.11.2 --commit-urls # Show the source code difference between two releases oc adm release info 4.11.0 4.11.2 --commits # Show where the images referenced by the release are located oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.2 --pullspecs # Show information about linux/s390x image # Note: Wildcard filter is not supported; pass a single os/arch to extract oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.2 --filter-by-os=linux/s390x", "Perform a dry run showing what would be mirrored, including the mirror objects oc adm release mirror 4.11.0 --to myregistry.local/openshift/release --release-image-signature-to-dir /tmp/releases --dry-run # Mirror a release into the current directory oc adm release mirror 4.11.0 --to file://openshift/release --release-image-signature-to-dir /tmp/releases # Mirror a release to another directory in the default location oc adm release mirror 4.11.0 --to-dir /tmp/releases # Upload a release from the current directory to another server oc adm release mirror --from file://openshift/release --to myregistry.com/openshift/release --release-image-signature-to-dir /tmp/releases # Mirror the 4.11.0 release to repository registry.example.com and apply signatures to connected cluster oc adm release mirror --from=quay.io/openshift-release-dev/ocp-release:4.11.0-x86_64 --to=registry.example.com/your/repository --apply-release-image-signature", 
"Create a release from the latest origin images and push to a DockerHub repository oc adm release new --from-image-stream=4.11 -n origin --to-image docker.io/mycompany/myrepo:latest # Create a new release with updated metadata from a previous release oc adm release new --from-release registry.ci.openshift.org/origin/release:v4.11 --name 4.11.1 --previous 4.11.0 --metadata ... --to-image docker.io/mycompany/myrepo:latest # Create a new release and override a single image oc adm release new --from-release registry.ci.openshift.org/origin/release:v4.11 cli=docker.io/mycompany/cli:latest --to-image docker.io/mycompany/myrepo:latest # Run a verification pass to ensure the release can be reproduced oc adm release new --from-release registry.ci.openshift.org/origin/release:v4.11", "Restart all the nodes, 10% at a time oc adm restart-kubelet nodes --all --directive=RemoveKubeletKubeconfig # Restart all the nodes, 20 nodes at a time oc adm restart-kubelet nodes --all --parallelism=20 --directive=RemoveKubeletKubeconfig # Restart all the nodes, 15% at a time oc adm restart-kubelet nodes --all --parallelism=15% --directive=RemoveKubeletKubeconfig # Restart all the masters at the same time oc adm restart-kubelet nodes -l node-role.kubernetes.io/master --parallelism=100% --directive=RemoveKubeletKubeconfig", "Update node 'foo' with a taint with key 'dedicated' and value 'special-user' and effect 'NoSchedule' # If a taint with that key and effect already exists, its value is replaced as specified oc adm taint nodes foo dedicated=special-user:NoSchedule # Remove from node 'foo' the taint with key 'dedicated' and effect 'NoSchedule' if one exists oc adm taint nodes foo dedicated:NoSchedule- # Remove from node 'foo' all the taints with key 'dedicated' oc adm taint nodes foo dedicated- # Add a taint with key 'dedicated' on nodes having label myLabel=X oc adm taint node -l myLabel=X dedicated=foo:PreferNoSchedule # Add to node 'foo' a taint with key 'bar' and no value oc adm taint nodes foo bar:NoSchedule", "Show usage statistics for images oc adm top images", "Show usage statistics for image streams oc adm top imagestreams", "Show metrics for all nodes oc adm top node # Show metrics for a given node oc adm top node NODE_NAME", "Show metrics for all pods in the default namespace oc adm top pod # Show metrics for all pods in the given namespace oc adm top pod --namespace=NAMESPACE # Show metrics for a given pod and its containers oc adm top pod POD_NAME --containers # Show metrics for the pods defined by label name=myLabel oc adm top pod -l name=myLabel", "Mark node \"foo\" as schedulable oc adm uncordon foo", "View the update status and available cluster updates oc adm upgrade # Update to the latest version oc adm upgrade --to-latest=true", "Verify the image signature and identity using the local GPG keychain oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 --expected-identity=registry.local:5000/foo/bar:v1 # Verify the image signature and identity using the local GPG keychain and save the status oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 --expected-identity=registry.local:5000/foo/bar:v1 --save # Verify the image signature and identity via exposed registry route oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 --expected-identity=registry.local:5000/foo/bar:v1 --registry-url=docker-registry.foo.com # Remove all signature verifications from the 
image oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 --remove-all", "Wait for all nodes to complete a requested reboot from 'oc adm reboot-machine-config-pool mcp/worker mcp/master' oc adm wait-for-node-reboot nodes --all # Wait for masters to complete a requested reboot from 'oc adm reboot-machine-config-pool mcp/master' oc adm wait-for-node-reboot nodes -l node-role.kubernetes.io/master # Wait for masters to complete a specific reboot oc adm wait-for-node-reboot nodes -l node-role.kubernetes.io/master --reboot-number=4", "Wait for all cluster operators to become stable oc adm wait-for-stable-cluster # Consider operators to be stable if they report as such for 5 minutes straight oc adm wait-for-stable-cluster --minimum-stable-period 5m", "tar xvzf <file>", "echo USDPATH", "subscription-manager register", "subscription-manager refresh", "subscription-manager list --available --matches '*pipelines*'", "subscription-manager attach --pool=<pool_id>", "subscription-manager repos --enable=\"pipelines-1.17-for-rhel-8-x86_64-rpms\"", "subscription-manager repos --enable=\"pipelines-1.17-for-rhel-8-s390x-rpms\"", "subscription-manager repos --enable=\"pipelines-1.17-for-rhel-8-ppc64le-rpms\"", "subscription-manager repos --enable=\"pipelines-1.17-for-rhel-8-aarch64-rpms\"", "yum install openshift-pipelines-client", "tkn version", "C:\\> path", "echo USDPATH", "tkn completion bash > tkn_bash_completion", "sudo cp tkn_bash_completion /etc/bash_completion.d/", "tkn", "tkn completion bash", "tkn version", "tkn pipeline --help", "tkn pipeline delete mypipeline -n myspace", "tkn pipeline describe mypipeline", "tkn pipeline list", "tkn pipeline logs -f mypipeline", "tkn pipeline start mypipeline", "tkn pipelinerun -h", "tkn pipelinerun cancel mypipelinerun -n myspace", "tkn pipelinerun delete mypipelinerun1 mypipelinerun2 -n myspace", "tkn pipelinerun delete -n myspace --keep 5 1", "tkn pipelinerun delete --all", "tkn pipelinerun describe mypipelinerun -n myspace", "tkn pipelinerun list -n myspace", "tkn pipelinerun logs mypipelinerun -a -n myspace", "tkn task -h", "tkn task delete mytask1 mytask2 -n myspace", "tkn task describe mytask -n myspace", "tkn task list -n myspace", "tkn task logs mytask mytaskrun -n myspace", "tkn task start mytask -s <ServiceAccountName> -n myspace", "tkn taskrun -h", "tkn taskrun cancel mytaskrun -n myspace", "tkn taskrun delete mytaskrun1 mytaskrun2 -n myspace", "tkn taskrun delete -n myspace --keep 5 1", "tkn taskrun describe mytaskrun -n myspace", "tkn taskrun list -n myspace", "tkn taskrun logs -f mytaskrun -n myspace", "tkn condition --help", "tkn condition delete mycondition1 -n myspace", "tkn condition describe mycondition1 -n myspace", "tkn condition list -n myspace", "tkn resource -h", "tkn resource create -n myspace", "tkn resource delete myresource -n myspace", "tkn resource describe myresource -n myspace", "tkn resource list -n myspace", "tkn clustertask --help", "tkn clustertask delete mytask1 mytask2", "tkn clustertask describe mytask1", "tkn clustertask list", "tkn clustertask start mytask", "tkn eventlistener -h", "tkn eventlistener delete mylistener1 mylistener2 -n myspace", "tkn eventlistener describe mylistener -n myspace", "tkn eventlistener list -n myspace", "tkn eventlistener logs mylistener -n myspace", "tkn triggerbinding -h", "tkn triggerbinding delete mybinding1 mybinding2 -n myspace", "tkn triggerbinding describe mybinding -n myspace", "tkn triggerbinding list -n myspace", "tkn triggertemplate 
-h", "tkn triggertemplate delete mytemplate1 mytemplate2 -n `myspace`", "tkn triggertemplate describe mytemplate -n `myspace`", "tkn triggertemplate list -n myspace", "tkn clustertriggerbinding -h", "tkn clustertriggerbinding delete myclusterbinding1 myclusterbinding2", "tkn clustertriggerbinding describe myclusterbinding", "tkn clustertriggerbinding list", "tkn hub -h", "tkn hub --api-server https://api.hub.tekton.dev", "tkn hub downgrade task mytask --to version -n mynamespace", "tkn hub get [pipeline | task] myresource --from tekton --version version", "tkn hub info task mytask --from tekton --version version", "tkn hub install task mytask --from tekton --version version -n mynamespace", "tkn hub reinstall task mytask --from tekton --version version -n mynamespace", "tkn hub search --tags cli", "tkn hub upgrade task mytask --to version -n mynamespace", "tar xvf <file>", "echo USDPATH", "sudo mv ./opm /usr/local/bin/", "C:\\> path", "C:\\> move opm.exe <directory>", "opm version", "opm <command> [<subcommand>] [<argument>] [<flags>]", "opm generate <subcommand> [<flags>]", "opm generate dockerfile <dcRootDir> [<flags>]", "opm index <subcommand> [<flags>]", "opm index add [<flags>]", "opm index prune [<flags>]", "opm index prune-stranded [<flags>]", "opm index rm [<flags>]", "opm init <package_name> [<flags>]", "opm migrate <index_ref> <output_dir> [<flags>]", "opm render <index_image | bundle_image | sqlite_file> [<flags>]", "opm serve <source_path> [<flags>]", "opm validate <directory> [<flags>]", "tar xvf operator-sdk-v1.38.0-ocp-linux-x86_64.tar.gz", "chmod +x operator-sdk", "echo USDPATH", "sudo mv ./operator-sdk /usr/local/bin/operator-sdk", "operator-sdk version", "operator-sdk version: \"v1.38.0-ocp\",", "tar xvf operator-sdk-v1.38.0-ocp-darwin-x86_64.tar.gz", "chmod +x operator-sdk", "echo USDPATH", "sudo mv ./operator-sdk /usr/local/bin/operator-sdk", "operator-sdk version", "operator-sdk version: \"v1.38.0-ocp\",", "operator-sdk <command> [<subcommand>] [<argument>] [<flags>]", "operator-sdk completion bash", "bash completion for operator-sdk -*- shell-script -*- ex: ts=4 sw=4 et filetype=sh" ]
https://docs.redhat.com/en/documentation/openshift_dedicated/4/html-single/cli_tools/index
Chapter 5. OpenShift Data Foundation upgrade overview
Chapter 5. OpenShift Data Foundation upgrade overview As an operator bundle managed by the Operator Lifecycle Manager (OLM), OpenShift Data Foundation leverages its operators to perform the high-level tasks of installing and upgrading the product through ClusterServiceVersion (CSV) CRs. 5.1. Upgrade Workflows OpenShift Data Foundation recognizes two types of upgrades: Z-stream release upgrades and Minor Version release upgrades. While the user interface workflows for these two upgrade paths are not quite the same, the resulting behaviors are fairly similar. The distinctions are as follows: For Z-stream releases, OpenShift Data Foundation publishes a new bundle in the redhat-operators CatalogSource . OLM detects the new bundle and creates an InstallPlan for the new CSV to replace the existing CSV. The Subscription approval strategy, whether Automatic or Manual, determines whether OLM proceeds with reconciliation or waits for administrator approval. For Minor Version releases, OpenShift Data Foundation also publishes a new bundle in the redhat-operators CatalogSource . The difference is that this bundle is part of a new channel, and channel upgrades are not automatic: the administrator must explicitly select the new release channel. Once the new channel is selected, OLM detects the change and creates an InstallPlan for the new CSV to replace the existing CSV. Because the channel switch itself is a manual operation, OLM starts the reconciliation automatically rather than waiting for a separate approval. From this point onwards, the upgrade processes are identical. 5.2. ClusterServiceVersion Reconciliation When OLM detects an approved InstallPlan , it begins reconciling the CSVs. Broadly, it does this by updating the operator resources based on the new spec, verifying that the new CSV installs correctly, and then deleting the old CSV. The upgrade process pushes updates to the operator Deployments, which triggers a restart of the operator Pods using the images specified in the new CSV. Note While it is possible to make changes to a given CSV and have those changes propagate to the relevant resources, all such custom changes are lost when upgrading to a new CSV, because the new CSV is created from its unaltered spec. 5.3. Operator Reconciliation At this point, the reconciliation of the OpenShift Data Foundation operands proceeds as defined in the OpenShift Data Foundation installation overview . The operators ensure that all relevant resources exist in their expected configurations as specified in the user-facing resources (for example, StorageCluster ).
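The channel switch and manual InstallPlan approval described above are standard OLM operations and can also be performed from the command line. The following sketch is illustrative only: the openshift-storage namespace, the odf-operator Subscription name, and the stable-4.14 channel are assumptions rather than values taken from this chapter, so substitute the names used in your cluster.
# Show the current Subscription and the channel it tracks (assumed namespace)
oc get subscription -n openshift-storage
# Switch the Subscription to the new release channel (Minor Version upgrade; assumed names)
oc patch subscription odf-operator -n openshift-storage --type merge -p '{"spec":{"channel":"stable-4.14"}}'
# Inspect the InstallPlan that OLM creates for the new CSV
oc get installplan -n openshift-storage
# Approve the InstallPlan manually if the Subscription approval strategy is Manual
oc patch installplan <installplan-name> -n openshift-storage --type merge -p '{"spec":{"approved":true}}'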
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/red_hat_openshift_data_foundation_architecture/openshift_data_foundation_upgrade_overview
Monitoring APIs
Monitoring APIs OpenShift Container Platform 4.12 Reference guide for monitoring APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html-single/monitoring_apis/index
Chapter 17. KubeAPIServer [operator.openshift.io/v1]
Chapter 17. KubeAPIServer [operator.openshift.io/v1] Description KubeAPIServer provides information to configure an operator to manage kube-apiserver. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 17.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec is the specification of the desired behavior of the Kubernetes API Server status object status is the most recently observed status of the Kubernetes API Server 17.1.1. .spec Description spec is the specification of the desired behavior of the Kubernetes API Server Type object Property Type Description failedRevisionLimit integer failedRevisionLimit is the number of failed static pod installer revisions to keep on disk and in the api -1 = unlimited, 0 or unset = 5 (default) forceRedeploymentReason string forceRedeploymentReason can be used to force the redeployment of the operand by providing a unique string. This provides a mechanism to kick a previously failed deployment and provide a reason why you think it will work this time instead of failing again on the same config. logLevel string logLevel is an intent based logging for an overall component. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for their operands. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". managementState string managementState indicates whether and how the operator should manage the component observedConfig `` observedConfig holds a sparse config that controller has observed from the cluster state. It exists in spec because it is an input to the level for the operator operatorLogLevel string operatorLogLevel is an intent based logging for the operator itself. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for themselves. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". succeededRevisionLimit integer succeededRevisionLimit is the number of successful static pod installer revisions to keep on disk and in the api -1 = unlimited, 0 or unset = 5 (default) unsupportedConfigOverrides `` unsupportedConfigOverrides holds a sparse config that will override any previously set options. It only needs to be the fields to override it will end up overlaying in the following order: 1. hardcoded defaults 2. observedConfig 3. unsupportedConfigOverrides 17.1.2. 
.status Description status is the most recently observed status of the Kubernetes API Server Type object Property Type Description conditions array conditions is a list of conditions and their status conditions[] object OperatorCondition is just the standard condition fields. generations array generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. generations[] object GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. latestAvailableRevision integer latestAvailableRevision is the deploymentID of the most recent deployment latestAvailableRevisionReason string latestAvailableRevisionReason describe the detailed reason for the most recent deployment nodeStatuses array nodeStatuses track the deployment values and errors across individual nodes nodeStatuses[] object NodeStatus provides information about the current state of a particular node managed by this operator. observedGeneration integer observedGeneration is the last generation change you've dealt with readyReplicas integer readyReplicas indicates how many replicas are ready and at the desired state serviceAccountIssuers array serviceAccountIssuers tracks history of used service account issuers. The item without expiration time represents the currently used service account issuer. The other items represents service account issuers that were used previously and are still being trusted. The default expiration for the items is set by the platform and it defaults to 24h. see: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#service-account-token-volume-projection serviceAccountIssuers[] object version string version is the level this availability applies to 17.1.3. .status.conditions Description conditions is a list of conditions and their status Type array 17.1.4. .status.conditions[] Description OperatorCondition is just the standard condition fields. Type object Property Type Description lastTransitionTime string message string reason string status string type string 17.1.5. .status.generations Description generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. Type array 17.1.6. .status.generations[] Description GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. Type object Property Type Description group string group is the group of the thing you're tracking hash string hash is an optional field set for resources without generation that are content sensitive like secrets and configmaps lastGeneration integer lastGeneration is the last generation of the workload controller involved name string name is the name of the thing you're tracking namespace string namespace is where the thing you're tracking is resource string resource is the resource type of the thing you're tracking 17.1.7. .status.nodeStatuses Description nodeStatuses track the deployment values and errors across individual nodes Type array 17.1.8. .status.nodeStatuses[] Description NodeStatus provides information about the current state of a particular node managed by this operator. Type object Property Type Description currentRevision integer currentRevision is the generation of the most recently successful deployment lastFailedCount integer lastFailedCount is how often the installer pod of the last failed revision failed. 
lastFailedReason string lastFailedReason is a machine readable failure reason string. lastFailedRevision integer lastFailedRevision is the generation of the deployment we tried and failed to deploy. lastFailedRevisionErrors array (string) lastFailedRevisionErrors is a list of human readable errors during the failed deployment referenced in lastFailedRevision. lastFailedTime string lastFailedTime is the time the last failed revision failed the last time. lastFallbackCount integer lastFallbackCount is how often a fallback to a revision happened. nodeName string nodeName is the name of the node targetRevision integer targetRevision is the generation of the deployment we're trying to apply 17.1.9. .status.serviceAccountIssuers Description serviceAccountIssuers tracks history of used service account issuers. The item without expiration time represents the currently used service account issuer. The other items represents service account issuers that were used previously and are still being trusted. The default expiration for the items is set by the platform and it defaults to 24h. see: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#service-account-token-volume-projection Type array 17.1.10. .status.serviceAccountIssuers[] Description Type object Property Type Description expirationTime string expirationTime is the time after which this service account issuer will be pruned and removed from the trusted list of service account issuers. name string name is the name of the service account issuer --- 17.2. API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1/kubeapiservers DELETE : delete collection of KubeAPIServer GET : list objects of kind KubeAPIServer POST : create a KubeAPIServer /apis/operator.openshift.io/v1/kubeapiservers/{name} DELETE : delete a KubeAPIServer GET : read the specified KubeAPIServer PATCH : partially update the specified KubeAPIServer PUT : replace the specified KubeAPIServer /apis/operator.openshift.io/v1/kubeapiservers/{name}/status GET : read status of the specified KubeAPIServer PATCH : partially update status of the specified KubeAPIServer PUT : replace status of the specified KubeAPIServer 17.2.1. /apis/operator.openshift.io/v1/kubeapiservers Table 17.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of KubeAPIServer Table 17.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. 
If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 17.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind KubeAPIServer Table 17.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 17.5. HTTP responses HTTP code Reponse body 200 - OK KubeAPIServerList schema 401 - Unauthorized Empty HTTP method POST Description create a KubeAPIServer Table 17.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 17.7. Body parameters Parameter Type Description body KubeAPIServer schema Table 17.8. HTTP responses HTTP code Reponse body 200 - OK KubeAPIServer schema 201 - Created KubeAPIServer schema 202 - Accepted KubeAPIServer schema 401 - Unauthorized Empty 17.2.2. /apis/operator.openshift.io/v1/kubeapiservers/{name} Table 17.9. Global path parameters Parameter Type Description name string name of the KubeAPIServer Table 17.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a KubeAPIServer Table 17.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. 
orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 17.12. Body parameters Parameter Type Description body DeleteOptions schema Table 17.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified KubeAPIServer Table 17.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 17.15. HTTP responses HTTP code Reponse body 200 - OK KubeAPIServer schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified KubeAPIServer Table 17.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 17.17. Body parameters Parameter Type Description body Patch schema Table 17.18. 
HTTP responses HTTP code Reponse body 200 - OK KubeAPIServer schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified KubeAPIServer Table 17.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 17.20. Body parameters Parameter Type Description body KubeAPIServer schema Table 17.21. HTTP responses HTTP code Reponse body 200 - OK KubeAPIServer schema 201 - Created KubeAPIServer schema 401 - Unauthorized Empty 17.2.3. /apis/operator.openshift.io/v1/kubeapiservers/{name}/status Table 17.22. Global path parameters Parameter Type Description name string name of the KubeAPIServer Table 17.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified KubeAPIServer Table 17.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 17.25. HTTP responses HTTP code Reponse body 200 - OK KubeAPIServer schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified KubeAPIServer Table 17.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . 
fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 17.27. Body parameters Parameter Type Description body Patch schema Table 17.28. HTTP responses HTTP code Reponse body 200 - OK KubeAPIServer schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified KubeAPIServer Table 17.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 17.30. Body parameters Parameter Type Description body KubeAPIServer schema Table 17.31. HTTP responses HTTP code Reponse body 200 - OK KubeAPIServer schema 201 - Created KubeAPIServer schema 401 - Unauthorized Empty
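As a brief illustration of how the spec fields above are typically exercised, the operator resource can be read and patched with oc. This is a minimal sketch: the resource is cluster scoped, and the instance name cluster used below is the conventional canonical name, assumed here rather than stated in this chapter.
# Read the current spec and status of the KubeAPIServer operator resource (assumed instance name "cluster")
oc get kubeapiserver cluster -o yaml
# Raise operand logging verbosity through .spec.logLevel (valid values: Normal, Debug, Trace, TraceAll)
oc patch kubeapiserver cluster --type merge -p '{"spec":{"logLevel":"Debug"}}'
# Force a redeployment of the operand by setting .spec.forceRedeploymentReason to any unique string
oc patch kubeapiserver cluster --type merge -p '{"spec":{"forceRedeploymentReason":"example-unique-reason-1"}}'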
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/operator_apis/kubeapiserver-operator-openshift-io-v1
3.15. Starting a Virtual Machine
3.15. Starting a Virtual Machine This Ruby example starts a virtual machine. # Get the reference to the "vms" service: vms_service = connection.system_service.vms_service # Find the virtual machine: vm = vms_service.list(search: 'name=myvm')[0] # Locate the service that manages the virtual machine, as that is where # the action methods are defined: vm_service = vms_service.vm_service(vm.id) # Call the "start" method of the service to start it: vm_service.start # Wait until the virtual machine status is UP: loop do sleep(5) vm = vm_service.get break if vm.status == OvirtSDK4::VmStatus::UP end For more information, see http://www.rubydoc.info/gems/ovirt-engine-sdk/OvirtSDK4/VmService:start .
[ "Get the reference to the \"vms\" service: vms_service = connection.system_service.vms_service Find the virtual machine: vm = vms_service.list(search: 'name=myvm')[0] Locate the service that manages the virtual machine, as that is where the action methods are defined: vm_service = vms_service.vm_service(vm.id) Call the \"start\" method of the service to start it: vm_service.start Wait until the virtual machine status is UP: loop do sleep(5) vm = vm_service.get break if vm.status == OvirtSDK4::VmStatus::UP end" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/ruby_sdk_guide/starting_a_virtual_machine
Chapter 51. Data objects
Chapter 51. Data objects Data objects are the building blocks for the rule assets that you create. Data objects are custom data types implemented as Java objects in specified packages of your project. For example, you might create a Person object with data fields Name , Address , and DateOfBirth to specify personal details for loan application rules. These custom data types determine what data your assets and your decision services are based on. 51.1. Creating data objects The following procedure is a generic overview of creating data objects. It is not specific to a particular business asset. Procedure In Business Central, go to Menu → Design → Projects and click the project name. Click Add Asset → Data Object . Enter a unique Data Object name and select the Package where you want the data object to be available for other rule assets. Data objects with the same name cannot exist in the same package. In the specified DRL file, you can import a data object from any package. Importing data objects from other packages You can import an existing data object from another package directly into asset designers such as the guided rules or guided decision table designers. Select the relevant rule asset within the project and, in the asset designer, go to Data Objects → New item to select the object to be imported. To make your data object persistable, select the Persistable checkbox. Persistable data objects can be stored in a database according to the JPA specification. The default JPA implementation is Hibernate. Click Ok . In the data object designer, click add field to add a field to the object with the attributes Id , Label , and Type . Required attributes are marked with an asterisk (*). Id: Enter the unique ID of the field. Label: (Optional) Enter a label for the field. Type: Enter the data type of the field. List: (Optional) Select this check box to enable the field to hold multiple items for the specified type. Figure 51.1. Add data fields to a data object Click Create to add the new field, or click Create and continue to add the new field and continue adding other fields. Note To edit a field, select the field row and use the general properties on the right side of the screen.
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_decision_services_in_red_hat_process_automation_manager/data-objects-con_guided-rules
4.148. libtiff
4.148. libtiff 4.148.1. RHSA-2012:0468 - Important: libtiff security update Updated libtiff packages that fix two security issues are now available for Red Hat Enterprise Linux 5 and 6. The Red Hat Security Response Team has rated this update as having important security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available from the CVE link(s) associated with each description below. The libtiff packages contain a library of functions for manipulating Tagged Image File Format (TIFF) files. Security Fix CVE-2012-1173 Two integer overflow flaws, leading to heap-based buffer overflows, were found in the way libtiff attempted to allocate space for a tile in a TIFF image file. An attacker could use these flaws to create a specially-crafted TIFF file that, when opened, would cause an application linked against libtiff to crash or, possibly, execute arbitrary code. All libtiff users should upgrade to these updated packages, which contain a backported patch to resolve these issues. All running applications linked against libtiff must be restarted for this update to take effect.
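As a minimal sketch of the remediation steps (assuming a yum-based Red Hat Enterprise Linux system and that the lsof utility is available; neither is stated explicitly in the advisory text above):
# Apply the updated library package (run as root); also update libtiff-devel if it is installed
yum update libtiff
# Identify running processes that still have the old libtiff shared library mapped;
# restart these applications so the update takes effect
lsof | grep libtiff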
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/libtiff
Chapter 27. Storage [operator.openshift.io/v1]
Chapter 27. Storage [operator.openshift.io/v1] Description Storage provides a means to configure an operator to manage the cluster storage operator. cluster is the canonical name. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 27.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object status holds observed values from the cluster. They may not be overridden. 27.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description logLevel string logLevel is an intent based logging for an overall component. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for their operands. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". managementState string managementState indicates whether and how the operator should manage the component observedConfig `` observedConfig holds a sparse config that controller has observed from the cluster state. It exists in spec because it is an input to the level for the operator operatorLogLevel string operatorLogLevel is an intent based logging for the operator itself. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for themselves. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". unsupportedConfigOverrides `` unsupportedConfigOverrides overrides the final configuration that was computed by the operator. Red Hat does not support the use of this field. Misuse of this field could lead to unexpected behavior or conflict with other configuration options. Seek guidance from the Red Hat support before using this field. Use of this property blocks cluster upgrades, it must be removed before upgrading your cluster. vsphereStorageDriver string VSphereStorageDriver indicates the storage driver to use on VSphere clusters. Once this field is set to CSIWithMigrationDriver, it can not be changed. If this is empty, the platform will choose a good default, which may change over time without notice. The current default is CSIWithMigrationDriver and may not be changed. DEPRECATED: This field will be removed in a future release. 27.1.2. .status Description status holds observed values from the cluster. They may not be overridden. Type object Property Type Description conditions array conditions is a list of conditions and their status conditions[] object OperatorCondition is just the standard condition fields. 
generations array generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. generations[] object GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. observedGeneration integer observedGeneration is the last generation change you've dealt with readyReplicas integer readyReplicas indicates how many replicas are ready and at the desired state version string version is the level this availability applies to 27.1.3. .status.conditions Description conditions is a list of conditions and their status Type array 27.1.4. .status.conditions[] Description OperatorCondition is just the standard condition fields. Type object Required type Property Type Description lastTransitionTime string message string reason string status string type string 27.1.5. .status.generations Description generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. Type array 27.1.6. .status.generations[] Description GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. Type object Property Type Description group string group is the group of the thing you're tracking hash string hash is an optional field set for resources without generation that are content sensitive like secrets and configmaps lastGeneration integer lastGeneration is the last generation of the workload controller involved name string name is the name of the thing you're tracking namespace string namespace is where the thing you're tracking is resource string resource is the resource type of the thing you're tracking 27.2. API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1/storages DELETE : delete collection of Storage GET : list objects of kind Storage POST : create a Storage /apis/operator.openshift.io/v1/storages/{name} DELETE : delete a Storage GET : read the specified Storage PATCH : partially update the specified Storage PUT : replace the specified Storage /apis/operator.openshift.io/v1/storages/{name}/status GET : read status of the specified Storage PATCH : partially update status of the specified Storage PUT : replace status of the specified Storage 27.2.1. /apis/operator.openshift.io/v1/storages HTTP method DELETE Description delete collection of Storage Table 27.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Storage Table 27.2. HTTP responses HTTP code Reponse body 200 - OK StorageList schema 401 - Unauthorized Empty HTTP method POST Description create a Storage Table 27.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 27.4. Body parameters Parameter Type Description body Storage schema Table 27.5. HTTP responses HTTP code Reponse body 200 - OK Storage schema 201 - Created Storage schema 202 - Accepted Storage schema 401 - Unauthorized Empty 27.2.2. /apis/operator.openshift.io/v1/storages/{name} Table 27.6. Global path parameters Parameter Type Description name string name of the Storage HTTP method DELETE Description delete a Storage Table 27.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 27.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Storage Table 27.9. HTTP responses HTTP code Reponse body 200 - OK Storage schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Storage Table 27.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 27.11. HTTP responses HTTP code Reponse body 200 - OK Storage schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Storage Table 27.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 27.13. Body parameters Parameter Type Description body Storage schema Table 27.14. HTTP responses HTTP code Reponse body 200 - OK Storage schema 201 - Created Storage schema 401 - Unauthorized Empty 27.2.3. /apis/operator.openshift.io/v1/storages/{name}/status Table 27.15. Global path parameters Parameter Type Description name string name of the Storage HTTP method GET Description read status of the specified Storage Table 27.16. HTTP responses HTTP code Reponse body 200 - OK Storage schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Storage Table 27.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 27.18. HTTP responses HTTP code Reponse body 200 - OK Storage schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Storage Table 27.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 27.20. Body parameters Parameter Type Description body Storage schema Table 27.21. HTTP responses HTTP code Reponse body 200 - OK Storage schema 201 - Created Storage schema 401 - Unauthorized Empty
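In practice, these endpoints are usually exercised through the oc client rather than called directly. The following is a minimal sketch, assuming the canonical cluster-scoped Storage object named cluster described above and standard oc commands; Debug is one of the valid logLevel values listed in the .spec table, and the jsonpath expressions are illustrative only.

# Read the Storage resource (GET /apis/operator.openshift.io/v1/storages/cluster)
oc get storages.operator.openshift.io cluster -o yaml

# Partially update the spec (PATCH), raising the logging verbosity
oc patch storages.operator.openshift.io cluster --type=merge -p '{"spec":{"logLevel":"Debug"}}'

# Confirm the change and inspect the operator conditions reported in .status
oc get storages.operator.openshift.io cluster -o jsonpath='{.spec.logLevel}{"\n"}'
oc get storages.operator.openshift.io cluster -o jsonpath='{.status.conditions}{"\n"}'

To revert, patch spec.logLevel back to Normal with the same command.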
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/operator_apis/storage-operator-openshift-io-v1
Chapter 5. Erasure code pools overview
Chapter 5. Erasure code pools overview Ceph uses replicated pools by default, meaning that Ceph copies every object from a primary OSD node to one or more secondary OSDs. Erasure-coded pools reduce the amount of disk space required to ensure data durability, but they are computationally somewhat more expensive than replication. Ceph storage strategies involve defining data durability requirements. Data durability means the ability to sustain the loss of one or more OSDs without losing data. Ceph stores data in pools, and there are two types of pools: replicated erasure-coded Erasure coding is a method of storing an object in the Ceph storage cluster durably, where the erasure code algorithm breaks the object into data chunks ( k ) and coding chunks ( m ), and stores those chunks in different OSDs. In the event of the failure of an OSD, Ceph retrieves the remaining data ( k ) and coding ( m ) chunks from the other OSDs and the erasure code algorithm restores the object from those chunks. Note Red Hat recommends min_size for erasure-coded pools to be K+1 or more to prevent loss of writes and data. Erasure coding uses storage capacity more efficiently than replication. The n-replication approach maintains n copies of an object (3x by default in Ceph), whereas erasure coding maintains only k + m chunks. For example, 3 data and 2 coding chunks use about 1.67x ((k+m)/k = 5/3) the storage space of the original object. While erasure coding uses less storage overhead than replication, the erasure code algorithm uses more RAM and CPU than replication when it accesses or recovers objects. Erasure coding is advantageous when data storage must be durable and fault tolerant, but does not require fast read performance (for example, cold storage, historical records, and so on). For the mathematical and detailed explanation on how erasure code works in Ceph, see the Ceph Erasure Coding section in the Architecture Guide for Red Hat Ceph Storage 7. Ceph creates a default erasure code profile when initializing a cluster with k=2 and m=2 . This means that Ceph will spread the object data over four OSDs ( k+m == 4 ) and Ceph can lose two of those OSDs without losing data. To learn more about erasure code profiles, see the Erasure Code Profiles section. Important Configure only the .rgw.buckets pool as erasure-coded and all other Ceph Object Gateway pools as replicated, otherwise an attempt to create a new bucket fails with the following error: The reason for this is that erasure-coded pools do not support omap operations, and certain Ceph Object Gateway metadata pools require omap support. 5.1. Creating a sample erasure-coded pool Create an erasure-coded pool and specify the placement groups. The ceph osd pool create command creates an erasure-coded pool with the default profile, unless another profile is specified. Profiles define the redundancy of data by setting two parameters, k and m . These parameters define the number of chunks a piece of data is split into and the number of coding chunks that are created. The simplest erasure coded pool is equivalent to RAID5 and requires at least four hosts. You can create an erasure-coded pool with a 2+2 profile. Procedure Set the following configuration for an erasure-coded pool on four nodes with a 2+2 configuration. Syntax Important This is not needed for an erasure-coded pool in general. Important The async recovery cost is the number of PG log entries behind on the replica and the number of missing objects. The osd_target_pg_log_entries_per_osd is 30000 . 
Hence, an OSD with a single PG could have 30000 entries. Because osd_async_recovery_min_cost is a 64-bit integer, set its value to 1099511627776 for an erasure-coded pool with a 2+2 configuration. Note For an EC cluster with four nodes, K and M are 2 and 2. If a node fails completely, the pool cannot recover, because only three nodes remain available to hold the four chunks. Setting mon_osd_down_out_subtree_limit to host prevents the OSDs from being marked out during a host-down scenario, which stops the data from rebalancing while the cluster waits until the node is up again. For an erasure-coded pool with a 2+2 configuration, set the profile. Syntax Example Create an erasure-coded pool. Example 32 is the number of placement groups.
[ "set_req_state_err err_no=95 resorting to 500", "ceph config set mon mon_osd_down_out_subtree_limit host ceph config set osd osd_async_recovery_min_cost 1099511627776", "ceph osd erasure-code-profile set ec22 k=2 m=2 crush-failure-domain=host", "ceph osd erasure-code-profile set ec22 k=2 m=2 crush-failure-domain=host Pool : ceph osd pool create test-ec-22 erasure ec22", "ceph osd pool create ecpool 32 32 erasure pool 'ecpool' created echo ABCDEFGHI | rados --pool ecpool put NYAN - rados --pool ecpool get NYAN - ABCDEFGHI" ]
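Pulling the syntax and examples above together, the following is a minimal end-to-end sketch for a four-node cluster, using the profile and pool names from the examples (ec22 and test-ec-22). The profile verification and min_size commands at the end are standard Ceph commands that are not shown in this chapter; they apply the K+1 recommendation from the note above.

# Cluster-wide settings recommended for a 2+2 erasure-coded pool on four nodes
ceph config set mon mon_osd_down_out_subtree_limit host
ceph config set osd osd_async_recovery_min_cost 1099511627776

# Define the 2+2 profile and create the pool from it
ceph osd erasure-code-profile set ec22 k=2 m=2 crush-failure-domain=host
ceph osd pool create test-ec-22 erasure ec22

# Verify the profile and set min_size to K+1 = 3 to prevent loss of writes and data
ceph osd erasure-code-profile get ec22
ceph osd pool set test-ec-22 min_size 3
ceph osd pool get test-ec-22 min_size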
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/edge_guide/erasure-code-pools-overview_edge
Chapter 4. Installing a cluster on Oracle Private Cloud Appliance by using the Agent-based Installer
Chapter 4. Installing a cluster on Oracle Private Cloud Appliance by using the Agent-based Installer You can use the Agent-based Installer to install a cluster on Oracle(R) Private Cloud Appliance, so that you can run cluster workloads on on-premise infrastructure while still using Oracle(R) Cloud Infrastructure (OCI) services. 4.1. Installation process workflow The following workflow describes a high-level outline for the process of installing an OpenShift Container Platform cluster on Private Cloud Appliance using the Agent-based Installer: Create Private Cloud Appliance resources and services (Oracle). Prepare configuration files for the Agent-based Installer (Red Hat). Generate the agent ISO image (Red Hat). Convert the ISO image to an Oracle Cloud Infrastructure (OCI) image, upload it to an OCI Home Region Bucket, and then import the uploaded image to the Private Cloud Appliance system (Oracle). Disconnected environments: Prepare a web server that is accessible by OCI instances (Red Hat). Disconnected environments: Upload the rootfs image to the web server (Red Hat). Configure your firewall for OpenShift Container Platform (Red Hat). Create control plane nodes and configure load balancers (Oracle). Create compute nodes and configure load balancers (Oracle). Verify that your cluster runs on OCI (Oracle). 4.2. Creating Oracle Private Cloud Appliance infrastructure resources and services You must create an Private Cloud Appliance environment on your virtual machine (VM) or bare-metal shape. By creating this environment, you can install OpenShift Container Platform and deploy a cluster on an infrastructure that supports a wide range of cloud options and strong security policies. Having prior knowledge of OCI components can help you with understanding the concept of OCI resources and how you can configure them to meet your organizational needs. Important To ensure compatibility with OpenShift Container Platform, you must set A as the record type for each DNS record and name records as follows: api.<cluster_name>.<base_domain> , which targets the apiVIP parameter of the API load balancer api-int.<cluster_name>.<base_domain> , which targets the apiVIP parameter of the API load balancer *.apps.<cluster_name>.<base_domain> , which targets the ingressVIP parameter of the Ingress load balancer The api.* and api-int.* DNS records relate to control plane machines, so you must ensure that all nodes in your installed OpenShift Container Platform cluster can access these DNS records. Prerequisites You configured an OCI account to host the OpenShift Container Platform cluster. See "Access and Considerations" in OpenShift Cluster Setup with Agent Based Installer on Private Cloud Appliance (Oracle documentation). Procedure Create the required Private Cloud Appliance resources and services. For more information, see "Terraform Script Execution" in OpenShift Cluster Setup with Agent Based Installer on Private Cloud Appliance (Oracle documentation). Additional resources Learn About Oracle Cloud Basics (Oracle documentation) 4.3. Creating configuration files for installing a cluster on Private Cloud Appliance You must create the install-config.yaml and the agent-config.yaml configuration files so that you can use the Agent-based Installer to generate a bootable ISO image. The Agent-based installation comprises a bootable ISO that has the Assisted discovery agent and the Assisted Service. 
Both of these components are required to perform the cluster installation, but the latter component runs on only one of the hosts. Note You can also use the Agent-based Installer to generate or accept Zero Touch Provisioning (ZTP) custom resources. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing the method for users. You have read the "Preparing to install with the Agent-based Installer" documentation. You downloaded the Agent-Based Installer and the command-line interface (CLI) from the Red Hat Hybrid Cloud Console . If you are installing in a disconnected environment, you have prepared a mirror registry in your environment and mirrored release images to the registry. Important Check that your openshift-install binary version relates to your local image container registry and not a shared registry, such as Red Hat Quay, by running the following command: USD ./openshift-install version Example output for a shared registry binary ./openshift-install 4.18.0 built from commit ae7977b7d1ca908674a0d45c5c243c766fa4b2ca release image registry.ci.openshift.org/origin/release:4.18ocp-release@sha256:0da6316466d60a3a4535d5fed3589feb0391989982fba59d47d4c729912d6363 release architecture amd64 You have logged in to the OpenShift Container Platform with administrator privileges. Procedure Create an installation directory to store configuration files in by running the following command: USD mkdir ~/<directory_name> Configure the install-config.yaml configuration file to meet the needs of your organization and save the file in the directory you created. install-config.yaml file that sets an external platform # install-config.yaml apiVersion: v1 baseDomain: <base_domain> 1 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 network type: OVNKubernetes machineNetwork: - cidr: <ip_address_from_cidr> 2 serviceNetwork: - 172.30.0.0/16 compute: - architecture: amd64 3 hyperthreading: Enabled name: worker replicas: 0 controlPlane: architecture: amd64 4 hyperthreading: Enabled name: master replicas: 3 platform: external: platformName: oci 5 cloudControllerManager: External sshKey: <public_ssh_key> 6 pullSecret: '<pull_secret>' 7 # ... 1 The base domain of your cloud provider. 2 The IP address from the virtual cloud network (VCN) that the CIDR allocates to resources and components that operate on your network. 3 4 Depending on your infrastructure, you can select either arm64 or amd64 . 5 Set OCI as the external platform, so that OpenShift Container Platform can integrate with OCI. 6 Specify your SSH public key. 7 The pull secret that you need for authenticate purposes when downloading container images for OpenShift Container Platform components and services, such as Quay.io. See Install OpenShift Container Platform 4 from the Red Hat Hybrid Cloud Console. Create a directory on your local system named openshift . This must be a subdirectory of the installation directory. Important Do not move the install-config.yaml or agent-config.yaml configuration files to the openshift directory. Configure the Oracle custom manifest files. Go to "Prepare the OpenShift Master Images" in OpenShift Cluster Setup with Agent Based Installer on Private Cloud Appliance (Oracle documentation). Copy and paste the oci-ccm.yml , oci-csi.yml , and machineconfig-ccm.yml files into your openshift directory. 
Edit the oci-ccm.yml and oci-csi.yml files to specify the compartment Oracle(R) Cloud Identifier (OCID), VCN OCID, subnet OCID from the load balancer, the security lists OCID, and the c3-cert.pem section. Configure the agent-config.yaml configuration file to meet your organization's requirements. Sample agent-config.yaml file for an IPv4 network. apiVersion: v1beta1 metadata: name: <cluster_name> 1 namespace: <cluster_namespace> 2 rendezvousIP: <ip_address_from_CIDR> 3 bootArtifactsBaseURL: <server_URL> 4 # ... 1 The cluster name that you specified in your DNS record. 2 The namespace of your cluster on OpenShift Container Platform. 3 If you use IPv4 as the network IP address format, ensure that you set the rendezvousIP parameter to an IPv4 address that the VCN's Classless Inter-Domain Routing (CIDR) method allocates on your network. Also ensure that at least one instance from the pool of instances that you booted with the ISO matches the IP address value you set for the rendezvousIP parameter. 4 The URL of the server where you want to upload the rootfs image. This parameter is required only for disconnected environments. Generate a minimal ISO image, which excludes the rootfs image, by entering the following command in your installation directory: USD ./openshift-install agent create image --log-level debug The command also completes the following actions: Creates a subdirectory, ./<installation_directory>/auth directory: , and places kubeadmin-password and kubeconfig files in the subdirectory. Creates a rendezvousIP file based on the IP address that you specified in the agent-config.yaml configuration file. Optional: Any modifications you made to agent-config.yaml and install-config.yaml configuration files get imported to the Zero Touch Provisioning (ZTP) custom resources. Important The Agent-based Installer uses Red Hat Enterprise Linux CoreOS (RHCOS). The rootfs image, which is mentioned in a later step, is required for booting, recovering, and repairing your operating system. Disconnected environments only: Upload the rootfs image to a web server. Go to the ./<installation_directory>/boot-artifacts directory that was generated when you created the minimal ISO image. Use your preferred web server, such as any Hypertext Transfer Protocol daemon ( httpd ), to upload the rootfs image to the location specified in the bootArtifactsBaseURL parameter of the agent-config.yaml file. For example, if the bootArtifactsBaseURL parameter states http://192.168.122.20 , you would upload the generated rootfs image to this location so that the Agent-based installer can access the image from http://192.168.122.20/agent.x86_64-rootfs.img . After the Agent-based installer boots the minimal ISO for the external platform, the Agent-based Installer downloads the rootfs image from the http://192.168.122.20/agent.x86_64-rootfs.img location into the system memory. Note The Agent-based Installer also adds the value of the bootArtifactsBaseURL to the minimal ISO Image's configuration, so that when the Operator boots a cluster's node, the Agent-based Installer downloads the rootfs image into system memory. Important Consider that the full ISO image, which is in excess of 1 GB, includes the rootfs image. The image is larger than the minimal ISO Image, which is typically less than 150 MB. 
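For the disconnected-environment step above, any web server that can serve the boot-artifacts directory is sufficient. The following is a minimal sketch using httpd on a Red Hat Enterprise Linux host; the bootArtifactsBaseURL value (http://192.168.122.20) is taken from the example above, and the /var/www/html document root is an assumption based on the httpd default, so adjust both for your environment.

# Install and start a basic httpd server on the host that serves the rootfs image
sudo dnf install -y httpd
sudo systemctl enable --now httpd
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --reload

# Copy the generated rootfs image from the boot-artifacts subdirectory of your installation directory
sudo cp ./<installation_directory>/boot-artifacts/agent.x86_64-rootfs.img /var/www/html/

# Verify that the image is reachable at the URL that the Agent-based Installer will use
curl -I http://192.168.122.20/agent.x86_64-rootfs.img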
Additional resources About OpenShift Container Platform installation Selecting a cluster installation type Preparing to install with the Agent-based Installer Downloading the Agent-based Installer Creating a mirror registry with mirror registry for Red Hat OpenShift Mirroring the OpenShift Container Platform image repository Optional: Using ZTP manifests 4.4. Configuring your firewall for OpenShift Container Platform Before you install OpenShift Container Platform, you must configure your firewall to grant access to the sites that OpenShift Container Platform requires. When using a firewall, make additional configurations to the firewall so that OpenShift Container Platform can access the sites that it requires to function. There are no special configuration considerations for services running on only controller nodes compared to worker nodes. Note If your environment has a dedicated load balancer in front of your OpenShift Container Platform cluster, review the allowlists between your firewall and load balancer to prevent unwanted network restrictions to your cluster. Procedure Set the following registry URLs for your firewall's allowlist: URL Port Function registry.redhat.io 443 Provides core container images access.redhat.com 443 Hosts a signature store that a container client requires for verifying images pulled from registry.access.redhat.com . In a firewall environment, ensure that this resource is on the allowlist. registry.access.redhat.com 443 Hosts all the container images that are stored on the Red Hat Ecosystem Catalog, including core container images. quay.io 443 Provides core container images cdn.quay.io 443 Provides core container images cdn01.quay.io 443 Provides core container images cdn02.quay.io 443 Provides core container images cdn03.quay.io 443 Provides core container images cdn04.quay.io 443 Provides core container images cdn05.quay.io 443 Provides core container images cdn06.quay.io 443 Provides core container images sso.redhat.com 443 The https://console.redhat.com site uses authentication from sso.redhat.com You can use the wildcards *.quay.io and *.openshiftapps.com instead of cdn.quay.io and cdn0[1-6].quay.io in your allowlist. You can use the wildcard *.access.redhat.com to simplify the configuration and ensure that all subdomains, including registry.access.redhat.com , are allowed. When you add a site, such as quay.io , to your allowlist, do not add a wildcard entry, such as *.quay.io , to your denylist. In most cases, image registries use a content delivery network (CDN) to serve images. If a firewall blocks access, image downloads are denied when the initial download request redirects to a hostname such as cdn01.quay.io . Set your firewall's allowlist to include any site that provides resources for a language or framework that your builds require. If you do not disable Telemetry, you must grant access to the following URLs to access Red Hat Insights: URL Port Function cert-api.access.redhat.com 443 Required for Telemetry api.access.redhat.com 443 Required for Telemetry infogw.api.openshift.com 443 Required for Telemetry console.redhat.com 443 Required for Telemetry and for insights-operator If you use Alibaba Cloud, Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) to host your cluster, you must grant access to the URLs that offer the cloud provider API and DNS for that cloud: Cloud URL Port Function Alibaba *.aliyuncs.com 443 Required to access Alibaba Cloud services and resources. 
Review the Alibaba endpoints_config.go file to find the exact endpoints to allow for the regions that you use. AWS aws.amazon.com 443 Used to install and manage clusters in an AWS environment. *.amazonaws.com Alternatively, if you choose to not use a wildcard for AWS APIs, you must include the following URLs in your allowlist: 443 Required to access AWS services and resources. Review the AWS Service Endpoints in the AWS documentation to find the exact endpoints to allow for the regions that you use. ec2.amazonaws.com 443 Used to install and manage clusters in an AWS environment. events.amazonaws.com 443 Used to install and manage clusters in an AWS environment. iam.amazonaws.com 443 Used to install and manage clusters in an AWS environment. route53.amazonaws.com 443 Used to install and manage clusters in an AWS environment. *.s3.amazonaws.com 443 Used to install and manage clusters in an AWS environment. *.s3.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. *.s3.dualstack.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. sts.amazonaws.com 443 Used to install and manage clusters in an AWS environment. sts.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. tagging.us-east-1.amazonaws.com 443 Used to install and manage clusters in an AWS environment. This endpoint is always us-east-1 , regardless of the region the cluster is deployed in. ec2.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. elasticloadbalancing.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. servicequotas.<aws_region>.amazonaws.com 443 Required. Used to confirm quotas for deploying the service. tagging.<aws_region>.amazonaws.com 443 Allows the assignment of metadata about AWS resources in the form of tags. *.cloudfront.net 443 Used to provide access to CloudFront. If you use the AWS Security Token Service (STS) and the private S3 bucket, you must provide access to CloudFront. GCP *.googleapis.com 443 Required to access GCP services and resources. Review Cloud Endpoints in the GCP documentation to find the endpoints to allow for your APIs. accounts.google.com 443 Required to access your GCP account. Microsoft Azure management.azure.com 443 Required to access Microsoft Azure services and resources. Review the Microsoft Azure REST API reference in the Microsoft Azure documentation to find the endpoints to allow for your APIs. *.blob.core.windows.net 443 Required to download Ignition files. login.microsoftonline.com 443 Required to access Microsoft Azure services and resources. Review the Azure REST API reference in the Microsoft Azure documentation to find the endpoints to allow for your APIs. Allowlist the following URLs: URL Port Function *.apps.<cluster_name>.<base_domain> 443 Required to access the default cluster routes unless you set an ingress wildcard during installation. api.openshift.com 443 Required both for your cluster token and to check if updates are available for the cluster. console.redhat.com 443 Required for your cluster token. mirror.openshift.com 443 Required to access mirrored installation content and images. This site is also a source of release image signatures, although the Cluster Version Operator needs only a single functioning source. quayio-production-s3.s3.amazonaws.com 443 Required to access Quay image content in AWS. 
rhcos.mirror.openshift.com 443 Required to download Red Hat Enterprise Linux CoreOS (RHCOS) images. sso.redhat.com 443 The https://console.redhat.com site uses authentication from sso.redhat.com storage.googleapis.com/openshift-release 443 A source of release image signatures, although the Cluster Version Operator needs only a single functioning source. Operators require route access to perform health checks. Specifically, the authentication and web console Operators connect to two routes to verify that the routes work. If you are the cluster administrator and do not want to allow *.apps.<cluster_name>.<base_domain> , then allow these routes: oauth-openshift.apps.<cluster_name>.<base_domain> canary-openshift-ingress-canary.apps.<cluster_name>.<base_domain> console-openshift-console.apps.<cluster_name>.<base_domain> , or the hostname that is specified in the spec.route.hostname field of the consoles.operator/cluster object if the field is not empty. Allowlist the following URLs for optional third-party content: URL Port Function registry.connect.redhat.com 443 Required for all third-party images and certified operators. rhc4tp-prod-z8cxf-image-registry-us-east-1-evenkyleffocxqvofrk.s3.dualstack.us-east-1.amazonaws.com 443 Provides access to container images hosted on registry.connect.redhat.com oso-rhc4tp-docker-registry.s3-us-west-2.amazonaws.com 443 Required for Sonatype Nexus, F5 Big IP operators. If you use a default Red Hat Network Time Protocol (NTP) server allow the following URLs: 1.rhel.pool.ntp.org 2.rhel.pool.ntp.org 3.rhel.pool.ntp.org Note If you do not use a default Red Hat NTP server, verify the NTP server for your platform and allow it in your firewall. 4.5. Running a cluster on Private Cloud Appliance To run a cluster on Oracle(R) Private Cloud Appliance, you must first convert your generated Agent ISO image into an OCI image, upload it to an OCI Home Region Bucket, and then import the uploaded image to the Private Cloud Appliance system. Note Private Cloud Appliance supports the following OpenShift Container Platform cluster topologies: Installing an OpenShift Container Platform cluster on a single node. A highly available cluster that has a minimum of three control plane instances and two compute instances. A compact three-node cluster that has a minimum of three control plane instances. Prerequisites You generated an Agent ISO image. See the "Creating configuration files for installing a cluster on Private Cloud Appliance" section. Procedure Convert the agent ISO image to an OCI image, upload it to an OCI Home Region Bucket, and then import the uploaded image to the Private Cloud Appliance system. See "Prepare the OpenShift Master Images" in OpenShift Cluster Setup with Agent Based Installer on Private Cloud Appliance (Oracle documentation) for instructions. Create control plane instances on Private Cloud Appliance. See "Create control plane instances on PCA and Master Node LB Backend Sets" in OpenShift Cluster Setup with Agent Based Installer on Private Cloud Appliance (Oracle documentation) for instructions. Create a compute instance from the supplied base image for your cluster topology. See "Add worker nodes" in OpenShift Cluster Setup with Agent Based Installer on Private Cloud Appliance (Oracle documentation) for instructions. Important Before you create the compute instance, check that you have enough memory and disk resources for your cluster. 
Additionally, ensure that at least one compute instance has the same IP address as the address stated under rendezvousIP in the agent-config.yaml file. 4.6. Verifying that your Agent-based cluster installation runs on Private Cloud Appliance Verify that your cluster was installed and is running effectively on Private Cloud Appliance. Prerequisites You created all the required Oracle(R) Private Cloud Appliance resources and services. See the "Creating Oracle Private Cloud Appliance infrastructure resources and services" section. You created install-config.yaml and agent-config.yaml configuration files. See the "Creating configuration files for installing a cluster on Private Cloud Appliance" section. You uploaded the agent ISO image to a default Oracle Object Storage bucket, and you created a compute instance on Private Cloud Appliance. For more information, see "Running a cluster on Private Cloud Appliance". Procedure After you deploy the compute instance on a self-managed node in your OpenShift Container Platform cluster, you can monitor the cluster's status by choosing one of the following options: From the OpenShift Container Platform CLI, enter the following command: USD ./openshift-install agent wait-for install-complete --log-level debug Check the status of the rendezvous host node that runs the bootstrap node. After the host reboots, the host forms part of the cluster. Use the kubeconfig API to check the status of various OpenShift Container Platform components. For the KUBECONFIG environment variable, set the relative path of the cluster's kubeconfig configuration file: USD export KUBECONFIG=~/auth/kubeconfig Check the status of each of the cluster's self-managed nodes. CCM applies a label to each node to designate the node as running in a cluster on OCI. USD oc get nodes -A Output example NAME STATUS ROLES AGE VERSION main-0.private.agenttest.oraclevcn.com Ready control-plane, master 7m v1.27.4+6eeca63 main-1.private.agenttest.oraclevcn.com Ready control-plane, master 15m v1.27.4+d7fa83f main-2.private.agenttest.oraclevcn.com Ready control-plane, master 15m v1.27.4+d7fa83f Check the status of each of the cluster's Operators, with the CCM Operator status being a good indicator that your cluster is running. USD oc get co Truncated output example NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.18.0-0 True False False 6m18s baremetal 4.18.0-0 True False False 2m42s network 4.18.0-0 True True False 5m58s Progressing: ... ... 4.7. Additional resources Gathering log data from a failed Agent-based installation Adding worker nodes to an on-premise cluster
[ "./openshift-install version", "./openshift-install 4.18.0 built from commit ae7977b7d1ca908674a0d45c5c243c766fa4b2ca release image registry.ci.openshift.org/origin/release:4.18ocp-release@sha256:0da6316466d60a3a4535d5fed3589feb0391989982fba59d47d4c729912d6363 release architecture amd64", "mkdir ~/<directory_name>", "install-config.yaml apiVersion: v1 baseDomain: <base_domain> 1 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 network type: OVNKubernetes machineNetwork: - cidr: <ip_address_from_cidr> 2 serviceNetwork: - 172.30.0.0/16 compute: - architecture: amd64 3 hyperthreading: Enabled name: worker replicas: 0 controlPlane: architecture: amd64 4 hyperthreading: Enabled name: master replicas: 3 platform: external: platformName: oci 5 cloudControllerManager: External sshKey: <public_ssh_key> 6 pullSecret: '<pull_secret>' 7", "apiVersion: v1beta1 metadata: name: <cluster_name> 1 namespace: <cluster_namespace> 2 rendezvousIP: <ip_address_from_CIDR> 3 bootArtifactsBaseURL: <server_URL> 4", "./openshift-install agent create image --log-level debug", "./openshift-install agent wait-for install-complete --log-level debug", "export KUBECONFIG=~/auth/kubeconfig", "oc get nodes -A", "NAME STATUS ROLES AGE VERSION main-0.private.agenttest.oraclevcn.com Ready control-plane, master 7m v1.27.4+6eeca63 main-1.private.agenttest.oraclevcn.com Ready control-plane, master 15m v1.27.4+d7fa83f main-2.private.agenttest.oraclevcn.com Ready control-plane, master 15m v1.27.4+d7fa83f", "oc get co", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.18.0-0 True False False 6m18s baremetal 4.18.0-0 True False False 2m42s network 4.18.0-0 True True False 5m58s Progressing: ... ..." ]
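Pulling together the verification commands from the sections above, the following is a minimal sketch that waits for the installation to finish and then checks the cluster. It assumes that you run it from the installation directory so that the relative auth/kubeconfig path resolves; the bootstrap-complete wait and the oc get clusterversion check are standard commands that are not shown in this chapter.

# Wait for the rendezvous host to finish bootstrapping, then for the installation to complete
./openshift-install agent wait-for bootstrap-complete --log-level debug
./openshift-install agent wait-for install-complete --log-level debug

# Point oc at the generated kubeconfig and check the cluster state
export KUBECONFIG=./auth/kubeconfig
oc get clusterversion
oc get nodes -A
oc get co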
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_oci/installing-pca-agent-based-installer
Using the AMQ .NET Client
Using the AMQ .NET Client Red Hat AMQ 2020.Q4 For Use with AMQ Clients 2.8
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_the_amq_.net_client/index
24.6.4. Retrieving Performance Data over SNMP
24.6.4. Retrieving Performance Data over SNMP The Net-SNMP Agent in Red Hat Enterprise Linux provides a wide variety of performance information over the SNMP protocol. In addition, the agent can be queried for a listing of the installed RPM packages on the system, a listing of currently running processes on the system, or the network configuration of the system. This section provides an overview of OIDs related to performance tuning available over SNMP. It assumes that the net-snmp-utils package is installed and that the user is granted access to the SNMP tree as described in Section 24.6.3.2, "Configuring Authentication" . 24.6.4.1. Hardware Configuration The Host Resources MIB included with Net-SNMP presents information about the current hardware and software configuration of a host to a client utility. Table 24.3, "Available OIDs" summarizes the different OIDs available under that MIB. Table 24.3. Available OIDs OID Description HOST-RESOURCES-MIB::hrSystem Contains general system information such as uptime, number of users, and number of running processes. HOST-RESOURCES-MIB::hrStorage Contains data on memory and file system usage. HOST-RESOURCES-MIB::hrDevices Contains a listing of all processors, network devices, and file systems. HOST-RESOURCES-MIB::hrSWRun Contains a listing of all running processes. HOST-RESOURCES-MIB::hrSWRunPerf Contains memory and CPU statistics on the process table from HOST-RESOURCES-MIB::hrSWRun. HOST-RESOURCES-MIB::hrSWInstalled Contains a listing of the RPM database. There are also a number of SNMP tables available in the Host Resources MIB which can be used to retrieve a summary of the available information. The following example displays HOST-RESOURCES-MIB::hrFSTable : For more information about HOST-RESOURCES-MIB , see the /usr/share/snmp/mibs/HOST-RESOURCES-MIB.txt file. 24.6.4.2. CPU and Memory Information Most system performance data is available in the UCD SNMP MIB . The systemStats OID provides a number of counters around processor usage: In particular, the ssCpuRawUser , ssCpuRawSystem , ssCpuRawWait , and ssCpuRawIdle OIDs provide counters which are helpful when determining whether a system is spending most of its processor time in kernel space, user space, or I/O. ssRawSwapIn and ssRawSwapOut can be helpful when determining whether a system is suffering from memory exhaustion. More memory information is available under the UCD-SNMP-MIB::memory OID, which provides similar data to the free command: Load averages are also available in the UCD SNMP MIB . The SNMP table UCD-SNMP-MIB::laTable has a listing of the 1, 5, and 15 minute load averages: 24.6.4.3. File System and Disk Information The Host Resources MIB provides information on file system size and usage. Each file system (and also each memory pool) has an entry in the HOST-RESOURCES-MIB::hrStorageTable table: The OIDs under HOST-RESOURCES-MIB::hrStorageSize and HOST-RESOURCES-MIB::hrStorageUsed can be used to calculate the remaining capacity of each mounted file system. I/O data is available both in UCD-SNMP-MIB::systemStats ( ssIORawSent.0 and ssIORawRecieved.0 ) and in UCD-DISKIO-MIB::diskIOTable . The latter provides much more granular data. Under this table are OIDs for diskIONReadX and diskIONWrittenX , which provide counters for the number of bytes read from and written to the block device in question since the system boot: 24.6.4.4. Network Information The Interfaces MIB provides information on network devices. 
IF-MIB::ifTable provides an SNMP table with an entry for each interface on the system, the configuration of the interface, and various packet counters for the interface. The following example shows the first few columns of ifTable on a system with two physical network interfaces: Network traffic is available under the OIDs IF-MIB::ifOutOctets and IF-MIB::ifInOctets . The following SNMP queries will retrieve network traffic for each of the interfaces on this system:
[ "~]USD snmptable -Cb localhost HOST-RESOURCES-MIB::hrFSTable SNMP table: HOST-RESOURCES-MIB::hrFSTable Index MountPoint RemoteMountPoint Type Access Bootable StorageIndex LastFullBackupDate LastPartialBackupDate 1 \"/\" \"\" HOST-RESOURCES-TYPES::hrFSLinuxExt2 readWrite true 31 0-1-1,0:0:0.0 0-1-1,0:0:0.0 5 \"/dev/shm\" \"\" HOST-RESOURCES-TYPES::hrFSOther readWrite false 35 0-1-1,0:0:0.0 0-1-1,0:0:0.0 6 \"/boot\" \"\" HOST-RESOURCES-TYPES::hrFSLinuxExt2 readWrite false 36 0-1-1,0:0:0.0 0-1-1,0:0:0.0", "~]USD snmpwalk localhost UCD-SNMP-MIB::systemStats UCD-SNMP-MIB::ssIndex.0 = INTEGER: 1 UCD-SNMP-MIB::ssErrorName.0 = STRING: systemStats UCD-SNMP-MIB::ssSwapIn.0 = INTEGER: 0 kB UCD-SNMP-MIB::ssSwapOut.0 = INTEGER: 0 kB UCD-SNMP-MIB::ssIOSent.0 = INTEGER: 0 blocks/s UCD-SNMP-MIB::ssIOReceive.0 = INTEGER: 0 blocks/s UCD-SNMP-MIB::ssSysInterrupts.0 = INTEGER: 29 interrupts/s UCD-SNMP-MIB::ssSysContext.0 = INTEGER: 18 switches/s UCD-SNMP-MIB::ssCpuUser.0 = INTEGER: 0 UCD-SNMP-MIB::ssCpuSystem.0 = INTEGER: 0 UCD-SNMP-MIB::ssCpuIdle.0 = INTEGER: 99 UCD-SNMP-MIB::ssCpuRawUser.0 = Counter32: 2278 UCD-SNMP-MIB::ssCpuRawNice.0 = Counter32: 1395 UCD-SNMP-MIB::ssCpuRawSystem.0 = Counter32: 6826 UCD-SNMP-MIB::ssCpuRawIdle.0 = Counter32: 3383736 UCD-SNMP-MIB::ssCpuRawWait.0 = Counter32: 7629 UCD-SNMP-MIB::ssCpuRawKernel.0 = Counter32: 0 UCD-SNMP-MIB::ssCpuRawInterrupt.0 = Counter32: 434 UCD-SNMP-MIB::ssIORawSent.0 = Counter32: 266770 UCD-SNMP-MIB::ssIORawReceived.0 = Counter32: 427302 UCD-SNMP-MIB::ssRawInterrupts.0 = Counter32: 743442 UCD-SNMP-MIB::ssRawContexts.0 = Counter32: 718557 UCD-SNMP-MIB::ssCpuRawSoftIRQ.0 = Counter32: 128 UCD-SNMP-MIB::ssRawSwapIn.0 = Counter32: 0 UCD-SNMP-MIB::ssRawSwapOut.0 = Counter32: 0", "~]USD snmpwalk localhost UCD-SNMP-MIB::memory UCD-SNMP-MIB::memIndex.0 = INTEGER: 0 UCD-SNMP-MIB::memErrorName.0 = STRING: swap UCD-SNMP-MIB::memTotalSwap.0 = INTEGER: 1023992 kB UCD-SNMP-MIB::memAvailSwap.0 = INTEGER: 1023992 kB UCD-SNMP-MIB::memTotalReal.0 = INTEGER: 1021588 kB UCD-SNMP-MIB::memAvailReal.0 = INTEGER: 634260 kB UCD-SNMP-MIB::memTotalFree.0 = INTEGER: 1658252 kB UCD-SNMP-MIB::memMinimumSwap.0 = INTEGER: 16000 kB UCD-SNMP-MIB::memBuffer.0 = INTEGER: 30760 kB UCD-SNMP-MIB::memCached.0 = INTEGER: 216200 kB UCD-SNMP-MIB::memSwapError.0 = INTEGER: noError(0) UCD-SNMP-MIB::memSwapErrorMsg.0 = STRING:", "~]USD snmptable localhost UCD-SNMP-MIB::laTable SNMP table: UCD-SNMP-MIB::laTable laIndex laNames laLoad laConfig laLoadInt laLoadFloat laErrorFlag laErrMessage 1 Load-1 0.00 12.00 0 0.000000 noError 2 Load-5 0.00 12.00 0 0.000000 noError 3 Load-15 0.00 12.00 0 0.000000 noError", "~]USD snmptable -Cb localhost HOST-RESOURCES-MIB::hrStorageTable SNMP table: HOST-RESOURCES-MIB::hrStorageTable Index Type Descr AllocationUnits Size Used AllocationFailures 1 HOST-RESOURCES-TYPES::hrStorageRam Physical memory 1024 Bytes 1021588 388064 ? 3 HOST-RESOURCES-TYPES::hrStorageVirtualMemory Virtual memory 1024 Bytes 2045580 388064 ? 6 HOST-RESOURCES-TYPES::hrStorageOther Memory buffers 1024 Bytes 1021588 31048 ? 7 HOST-RESOURCES-TYPES::hrStorageOther Cached memory 1024 Bytes 216604 216604 ? 10 HOST-RESOURCES-TYPES::hrStorageVirtualMemory Swap space 1024 Bytes 1023992 0 ? 31 HOST-RESOURCES-TYPES::hrStorageFixedDisk / 4096 Bytes 2277614 250391 ? 35 HOST-RESOURCES-TYPES::hrStorageFixedDisk /dev/shm 4096 Bytes 127698 0 ? 
36 HOST-RESOURCES-TYPES::hrStorageFixedDisk /boot 1024 Bytes 198337 26694 ?", "~]USD snmptable -Cb localhost UCD-DISKIO-MIB::diskIOTable SNMP table: UCD-DISKIO-MIB::diskIOTable Index Device NRead NWritten Reads Writes LA1 LA5 LA15 NReadX NWrittenX 25 sda 216886272 139109376 16409 4894 ? ? ? 216886272 139109376 26 sda1 2455552 5120 613 2 ? ? ? 2455552 5120 27 sda2 1486848 0 332 0 ? ? ? 1486848 0 28 sda3 212321280 139104256 15312 4871 ? ? ? 212321280 139104256", "~]USD snmptable -Cb localhost IF-MIB::ifTable SNMP table: IF-MIB::ifTable Index Descr Type Mtu Speed PhysAddress AdminStatus 1 lo softwareLoopback 16436 10000000 up 2 eth0 ethernetCsmacd 1500 0 52:54:0:c7:69:58 up 3 eth1 ethernetCsmacd 1500 0 52:54:0:a7:a3:24 down", "~]USD snmpwalk localhost IF-MIB::ifDescr IF-MIB::ifDescr.1 = STRING: lo IF-MIB::ifDescr.2 = STRING: eth0 IF-MIB::ifDescr.3 = STRING: eth1 ~]USD snmpwalk localhost IF-MIB::ifOutOctets IF-MIB::ifOutOctets.1 = Counter32: 10060699 IF-MIB::ifOutOctets.2 = Counter32: 650 IF-MIB::ifOutOctets.3 = Counter32: 0 ~]USD snmpwalk localhost IF-MIB::ifInOctets IF-MIB::ifInOctets.1 = Counter32: 10060699 IF-MIB::ifInOctets.2 = Counter32: 78650 IF-MIB::ifInOctets.3 = Counter32: 0" ]
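Because most of the OIDs above are raw counters or absolute sizes, a small amount of arithmetic turns them into familiar figures. The following is a minimal sketch that samples IF-MIB::ifInOctets twice to estimate receive throughput on eth0 (index 2 in the ifTable example) and converts the hrStorageTable columns for the root file system (index 31 in the example) into free space. The -v2c -c public authentication options are an assumption; replace them with the settings you configured in Section 24.6.3.2.

# Estimate receive throughput on eth0 by sampling the ifInOctets counter twice
first=$(snmpget -v2c -c public -OqvU localhost IF-MIB::ifInOctets.2)
sleep 10
second=$(snmpget -v2c -c public -OqvU localhost IF-MIB::ifInOctets.2)
echo "eth0 receive rate: $(( (second - first) / 10 )) bytes/s"

# Remaining capacity of /: (hrStorageSize - hrStorageUsed) * hrStorageAllocationUnits
size=$(snmpget -v2c -c public -OqvU localhost HOST-RESOURCES-MIB::hrStorageSize.31)
used=$(snmpget -v2c -c public -OqvU localhost HOST-RESOURCES-MIB::hrStorageUsed.31)
unit=$(snmpget -v2c -c public -OqvU localhost HOST-RESOURCES-MIB::hrStorageAllocationUnits.31)
echo "/ free: $(( (size - used) * unit )) bytes"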
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sect-System_Monitoring_Tools-Net-SNMP-Retrieving
Chapter 3. Creating applications
Chapter 3. Creating applications 3.1. Using templates The following sections provide an overview of templates, as well as how to use and create them. 3.1.1. Understanding templates A template describes a set of objects that can be parameterized and processed to produce a list of objects for creation by OpenShift Container Platform. A template can be processed to create anything you have permission to create within a project, for example services, build configurations, and deployment configurations. A template can also define a set of labels to apply to every object defined in the template. You can create a list of objects from a template using the CLI or, if a template has been uploaded to your project or the global template library, using the web console. 3.1.2. Uploading a template If you have a JSON or YAML file that defines a template, you can upload the template to projects using the CLI. This saves the template to the project for repeated use by any user with appropriate access to that project. Instructions about writing your own templates are provided later in this topic. Procedure Upload a template using one of the following methods: Upload a template to your current project's template library, pass the JSON or YAML file with the following command: USD oc create -f <filename> Upload a template to a different project using the -n option with the name of the project: USD oc create -f <filename> -n <project> The template is now available for selection using the web console or the CLI. 3.1.3. Creating an application by using the web console You can use the web console to create an application from a template. Procedure Select Developer from the context selector at the top of the web console navigation menu. While in the desired project, click +Add Click All services in the Developer Catalog tile. Click Builder Images under Type to see the available builder images. Note Only image stream tags that have the builder tag listed in their annotations appear in this list, as demonstrated here: kind: "ImageStream" apiVersion: "v1" metadata: name: "ruby" creationTimestamp: null spec: # ... tags: - name: "2.6" annotations: description: "Build and run Ruby 2.6 applications" iconClass: "icon-ruby" tags: "builder,ruby" 1 supports: "ruby:2.6,ruby" version: "2.6" # ... 1 Including builder here ensures this image stream tag appears in the web console as a builder. Modify the settings in the new application screen to configure the objects to support your application. 3.1.4. Creating objects from templates by using the CLI You can use the CLI to process templates and use the configuration that is generated to create objects. 3.1.4.1. Adding labels Labels are used to manage and organize generated objects, such as pods. The labels specified in the template are applied to every object that is generated from the template. Procedure Add labels in the template from the command line: USD oc process -f <filename> -l name=otherLabel 3.1.4.2. Listing parameters The list of parameters that you can override are listed in the parameters section of the template. 
Procedure You can list parameters with the CLI by using the following command and specifying the file to be used: USD oc process --parameters -f <filename> Alternatively, if the template is already uploaded: USD oc process --parameters -n <project> <template_name> For example, the following shows the output when listing the parameters for one of the quick start templates in the default openshift project: USD oc process --parameters -n openshift rails-postgresql-example Example output NAME DESCRIPTION GENERATOR VALUE SOURCE_REPOSITORY_URL The URL of the repository with your application source code https://github.com/sclorg/rails-ex.git SOURCE_REPOSITORY_REF Set this to a branch name, tag or other ref of your repository if you are not using the default branch CONTEXT_DIR Set this to the relative path to your project if it is not in the root of your repository APPLICATION_DOMAIN The exposed hostname that will route to the Rails service rails-postgresql-example.openshiftapps.com GITHUB_WEBHOOK_SECRET A secret string used to configure the GitHub webhook expression [a-zA-Z0-9]{40} SECRET_KEY_BASE Your secret key for verifying the integrity of signed cookies expression [a-z0-9]{127} APPLICATION_USER The application user that is used within the sample application to authorize access on pages openshift APPLICATION_PASSWORD The application password that is used within the sample application to authorize access on pages secret DATABASE_SERVICE_NAME Database service name postgresql POSTGRESQL_USER database username expression user[A-Z0-9]{3} POSTGRESQL_PASSWORD database password expression [a-zA-Z0-9]{8} POSTGRESQL_DATABASE database name root POSTGRESQL_MAX_CONNECTIONS database max connections 10 POSTGRESQL_SHARED_BUFFERS database shared buffers 12MB The output identifies several parameters that are generated with a regular expression-like generator when the template is processed. 3.1.4.3. Generating a list of objects Using the CLI, you can process a file defining a template to return the list of objects to standard output. Procedure Process a file defining a template to return the list of objects to standard output: USD oc process -f <filename> Alternatively, if the template has already been uploaded to the current project: USD oc process <template_name> Create objects from a template by processing the template and piping the output to oc create : USD oc process -f <filename> | oc create -f - Alternatively, if the template has already been uploaded to the current project: USD oc process <template> | oc create -f - You can override any parameter values defined in the file by adding the -p option for each <name>=<value> pair you want to override. A parameter reference appears in any text field inside the template items. 
For example, in the following the POSTGRESQL_USER and POSTGRESQL_DATABASE parameters of a template are overridden to output a configuration with customized environment variables: Creating a List of objects from a template USD oc process -f my-rails-postgresql \ -p POSTGRESQL_USER=bob \ -p POSTGRESQL_DATABASE=mydatabase The JSON file can either be redirected to a file or applied directly without uploading the template by piping the processed output to the oc create command: USD oc process -f my-rails-postgresql \ -p POSTGRESQL_USER=bob \ -p POSTGRESQL_DATABASE=mydatabase \ | oc create -f - If you have large number of parameters, you can store them in a file and then pass this file to oc process : USD cat postgres.env POSTGRESQL_USER=bob POSTGRESQL_DATABASE=mydatabase USD oc process -f my-rails-postgresql --param-file=postgres.env You can also read the environment from standard input by using "-" as the argument to --param-file : USD sed s/bob/alice/ postgres.env | oc process -f my-rails-postgresql --param-file=- 3.1.5. Modifying uploaded templates You can edit a template that has already been uploaded to your project. Procedure Modify a template that has already been uploaded: USD oc edit template <template> 3.1.6. Using instant app and quick start templates OpenShift Container Platform provides a number of default instant app and quick start templates to make it easy to quickly get started creating a new application for different languages. Templates are provided for Rails (Ruby), Django (Python), Node.js, CakePHP (PHP), and Dancer (Perl). Your cluster administrator must create these templates in the default, global openshift project so you have access to them. By default, the templates build using a public source repository on GitHub that contains the necessary application code. Procedure You can list the available default instant app and quick start templates with: USD oc get templates -n openshift To modify the source and build your own version of the application: Fork the repository referenced by the template's default SOURCE_REPOSITORY_URL parameter. Override the value of the SOURCE_REPOSITORY_URL parameter when creating from the template, specifying your fork instead of the default value. By doing this, the build configuration created by the template now points to your fork of the application code, and you can modify the code and rebuild the application at will. Note Some of the instant app and quick start templates define a database deployment configuration. The configuration they define uses ephemeral storage for the database content. These templates should be used for demonstration purposes only as all database data is lost if the database pod restarts for any reason. 3.1.6.1. Quick start templates A quick start template is a basic example of an application running on OpenShift Container Platform. Quick starts come in a variety of languages and frameworks, and are defined in a template, which is constructed from a set of services, build configurations, and deployment configurations. This template references the necessary images and source repositories to build and deploy the application. To explore a quick start, create an application from a template. Your administrator must have already installed these templates in your OpenShift Container Platform cluster, in which case you can simply select it from the web console. Quick starts refer to a source repository that contains the application source code. 
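As described in the procedure above, pointing a quick start at your fork is just another parameter override. The following is a minimal sketch, assuming the rails-postgresql-example template shown earlier and a hypothetical fork at github.com/<your_user>/rails-ex.git.

# Instantiate the quick start template, pointing the build at your fork
oc new-app --template=rails-postgresql-example \
  -p SOURCE_REPOSITORY_URL=https://github.com/<your_user>/rails-ex.git

# Equivalent form with oc process, if you want to inspect the generated objects first
oc process -n openshift rails-postgresql-example \
  -p SOURCE_REPOSITORY_URL=https://github.com/<your_user>/rails-ex.git \
  | oc create -f -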
To customize the quick start, fork the repository and, when creating an application from the template, substitute the default source repository name with your forked repository. This results in builds that are performed using your source code instead of the provided example source. You can then update the code in your source repository and launch a new build to see the changes reflected in the deployed application. 3.1.6.1.1. Web framework quick start templates These quick start templates provide a basic application of the indicated framework and language: CakePHP: a PHP web framework that includes a MySQL database Dancer: a Perl web framework that includes a MySQL database Django: a Python web framework that includes a PostgreSQL database NodeJS: a NodeJS web application that includes a MongoDB database Rails: a Ruby web framework that includes a PostgreSQL database 3.1.7. Writing templates You can define new templates to make it easy to recreate all the objects of your application. The template defines the objects it creates along with some metadata to guide the creation of those objects. The following is an example of a simple template object definition (YAML): apiVersion: template.openshift.io/v1 kind: Template metadata: name: redis-template annotations: description: "Description" iconClass: "icon-redis" tags: "database,nosql" objects: - apiVersion: v1 kind: Pod metadata: name: redis-master spec: containers: - env: - name: REDIS_PASSWORD value: USD{REDIS_PASSWORD} image: dockerfile/redis name: master ports: - containerPort: 6379 protocol: TCP parameters: - description: Password used for Redis authentication from: '[A-Z0-9]{8}' generate: expression name: REDIS_PASSWORD labels: redis: master 3.1.7.1. Writing the template description The template description informs you what the template does and helps you find it when searching in the web console. Additional metadata beyond the template name is optional, but useful to have. In addition to general descriptive information, the metadata also includes a set of tags. Useful tags include the name of the language the template is related to for example, Java, PHP, Ruby, and so on. The following is an example of template description metadata: kind: Template apiVersion: template.openshift.io/v1 metadata: name: cakephp-mysql-example 1 annotations: openshift.io/display-name: "CakePHP MySQL Example (Ephemeral)" 2 description: >- An example CakePHP application with a MySQL database. For more information about using this template, including OpenShift considerations, see https://github.com/sclorg/cakephp-ex/blob/master/README.md. WARNING: Any data stored will be lost upon pod destruction. Only use this template for testing." 3 openshift.io/long-description: >- This template defines resources needed to develop a CakePHP application, including a build configuration, application DeploymentConfig, and database DeploymentConfig. The database is stored in non-persistent storage, so this configuration should be used for experimental purposes only. 4 tags: "quickstart,php,cakephp" 5 iconClass: icon-php 6 openshift.io/provider-display-name: "Red Hat, Inc." 7 openshift.io/documentation-url: "https://github.com/sclorg/cakephp-ex" 8 openshift.io/support-url: "https://access.redhat.com" 9 message: "Your admin credentials are USD{ADMIN_USERNAME}:USD{ADMIN_PASSWORD}" 10 1 The unique name of the template. 2 A brief, user-friendly name, which can be employed by user interfaces. 3 A description of the template. 
Include enough detail that users understand what is being deployed and any caveats they must know before deploying. It should also provide links to additional information, such as a README file. Newlines can be included to create paragraphs. 4 Additional template description. This may be displayed by the service catalog, for example. 5 Tags to be associated with the template for searching and grouping. Add tags that include it into one of the provided catalog categories. Refer to the id and categoryAliases in CATALOG_CATEGORIES in the console constants file. The categories can also be customized for the whole cluster. 6 An icon to be displayed with your template in the web console. Example 3.1. Available icons icon-3scale icon-aerogear icon-amq icon-angularjs icon-ansible icon-apache icon-beaker icon-camel icon-capedwarf icon-cassandra icon-catalog-icon icon-clojure icon-codeigniter icon-cordova icon-datagrid icon-datavirt icon-debian icon-decisionserver icon-django icon-dotnet icon-drupal icon-eap icon-elastic icon-erlang icon-fedora icon-freebsd icon-git icon-github icon-gitlab icon-glassfish icon-go-gopher icon-golang icon-grails icon-hadoop icon-haproxy icon-helm icon-infinispan icon-jboss icon-jenkins icon-jetty icon-joomla icon-jruby icon-js icon-knative icon-kubevirt icon-laravel icon-load-balancer icon-mariadb icon-mediawiki icon-memcached icon-mongodb icon-mssql icon-mysql-database icon-nginx icon-nodejs icon-openjdk icon-openliberty icon-openshift icon-openstack icon-other-linux icon-other-unknown icon-perl icon-phalcon icon-php icon-play iconpostgresql icon-processserver icon-python icon-quarkus icon-rabbitmq icon-rails icon-redhat icon-redis icon-rh-integration icon-rh-spring-boot icon-rh-tomcat icon-ruby icon-scala icon-serverlessfx icon-shadowman icon-spring-boot icon-spring icon-sso icon-stackoverflow icon-suse icon-symfony icon-tomcat icon-ubuntu icon-vertx icon-wildfly icon-windows icon-wordpress icon-xamarin icon-zend 7 The name of the person or organization providing the template. 8 A URL referencing further documentation for the template. 9 A URL where support can be obtained for the template. 10 An instructional message that is displayed when this template is instantiated. This field should inform the user how to use the newly created resources. Parameter substitution is performed on the message before being displayed so that generated credentials and other parameters can be included in the output. Include links to any -steps documentation that users should follow. 3.1.7.2. Writing template labels Templates can include a set of labels. These labels are added to each object created when the template is instantiated. Defining a label in this way makes it easy for users to find and manage all the objects created from a particular template. The following is an example of template object labels: kind: "Template" apiVersion: "v1" ... labels: template: "cakephp-mysql-example" 1 app: "USD{NAME}" 2 1 A label that is applied to all objects created from this template. 2 A parameterized label that is also applied to all objects created from this template. Parameter expansion is carried out on both label keys and values. 3.1.7.3. Writing template parameters Parameters allow a value to be supplied by you or generated when the template is instantiated. Then, that value is substituted wherever the parameter is referenced. References can be defined in any field in the objects list field. 
This is useful for generating random passwords or allowing you to supply a hostname or other user-specific value that is required to customize the template. Parameters can be referenced in two ways: As a string value by placing values in the form USD{PARAMETER_NAME} in any string field in the template. As a JSON or YAML value by placing values in the form USD{{PARAMETER_NAME}} in place of any field in the template. When using the USD{PARAMETER_NAME} syntax, multiple parameter references can be combined in a single field and the reference can be embedded within fixed data, such as "http://USD{PARAMETER_1}USD{PARAMETER_2}" . Both parameter values are substituted and the resulting value is a quoted string. When using the USD{{PARAMETER_NAME}} syntax only a single parameter reference is allowed and leading and trailing characters are not permitted. The resulting value is unquoted unless, after substitution is performed, the result is not a valid JSON object. If the result is not a valid JSON value, the resulting value is quoted and treated as a standard string. A single parameter can be referenced multiple times within a template and it can be referenced using both substitution syntaxes within a single template. A default value can be provided, which is used if you do not supply a different value: The following is an example of setting an explicit value as the default value: parameters: - name: USERNAME description: "The user name for Joe" value: joe Parameter values can also be generated based on rules specified in the parameter definition, for example generating a parameter value: parameters: - name: PASSWORD description: "The random user password" generate: expression from: "[a-zA-Z0-9]{12}" In the example, processing generates a random password 12 characters long consisting of all upper and lowercase alphabet letters and numbers. The syntax available is not a full regular expression syntax. However, you can use \w , \d , \a , and \A modifiers: [\w]{10} produces 10 alphabet characters, numbers, and underscores. This follows the PCRE standard and is equal to [a-zA-Z0-9_]{10} . [\d]{10} produces 10 numbers. This is equal to [0-9]{10} . [\a]{10} produces 10 alphabetical characters. This is equal to [a-zA-Z]{10} . [\A]{10} produces 10 punctuation or symbol characters. This is equal to [~!@#USD%\^&*()\-_+={}\[\]\\|<,>.?/"';:`]{10} . Note Depending on if the template is written in YAML or JSON, and the type of string that the modifier is embedded within, you might need to escape the backslash with a second backslash. 
The following examples are equivalent: Example YAML template with a modifier parameters: - name: singlequoted_example generate: expression from: '[\A]{10}' - name: doublequoted_example generate: expression from: "[\\A]{10}" Example JSON template with a modifier { "parameters": [ { "name": "json_example", "generate": "expression", "from": "[\\A]{10}" } ] } Here is an example of a full template with parameter definitions and references: kind: Template apiVersion: template.openshift.io/v1 metadata: name: my-template objects: - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: cakephp-mysql-example annotations: description: Defines how to build the application spec: source: type: Git git: uri: "USD{SOURCE_REPOSITORY_URL}" 1 ref: "USD{SOURCE_REPOSITORY_REF}" contextDir: "USD{CONTEXT_DIR}" - kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: frontend spec: replicas: "USD{{REPLICA_COUNT}}" 2 parameters: - name: SOURCE_REPOSITORY_URL 3 displayName: Source Repository URL 4 description: The URL of the repository with your application source code 5 value: https://github.com/sclorg/cakephp-ex.git 6 required: true 7 - name: GITHUB_WEBHOOK_SECRET description: A secret string used to configure the GitHub webhook generate: expression 8 from: "[a-zA-Z0-9]{40}" 9 - name: REPLICA_COUNT description: Number of replicas to run value: "2" required: true message: "... The GitHub webhook secret is USD{GITHUB_WEBHOOK_SECRET} ..." 10 1 This value is replaced with the value of the SOURCE_REPOSITORY_URL parameter when the template is instantiated. 2 This value is replaced with the unquoted value of the REPLICA_COUNT parameter when the template is instantiated. 3 The name of the parameter. This value is used to reference the parameter within the template. 4 The user-friendly name for the parameter. This is displayed to users. 5 A description of the parameter. Provide more detailed information for the purpose of the parameter, including any constraints on the expected value. Descriptions should use complete sentences to follow the console's text standards. Do not make this a duplicate of the display name. 6 A default value for the parameter which is used if you do not override the value when instantiating the template. Avoid using default values for things like passwords, instead use generated parameters in combination with secrets. 7 Indicates this parameter is required, meaning you cannot override it with an empty value. If the parameter does not provide a default or generated value, you must supply a value. 8 A parameter which has its value generated. 9 The input to the generator. In this case, the generator produces a 40 character alphanumeric value including upper and lowercase characters. 10 Parameters can be included in the template message. This informs you about generated values. 3.1.7.4. Writing the template object list The main portion of the template is the list of objects which is created when the template is instantiated. This can be any valid API object, such as a build configuration, deployment configuration, or service. The object is created exactly as defined here, with any parameter values substituted in prior to creation. The definition of these objects can reference parameters defined earlier. 
The following is an example of an object list: kind: "Template" apiVersion: "v1" metadata: name: my-template objects: - kind: "Service" 1 apiVersion: "v1" metadata: name: "cakephp-mysql-example" annotations: description: "Exposes and load balances the application pods" spec: ports: - name: "web" port: 8080 targetPort: 8080 selector: name: "cakephp-mysql-example" 1 The definition of a service, which is created by this template. Note If an object definition metadata includes a fixed namespace field value, the field is stripped out of the definition during template instantiation. If the namespace field contains a parameter reference, normal parameter substitution is performed and the object is created in whatever namespace the parameter substitution resolved the value to, assuming the user has permission to create objects in that namespace. 3.1.7.5. Marking a template as bindable The Template Service Broker advertises one service in its catalog for each template object of which it is aware. By default, each of these services is advertised as being bindable, meaning an end user is permitted to bind against the provisioned service. Procedure Template authors can prevent end users from binding against services provisioned from a given template. Prevent end user from binding against services provisioned from a given template by adding the annotation template.openshift.io/bindable: "false" to the template. 3.1.7.6. Exposing template object fields Template authors can indicate that fields of particular objects in a template should be exposed. The Template Service Broker recognizes exposed fields on ConfigMap , Secret , Service , and Route objects, and returns the values of the exposed fields when a user binds a service backed by the broker. To expose one or more fields of an object, add annotations prefixed by template.openshift.io/expose- or template.openshift.io/base64-expose- to the object in the template. Each annotation key, with its prefix removed, is passed through to become a key in a bind response. Each annotation value is a Kubernetes JSONPath expression, which is resolved at bind time to indicate the object field whose value should be returned in the bind response. Note Bind response key-value pairs can be used in other parts of the system as environment variables. Therefore, it is recommended that every annotation key with its prefix removed should be a valid environment variable name - beginning with a character A-Z , a-z , or _ , and being followed by zero or more characters A-Z , a-z , 0-9 , or _ . Note Unless escaped with a backslash, Kubernetes' JSONPath implementation interprets characters such as . , @ , and others as metacharacters, regardless of their position in the expression. Therefore, for example, to refer to a ConfigMap datum named my.key , the required JSONPath expression would be {.data['my\.key']} . Depending on how the JSONPath expression is then written in YAML, an additional backslash might be required, for example "{.data['my\\.key']}" . 
The following is an example of different objects' fields being exposed: kind: Template apiVersion: template.openshift.io/v1 metadata: name: my-template objects: - kind: ConfigMap apiVersion: v1 metadata: name: my-template-config annotations: template.openshift.io/expose-username: "{.data['my\\.username']}" data: my.username: foo - kind: Secret apiVersion: v1 metadata: name: my-template-config-secret annotations: template.openshift.io/base64-expose-password: "{.data['password']}" stringData: password: <password> - kind: Service apiVersion: v1 metadata: name: my-template-service annotations: template.openshift.io/expose-service_ip_port: "{.spec.clusterIP}:{.spec.ports[?(.name==\"web\")].port}" spec: ports: - name: "web" port: 8080 - kind: Route apiVersion: route.openshift.io/v1 metadata: name: my-template-route annotations: template.openshift.io/expose-uri: "http://{.spec.host}{.spec.path}" spec: path: mypath An example response to a bind operation given the above partial template follows: { "credentials": { "username": "foo", "password": "YmFy", "service_ip_port": "172.30.12.34:8080", "uri": "http://route-test.router.default.svc.cluster.local/mypath" } } Procedure Use the template.openshift.io/expose- annotation to return the field value as a string. This is convenient, although it does not handle arbitrary binary data. If you want to return binary data, use the template.openshift.io/base64-expose- annotation instead to base64 encode the data before it is returned. 3.1.7.7. Waiting for template readiness Template authors can indicate that certain objects within a template should be waited for before a template instantiation by the service catalog, Template Service Broker, or TemplateInstance API is considered complete. To use this feature, mark one or more objects of kind Build , BuildConfig , Deployment , DeploymentConfig , Job , or StatefulSet in a template with the following annotation: "template.alpha.openshift.io/wait-for-ready": "true" Template instantiation is not complete until all objects marked with the annotation report ready. Similarly, if any of the annotated objects report failed, or if the template fails to become ready within a fixed timeout of one hour, the template instantiation fails. For the purposes of instantiation, readiness and failure of each object kind are defined as follows: Kind Readiness Failure Build Object reports phase complete. Object reports phase canceled, error, or failed. BuildConfig Latest associated build object reports phase complete. Latest associated build object reports phase canceled, error, or failed. Deployment Object reports new replica set and deployment available. This honors readiness probes defined on the object. Object reports progressing condition as false. DeploymentConfig Object reports new replication controller and deployment available. This honors readiness probes defined on the object. Object reports progressing condition as false. Job Object reports completion. Object reports that one or more failures have occurred. StatefulSet Object reports all replicas ready. This honors readiness probes defined on the object. Not applicable. The following is an example template extract, which uses the wait-for-ready annotation. Further examples can be found in the OpenShift Container Platform quick start templates. kind: Template apiVersion: template.openshift.io/v1 metadata: name: my-template objects: - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: ... 
annotations: # wait-for-ready used on BuildConfig ensures that template instantiation # will fail immediately if build fails template.alpha.openshift.io/wait-for-ready: "true" spec: ... - kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: ... annotations: template.alpha.openshift.io/wait-for-ready: "true" spec: ... - kind: Service apiVersion: v1 metadata: name: ... spec: ... Additional recommendations Set memory, CPU, and storage default sizes to make sure your application is given enough resources to run smoothly. Avoid referencing the latest tag from images if that tag is used across major versions. This can cause running applications to break when new images are pushed to that tag. A good template builds and deploys cleanly without requiring modifications after the template is deployed. 3.1.7.8. Creating a template from existing objects Rather than writing an entire template from scratch, you can export existing objects from your project in YAML form, and then modify the YAML from there by adding parameters and other customizations as template form. Procedure Export objects in a project in YAML form: USD oc get -o yaml all > <yaml_filename> You can also substitute a particular resource type or multiple resources instead of all . Run oc get -h for more examples. The object types included in oc get -o yaml all are: BuildConfig Build DeploymentConfig ImageStream Pod ReplicationController Route Service Note Using the all alias is not recommended because the contents might vary across different clusters and versions. Instead, specify all required resources. 3.2. Creating applications by using the Developer perspective The Developer perspective in the web console provides you the following options from the +Add view to create applications and associated services and deploy them on OpenShift Container Platform: Getting started resources : Use these resources to help you get started with Developer Console. You can choose to hide the header using the Options menu . Creating applications using samples : Use existing code samples to get started with creating applications on the OpenShift Container Platform. Build with guided documentation : Follow the guided documentation to build applications and familiarize yourself with key concepts and terminologies. Explore new developer features : Explore the new features and resources within the Developer perspective. Developer catalog : Explore the Developer Catalog to select the required applications, services, or source to image builders, and then add it to your project. All Services : Browse the catalog to discover services across OpenShift Container Platform. Database : Select the required database service and add it to your application. Operator Backed : Select and deploy the required Operator-managed service. Helm chart : Select the required Helm chart to simplify deployment of applications and services. Devfile : Select a devfile from the Devfile registry to declaratively define a development environment. Event Source : Select an event source to register interest in a class of events from a particular system. Note The Managed services option is also available if the RHOAS Operator is installed. Git repository : Import an existing codebase, Devfile, or Dockerfile from your Git repository using the From Git , From Devfile , or From Dockerfile options respectively, to build and deploy an application on OpenShift Container Platform. 
Container images : Use existing images from an image stream or registry to deploy it on to the OpenShift Container Platform. Pipelines : Use Tekton pipeline to create CI/CD pipelines for your software delivery process on the OpenShift Container Platform. Serverless : Explore the Serverless options to create, build, and deploy stateless and serverless applications on the OpenShift Container Platform. Channel : Create a Knative channel to create an event forwarding and persistence layer with in-memory and reliable implementations. Samples : Explore the available sample applications to create, build, and deploy an application quickly. Quick Starts : Explore the quick start options to create, import, and run applications with step-by-step instructions and tasks. From Local Machine : Explore the From Local Machine tile to import or upload files on your local machine for building and deploying applications easily. Import YAML : Upload a YAML file to create and define resources for building and deploying applications. Upload JAR file : Upload a JAR file to build and deploy Java applications. Share my Project : Use this option to add or remove users to a project and provide accessibility options to them. Helm Chart repositories : Use this option to add Helm Chart repositories in a namespace. Re-ordering of resources : Use these resources to re-order pinned resources added to your navigation pane. The drag-and-drop icon is displayed on the left side of the pinned resource when you hover over it in the navigation pane. The dragged resource can be dropped only in the section where it resides. Note that certain options, such as Pipelines , Event Source , and Import Virtual Machines , are displayed only when the OpenShift Pipelines Operator , OpenShift Serverless Operator , and OpenShift Virtualization Operator are installed, respectively. 3.2.1. Prerequisites To create applications using the Developer perspective ensure that: You have logged in to the web console . You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. To create serverless applications, in addition to the preceding prerequisites, ensure that: You have installed the OpenShift Serverless Operator . You have created a KnativeServing resource in the knative-serving namespace . 3.2.2. Creating sample applications You can use the sample applications in the +Add flow of the Developer perspective to create, build, and deploy applications quickly. Prerequisites You have logged in to the OpenShift Container Platform web console and are in the Developer perspective. Procedure In the +Add view, click the Samples tile to see the Samples page. On the Samples page, select one of the available sample applications to see the Create Sample Application form. In the Create Sample Application Form : In the Name field, the deployment name is displayed by default. You can modify this name as required. In the Builder Image Version , a builder image is selected by default. You can modify this image version by using the Builder Image Version drop-down list. A sample Git repository URL is added by default. Click Create to create the sample application. The build status of the sample application is displayed on the Topology view. After the sample application is created, you can see the deployment added to the application. 3.2.3. 
Creating applications by using Quick Starts The Quick Starts page shows you how to create, import, and run applications on OpenShift Container Platform, with step-by-step instructions and tasks. Prerequisites You have logged in to the OpenShift Container Platform web console and are in the Developer perspective. Procedure In the +Add view, click the Getting Started resources Build with guided documentation View all quick starts link to view the Quick Starts page. In the Quick Starts page, click the tile for the quick start that you want to use. Click Start to begin the quick start. Perform the steps that are displayed. 3.2.4. Importing a codebase from Git to create an application You can use the Developer perspective to create, build, and deploy an application on OpenShift Container Platform using an existing codebase in GitHub. The following procedure walks you through the From Git option in the Developer perspective to create an application. Procedure In the +Add view, click From Git in the Git Repository tile to see the Import from git form. In the Git section, enter the Git repository URL for the codebase you want to use to create an application. For example, enter the URL of this sample Node.js application https://github.com/sclorg/nodejs-ex . The URL is then validated. Optional: You can click Show Advanced Git Options to add details such as: Git Reference to point to code in a specific branch, tag, or commit to be used to build the application. Context Dir to specify the subdirectory for the application source code you want to use to build the application. Source Secret to create a Secret Name with credentials for pulling your source code from a private repository. Optional: You can import a Devfile , a Dockerfile , Builder Image , or a Serverless Function through your Git repository to further customize your deployment. If your Git repository contains a Devfile , a Dockerfile , a Builder Image , or a func.yaml , it is automatically detected and populated on the respective path fields. If a Devfile , a Dockerfile , or a Builder Image are detected in the same repository, the Devfile is selected by default. If func.yaml is detected in the Git repository, the Import Strategy changes to Serverless Function . Alternatively, you can create a serverless function by clicking Create Serverless function in the +Add view using the Git repository URL. To edit the file import type and select a different strategy, click Edit import strategy option. If multiple Devfiles , a Dockerfiles , or a Builder Images are detected, to import a specific instance, specify the respective paths relative to the context directory. After the Git URL is validated, the recommended builder image is selected and marked with a star. If the builder image is not auto-detected, select a builder image. For the https://github.com/sclorg/nodejs-ex Git URL, by default the Node.js builder image is selected. Optional: Use the Builder Image Version drop-down to specify a version. Optional: Use the Edit import strategy to select a different strategy. Optional: For the Node.js builder image, use the Run command field to override the command to run the application. In the General section: In the Application field, enter a unique name for the application grouping, for example, myapp . Ensure that the application name is unique in a namespace. The Name field to identify the resources created for this application is automatically populated based on the Git repository URL if there are no existing applications. 
If there are existing applications, you can choose to deploy the component within an existing application, create a new application, or keep the component unassigned. Note The resource name must be unique in a namespace. Modify the resource name if you get an error. In the Resources section, select: Deployment , to create an application in plain Kubernetes style. Deployment Config , to create an OpenShift Container Platform style application. Serverless Deployment , to create a Knative service. Note To set the default resource preference for importing an application, go to User Preferences Applications Resource type field. The Serverless Deployment option is displayed in the Import from Git form only if the OpenShift Serverless Operator is installed in your cluster. The Resources section is not available while creating a serverless function. For further details, refer to the OpenShift Serverless documentation. In the Pipelines section, select Add Pipeline , and then click Show Pipeline Visualization to see the pipeline for the application. A default pipeline is selected, but you can choose the pipeline you want from the list of available pipelines for the application. Note The Add pipeline checkbox is checked and Configure PAC is selected by default if the following criterias are fulfilled: Pipeline operator is installed pipelines-as-code is enabled .tekton directory is detected in the Git repository Add a webhook to your repository. If Configure PAC is checked and the GitHub App is set up, you can see the Use GitHub App and Setup a webhook options. If GitHub App is not set up, you can only see the Setup a webhook option: Go to Settings Webhooks and click Add webhook . Set the Payload URL to the Pipelines as Code controller public URL. Select the content type as application/json . Add a webhook secret and note it in an alternate location. With openssl installed on your local machine, generate a random secret. Click Let me select individual events and select these events: Commit comments , Issue comments , Pull request , and Pushes . Click Add webhook . Optional: In the Advanced Options section, the Target port and the Create a route to the application is selected by default so that you can access your application using a publicly available URL. If your application does not expose its data on the default public port, 80, clear the check box, and set the target port number you want to expose. Optional: You can use the following advanced options to further customize your application: Routing By clicking the Routing link, you can perform the following actions: Customize the hostname for the route. Specify the path the router watches. Select the target port for the traffic from the drop-down list. Secure your route by selecting the Secure Route check box. Select the required TLS termination type and set a policy for insecure traffic from the respective drop-down lists. Note For serverless applications, the Knative service manages all the routing options above. However, you can customize the target port for traffic, if required. If the target port is not specified, the default port of 8080 is used. Domain mapping If you are creating a Serverless Deployment , you can add a custom domain mapping to the Knative service during creation. In the Advanced options section, click Show advanced Routing options . If the domain mapping CR that you want to map to the service already exists, you can select it from the Domain mapping drop-down menu. 
If you want to create a new domain mapping CR, type the domain name into the box, and select the Create option. For example, if you type in example.com , the Create option is Create "example.com" . Health Checks Click the Health Checks link to add Readiness, Liveness, and Startup probes to your application. All the probes have prepopulated default data; you can add the probes with the default data or customize it as required. To customize the health probes: Click Add Readiness Probe , if required, modify the parameters to check if the container is ready to handle requests, and select the check mark to add the probe. Click Add Liveness Probe , if required, modify the parameters to check if a container is still running, and select the check mark to add the probe. Click Add Startup Probe , if required, modify the parameters to check if the application within the container has started, and select the check mark to add the probe. For each of the probes, you can specify the request type - HTTP GET , Container Command , or TCP Socket , from the drop-down list. The form changes as per the selected request type. You can then modify the default values for the other parameters, such as the success and failure thresholds for the probe, number of seconds before performing the first probe after the container starts, frequency of the probe, and the timeout value. Build Configuration and Deployment Click the Build Configuration and Deployment links to see the respective configuration options. Some options are selected by default; you can customize them further by adding the necessary triggers and environment variables. For serverless applications, the Deployment option is not displayed as the Knative configuration resource maintains the desired state for your deployment instead of a DeploymentConfig resource. Scaling Click the Scaling link to define the number of pods or instances of the application you want to deploy initially. If you are creating a serverless deployment, you can also configure the following settings: Min Pods determines the lower limit for the number of pods that must be running at any given time for a Knative service. This is also known as the minScale setting. Max Pods determines the upper limit for the number of pods that can be running at any given time for a Knative service. This is also known as the maxScale setting. Concurrency target determines the number of concurrent requests desired for each instance of the application at a given time. Concurrency limit determines the limit for the number of concurrent requests allowed for each instance of the application at a given time. Concurrency utilization determines the percentage of the concurrent requests limit that must be met before Knative scales up additional pods to handle additional traffic. Autoscale window defines the time window over which metrics are averaged to provide input for scaling decisions when the autoscaler is not in panic mode. A service is scaled-to-zero if no requests are received during this window. The default duration for the autoscale window is 60s . This is also known as the stable window. Resource Limit Click the Resource Limit link to set the amount of CPU and Memory resources a container is guaranteed or allowed to use when running. Labels Click the Labels link to add custom labels to your application. Click Create to create the application and a success notification is displayed. You can see the build status of the application in the Topology view. 3.2.5. 
Creating applications by deploying container image You can use an external image registry or an image stream tag from an internal registry to deploy an application on your cluster. Prerequisites You have logged in to the OpenShift Container Platform web console and are in the Developer perspective. Procedure In the +Add view, click Container images to view the Deploy Images page. In the Image section: Select Image name from external registry to deploy an image from a public or a private registry, or select Image stream tag from internal registry to deploy an image from an internal registry. Select an icon for your image in the Runtime icon tab. In the General section: In the Application name field, enter a unique name for the application grouping. In the Name field, enter a unique name to identify the resources created for this component. In the Resource type section, select the resource type to generate: Select Deployment to enable declarative updates for Pod and ReplicaSet objects. Select DeploymentConfig to define the template for a Pod object, and manage deploying new images and configuration sources. Select Serverless Deployment to enable scaling to zero when idle. Click Create . You can view the build status of the application in the Topology view. 3.2.6. Deploying a Java application by uploading a JAR file You can use the web console Developer perspective to upload a JAR file by using the following options: Navigate to the +Add view of the Developer perspective, and click Upload JAR file in the From Local Machine tile. Browse and select your JAR file, or drag a JAR file to deploy your application. Navigate to the Topology view and use the Upload JAR file option, or drag a JAR file to deploy your application. Use the in-context menu in the Topology view, and then use the Upload JAR file option to upload your JAR file to deploy your application. Prerequisites The Cluster Samples Operator must be installed by a cluster administrator. You have access to the OpenShift Container Platform web console and are in the Developer perspective. Procedure In the Topology view, right-click anywhere to view the Add to Project menu. Hover over the Add to Project menu to see the menu options, and then select the Upload JAR file option to see the Upload JAR file form. Alternatively, you can drag the JAR file into the Topology view. In the JAR file field, browse for the required JAR file on your local machine and upload it. Alternatively, you can drag the JAR file on to the field. A toast alert is displayed at the top right if an incompatible file type is dragged into the Topology view. A field error is displayed if an incompatible file type is dropped on the field in the upload form. The runtime icon and builder image are selected by default. If a builder image is not auto-detected, select a builder image. If required, you can change the version using the Builder Image Version drop-down list. Optional: In the Application Name field, enter a unique name for your application to use for resource labelling. In the Name field, enter a unique component name for the associated resources. Optional: Use the Resource type drop-down list to change the resource type. In the Advanced options menu, click Create a Route to the Application to configure a public URL for your deployed application. Click Create to deploy the application. A toast notification is shown to notify you that the JAR file is being uploaded. The toast notification also includes a link to view the build logs. 
Note If you attempt to close the browser tab while the build is running, a web alert is displayed. After the JAR file is uploaded and the application is deployed, you can view the application in the Topology view. 3.2.7. Using the Devfile registry to access devfiles You can use the devfiles in the +Add flow of the Developer perspective to create an application. The +Add flow provides a complete integration with the devfile community registry . A devfile is a portable YAML file that describes your development environment without needing to configure it from scratch. Using the Devfile registry , you can use a preconfigured devfile to create an application. Procedure Navigate to Developer Perspective +Add Developer Catalog All Services . A list of all the available services in the Developer Catalog is displayed. Under Type , click Devfiles to browse for devfiles that support a particular language or framework. Alternatively, you can use the keyword filter to search for a particular devfile using their name, tag, or description. Click the devfile you want to use to create an application. The devfile tile displays the details of the devfile, including the name, description, provider, and the documentation of the devfile. Click Create to create an application and view the application in the Topology view. 3.2.8. Using the Developer Catalog to add services or components to your application You use the Developer Catalog to deploy applications and services based on Operator backed services such as Databases, Builder Images, and Helm Charts. The Developer Catalog contains a collection of application components, services, event sources, or source-to-image builders that you can add to your project. Cluster administrators can customize the content made available in the catalog. Procedure In the Developer perspective, navigate to the +Add view and from the Developer Catalog tile, click All Services to view all the available services in the Developer Catalog . Under All Services , select the kind of service or the component you need to add to your project. For this example, select Databases to list all the database services and then click MariaDB to see the details for the service. Click Instantiate Template to see an automatically populated template with details for the MariaDB service, and then click Create to create and view the MariaDB service in the Topology view. Figure 3.1. MariaDB in Topology 3.2.9. Additional resources For more information about Knative routing settings for OpenShift Serverless, see Routing . For more information about domain mapping settings for OpenShift Serverless, see Configuring a custom domain for a Knative service . For more information about Knative autoscaling settings for OpenShift Serverless, see Autoscaling . For more information about adding a new user to a project, see Working with projects . For more information about creating a Helm Chart repository, see Creating Helm Chart repositories . 3.3. Creating applications from installed Operators Operators are a method of packaging, deploying, and managing a Kubernetes application. You can create applications on OpenShift Container Platform using Operators that have been installed by a cluster administrator. This guide walks developers through an example of creating applications from an installed Operator using the OpenShift Container Platform web console. Additional resources See the Operators guide for more on how Operators work and how the Operator Lifecycle Manager is integrated in OpenShift Container Platform. 
3.3.1. Creating an etcd cluster using an Operator This procedure walks through creating a new etcd cluster using the etcd Operator, managed by Operator Lifecycle Manager (OLM). Prerequisites Access to an OpenShift Container Platform 4.15 cluster. The etcd Operator already installed cluster-wide by an administrator. Procedure Create a new project in the OpenShift Container Platform web console for this procedure. This example uses a project called my-etcd . Navigate to the Operators Installed Operators page. The Operators that have been installed to the cluster by the cluster administrator and are available for use are shown here as a list of cluster service versions (CSVs). CSVs are used to launch and manage the software provided by the Operator. Tip You can get this list from the CLI using: USD oc get csv On the Installed Operators page, click the etcd Operator to view more details and available actions. As shown under Provided APIs , this Operator makes available three new resource types, including one for an etcd Cluster (the EtcdCluster resource). These objects work similar to the built-in native Kubernetes ones, such as Deployment or ReplicaSet , but contain logic specific to managing etcd. Create a new etcd cluster: In the etcd Cluster API box, click Create instance . The page allows you to make any modifications to the minimal starting template of an EtcdCluster object, such as the size of the cluster. For now, click Create to finalize. This triggers the Operator to start up the pods, services, and other components of the new etcd cluster. Click the example etcd cluster, then click the Resources tab to see that your project now contains a number of resources created and configured automatically by the Operator. Verify that a Kubernetes service has been created that allows you to access the database from other pods in your project. All users with the edit role in a given project can create, manage, and delete application instances (an etcd cluster, in this example) managed by Operators that have already been created in the project, in a self-service manner, just like a cloud service. If you want to enable additional users with this ability, project administrators can add the role using the following command: USD oc policy add-role-to-user edit <user> -n <target_project> You now have an etcd cluster that will react to failures and rebalance data as pods become unhealthy or are migrated between nodes in the cluster. Most importantly, cluster administrators or developers with proper access can now easily use the database with their applications. 3.4. Creating applications by using the CLI You can create an OpenShift Container Platform application from components that include source or binary code, images, and templates by using the OpenShift Container Platform CLI. The set of objects created by new-app depends on the artifacts passed as input: source repositories, images, or templates. 3.4.1. Creating an application from source code With the new-app command you can create applications from source code in a local or remote Git repository. The new-app command creates a build configuration, which itself creates a new application image from your source code. The new-app command typically also creates a Deployment object to deploy the new image, and a service to provide load-balanced access to the deployment running your image. 
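As a minimal sketch of what new-app produces, assuming the public sclorg sample repository referenced elsewhere in this guide, you can create an application and then review the generated objects. The label selector shown is illustrative; the exact labels that new-app applies can vary between releases, and oc status gives a version-independent summary:

$ oc new-app https://github.com/sclorg/cakephp-ex --name=myapp
$ oc status
$ oc get buildconfig,imagestream,deployment,service -l app=myapp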
OpenShift Container Platform automatically detects whether the pipeline, source, or docker build strategy should be used, and in the case of source build, detects an appropriate language builder image. 3.4.1.1. Local To create an application from a Git repository in a local directory: USD oc new-app /<path to source code> Note If you use a local Git repository, the repository must have a remote named origin that points to a URL that is accessible by the OpenShift Container Platform cluster. If there is no recognized remote, running the new-app command will create a binary build. 3.4.1.2. Remote To create an application from a remote Git repository: USD oc new-app https://github.com/sclorg/cakephp-ex To create an application from a private remote Git repository: USD oc new-app https://github.com/youruser/yourprivaterepo --source-secret=yoursecret Note If you use a private remote Git repository, you can use the --source-secret flag to specify an existing source clone secret that will get injected into your build config to access the repository. You can use a subdirectory of your source code repository by specifying a --context-dir flag. To create an application from a remote Git repository and a context subdirectory: USD oc new-app https://github.com/sclorg/s2i-ruby-container.git \ --context-dir=2.0/test/puma-test-app Also, when specifying a remote URL, you can specify a Git branch to use by appending #<branch_name> to the end of the URL: USD oc new-app https://github.com/openshift/ruby-hello-world.git#beta4 3.4.1.3. Build strategy detection OpenShift Container Platform automatically determines which build strategy to use by detecting certain files: If a Jenkins file exists in the root or specified context directory of the source repository when creating a new application, OpenShift Container Platform generates a pipeline build strategy. Note The pipeline build strategy is deprecated; consider using Red Hat OpenShift Pipelines instead. If a Dockerfile exists in the root or specified context directory of the source repository when creating a new application, OpenShift Container Platform generates a docker build strategy. If neither a Jenkins file nor a Dockerfile is detected, OpenShift Container Platform generates a source build strategy. Override the automatically detected build strategy by setting the --strategy flag to docker , pipeline , or source . USD oc new-app /home/user/code/myapp --strategy=docker Note The oc command requires that files containing build sources are available in a remote Git repository. For all source builds, you must use git remote -v . 3.4.1.4. Language detection If you use the source build strategy, new-app attempts to determine the language builder to use by the presence of certain files in the root or specified context directory of the repository: Table 3.1. Languages detected by new-app Language Files dotnet project.json , *.csproj jee pom.xml nodejs app.json , package.json perl cpanfile , index.pl php composer.json , index.php python requirements.txt , setup.py ruby Gemfile , Rakefile , config.ru scala build.sbt golang Godeps , main.go After a language is detected, new-app searches the OpenShift Container Platform server for image stream tags that have a supports annotation matching the detected language, or an image stream that matches the name of the detected language. If a match is not found, new-app searches the Docker Hub registry for an image that matches the detected language based on name. 
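To see what new-app has to match against, you can list the builder image streams in the openshift namespace and inspect the supports annotation on a stream's tags. This is a quick check, assuming the cluster samples are installed and that a nodejs image stream exists on your cluster:

$ oc get imagestreams -n openshift
$ oc get imagestream nodejs -n openshift -o yaml | grep supports

If no stream or annotation matches, new-app falls back to the Docker Hub search described above.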
You can override the image the builder uses for a particular source repository by specifying the image, either an image stream or container specification, and the repository with a ~ as a separator. Note that if this is done, build strategy detection and language detection are not carried out. For example, to use the myproject/my-ruby imagestream with the source in a remote repository: USD oc new-app myproject/my-ruby~https://github.com/openshift/ruby-hello-world.git To use the openshift/ruby-20-centos7:latest container image stream with the source in a local repository: USD oc new-app openshift/ruby-20-centos7:latest~/home/user/code/my-ruby-app Note Language detection requires the Git client to be locally installed so that your repository can be cloned and inspected. If Git is not available, you can avoid the language detection step by specifying the builder image to use with your repository with the <image>~<repository> syntax. The -i <image> <repository> invocation requires that new-app attempt to clone repository to determine what type of artifact it is, so this will fail if Git is not available. The -i <image> --code <repository> invocation requires new-app clone repository to determine whether image should be used as a builder for the source code, or deployed separately, as in the case of a database image. 3.4.2. Creating an application from an image You can deploy an application from an existing image. Images can come from image streams in the OpenShift Container Platform server, images in a specific registry, or images in the local Docker server. The new-app command attempts to determine the type of image specified in the arguments passed to it. However, you can explicitly tell new-app whether the image is a container image using the --docker-image argument or an image stream using the -i|--image-stream argument. Note If you specify an image from your local Docker repository, you must ensure that the same image is available to the OpenShift Container Platform cluster nodes. 3.4.2.1. Docker Hub MySQL image Create an application from the Docker Hub MySQL image, for example: USD oc new-app mysql 3.4.2.2. Image in a private registry Create an application using an image in a private registry, specify the full container image specification: USD oc new-app myregistry:5000/example/myimage 3.4.2.3. Existing image stream and optional image stream tag Create an application from an existing image stream and optional image stream tag: USD oc new-app my-stream:v1 3.4.3. Creating an application from a template You can create an application from a previously stored template or from a template file, by specifying the name of the template as an argument. For example, you can store a sample application template and use it to create an application. Upload an application template to your current project's template library. The following example uploads an application template from a file called examples/sample-app/application-template-stibuild.json : USD oc create -f examples/sample-app/application-template-stibuild.json Then create a new application by referencing the application template. In this example, the template name is ruby-helloworld-sample : USD oc new-app ruby-helloworld-sample To create a new application by referencing a template file in your local file system, without first storing it in OpenShift Container Platform, use the -f|--file argument. For example: USD oc new-app -f examples/sample-app/application-template-stibuild.json 3.4.3.1. 
Template parameters When creating an application based on a template, use the -p|--param argument to set parameter values that are defined by the template: USD oc new-app ruby-helloworld-sample \ -p ADMIN_USERNAME=admin -p ADMIN_PASSWORD=mypassword You can store your parameters in a file, then use that file with --param-file when instantiating a template. If you want to read the parameters from standard input, use --param-file=- . The following is an example file called helloworld.params : ADMIN_USERNAME=admin ADMIN_PASSWORD=mypassword Reference the parameters in the file when instantiating a template: USD oc new-app ruby-helloworld-sample --param-file=helloworld.params 3.4.4. Modifying application creation The new-app command generates OpenShift Container Platform objects that build, deploy, and run the application that is created. Normally, these objects are created in the current project and assigned names that are derived from the input source repositories or the input images. However, with new-app you can modify this behavior. Table 3.2. new-app output objects Object Description BuildConfig A BuildConfig object is created for each source repository that is specified in the command line. The BuildConfig object specifies the strategy to use, the source location, and the build output location. ImageStreams For the BuildConfig object, two image streams are usually created. One represents the input image. With source builds, this is the builder image. With Docker builds, this is the FROM image. The second one represents the output image. If a container image was specified as input to new-app , then an image stream is created for that image as well. DeploymentConfig A DeploymentConfig object is created either to deploy the output of a build, or a specified image. The new-app command creates emptyDir volumes for all Docker volumes that are specified in containers included in the resulting DeploymentConfig object . Service The new-app command attempts to detect exposed ports in input images. It uses the lowest numeric exposed port to generate a service that exposes that port. To expose a different port, after new-app has completed, simply use the oc expose command to generate additional services. Other Other objects can be generated when instantiating templates, according to the template. 3.4.4.1. Specifying environment variables When generating applications from a template, source, or an image, you can use the -e|--env argument to pass environment variables to the application container at run time: USD oc new-app openshift/postgresql-92-centos7 \ -e POSTGRESQL_USER=user \ -e POSTGRESQL_DATABASE=db \ -e POSTGRESQL_PASSWORD=password The variables can also be read from file using the --env-file argument. The following is an example file called postgresql.env : POSTGRESQL_USER=user POSTGRESQL_DATABASE=db POSTGRESQL_PASSWORD=password Read the variables from the file: USD oc new-app openshift/postgresql-92-centos7 --env-file=postgresql.env Additionally, environment variables can be given on standard input by using --env-file=- : USD cat postgresql.env | oc new-app openshift/postgresql-92-centos7 --env-file=- Note Any BuildConfig objects created as part of new-app processing are not updated with environment variables passed with the -e|--env or --env-file argument. 3.4.4.2. 
Specifying build environment variables When generating applications from a template, source, or an image, you can use the --build-env argument to pass environment variables to the build container at run time: USD oc new-app openshift/ruby-23-centos7 \ --build-env HTTP_PROXY=http://myproxy.net:1337/ \ --build-env GEM_HOME=~/.gem The variables can also be read from a file using the --build-env-file argument. The following is an example file called ruby.env : HTTP_PROXY=http://myproxy.net:1337/ GEM_HOME=~/.gem Read the variables from the file: USD oc new-app openshift/ruby-23-centos7 --build-env-file=ruby.env Additionally, environment variables can be given on standard input by using --build-env-file=- : USD cat ruby.env | oc new-app openshift/ruby-23-centos7 --build-env-file=- 3.4.4.3. Specifying labels When generating applications from source, images, or templates, you can use the -l|--label argument to add labels to the created objects. Labels make it easy to collectively select, configure, and delete objects associated with the application. USD oc new-app https://github.com/openshift/ruby-hello-world -l name=hello-world 3.4.4.4. Viewing the output without creation To see a dry-run of running the new-app command, you can use the -o|--output argument with a yaml or json value. You can then use the output to preview the objects that are created or redirect it to a file that you can edit. After you are satisfied, you can use oc create to create the OpenShift Container Platform objects. To output new-app artifacts to a file, run the following: USD oc new-app https://github.com/openshift/ruby-hello-world \ -o yaml > myapp.yaml Edit the file: USD vi myapp.yaml Create a new application by referencing the file: USD oc create -f myapp.yaml 3.4.4.5. Creating objects with different names Objects created by new-app are normally named after the source repository, or the image used to generate them. You can set the name of the objects produced by adding a --name flag to the command: USD oc new-app https://github.com/openshift/ruby-hello-world --name=myapp 3.4.4.6. Creating objects in a different project Normally, new-app creates objects in the current project. However, you can create objects in a different project by using the -n|--namespace argument: USD oc new-app https://github.com/openshift/ruby-hello-world -n myproject 3.4.4.7. Creating multiple objects The new-app command allows creating multiple applications specifying multiple parameters to new-app . Labels specified in the command line apply to all objects created by the single command. Environment variables apply to all components created from source or images. To create an application from a source repository and a Docker Hub image: USD oc new-app https://github.com/openshift/ruby-hello-world mysql Note If a source code repository and a builder image are specified as separate arguments, new-app uses the builder image as the builder for the source code repository. If this is not the intent, specify the required builder image for the source using the ~ separator. 3.4.4.8. Grouping images and source in a single pod The new-app command allows deploying multiple images together in a single pod. To specify which images to group together, use the + separator. The --group command line argument can also be used to specify the images that should be grouped together. 
To group the image built from a source repository with other images, specify its builder image in the group: USD oc new-app ruby+mysql To deploy an image built from source and an external image together: USD oc new-app \ ruby~https://github.com/openshift/ruby-hello-world \ mysql \ --group=ruby+mysql 3.4.4.9. Searching for images, templates, and other inputs To search for images, templates, and other inputs for the oc new-app command, add the --search and --list flags. For example, to find all of the images or templates that include PHP: USD oc new-app --search php 3.4.4.10. Setting the import mode To set the import mode when using oc new-app , add the --import-mode flag. This flag can be appended with Legacy or PreserveOriginal , which provides users the option to create image streams using a single sub-manifest, or all manifests, respectively. USD oc new-app --image=registry.redhat.io/ubi8/httpd-24:latest --import-mode=Legacy --name=test USD oc new-app --image=registry.redhat.io/ubi8/httpd-24:latest --import-mode=PreserveOriginal --name=test 3.5. Creating applications using Ruby on Rails Ruby on Rails is a web framework written in Ruby. This guide covers using Rails 4 on OpenShift Container Platform. Warning Go through the whole tutorial to have an overview of all the steps necessary to run your application on the OpenShift Container Platform. If you experience a problem try reading through the entire tutorial and then going back to your issue. It can also be useful to review your steps to ensure that all the steps were run correctly. 3.5.1. Prerequisites Basic Ruby and Rails knowledge. Locally installed version of Ruby 2.0.0+, Rubygems, Bundler. Basic Git knowledge. Running instance of OpenShift Container Platform 4. Make sure that an instance of OpenShift Container Platform is running and is available. Also make sure that your oc CLI client is installed and the command is accessible from your command shell, so you can use it to log in using your email address and password. 3.5.2. Setting up the database Rails applications are almost always used with a database. For local development use the PostgreSQL database. Procedure Install the database: USD sudo yum install -y postgresql postgresql-server postgresql-devel Initialize the database: USD sudo postgresql-setup initdb This command creates the /var/lib/pgsql/data directory, in which the data is stored. Start the database: USD sudo systemctl start postgresql.service When the database is running, create your rails user: USD sudo -u postgres createuser -s rails Note that the user created has no password. 3.5.3. Writing your application If you are starting your Rails application from scratch, you must install the Rails gem first. Then you can proceed with writing your application. Procedure Install the Rails gem: USD gem install rails Example output Successfully installed rails-4.3.0 1 gem installed After you install the Rails gem, create a new application with PostgreSQL as your database: USD rails new rails-app --database=postgresql Change into your new application directory: USD cd rails-app If you already have an application, make sure the pg (postgresql) gem is present in your Gemfile . If not, edit your Gemfile by adding the gem: gem 'pg' Generate a new Gemfile.lock with all your dependencies: USD bundle install In addition to using the postgresql database with the pg gem, you also must ensure that the config/database.yml is using the postgresql adapter. 
Make sure you update the default section in the config/database.yml file so that it looks like this: default: &default adapter: postgresql encoding: unicode pool: 5 host: localhost username: rails password: <password> Create your application's development and test databases: USD rake db:create This creates the development and test databases in your PostgreSQL server. 3.5.3.1. Creating a welcome page Since Rails 4 no longer serves a static public/index.html page in production, you must create a new root page. To have a custom welcome page, you must complete the following steps: Create a controller with an index action. Create a view page for the welcome controller index action. Create a route that serves the application's root page with the created controller and view. Rails offers a generator that completes all necessary steps for you. Procedure Run the Rails generator: USD rails generate controller welcome index All the necessary files are created. Edit line 2 in the config/routes.rb file so that the root route points to the welcome controller's index action: root 'welcome#index' Run the rails server to verify the page is available: USD rails server You should see your page by visiting http://localhost:3000 in your browser. If you do not see the page, check the logs that are output to your server to debug. 3.5.3.2. Configuring the application for OpenShift Container Platform To have your application communicate with the PostgreSQL database service running in OpenShift Container Platform, you must edit the default section in your config/database.yml to use environment variables, which you must define later, upon the database service creation. Procedure Edit the default section in your config/database.yml with pre-defined variables as follows: Sample config/database YAML file <% user = ENV.key?("POSTGRESQL_ADMIN_PASSWORD") ? "root" : ENV["POSTGRESQL_USER"] %> <% password = ENV.key?("POSTGRESQL_ADMIN_PASSWORD") ? ENV["POSTGRESQL_ADMIN_PASSWORD"] : ENV["POSTGRESQL_PASSWORD"] %> <% db_service = ENV.fetch("DATABASE_SERVICE_NAME","").upcase %> default: &default adapter: postgresql encoding: unicode # For details on connection pooling, see rails configuration guide # http://guides.rubyonrails.org/configuring.html#database-pooling pool: <%= ENV["POSTGRESQL_MAX_CONNECTIONS"] || 5 %> username: <%= user %> password: <%= password %> host: <%= ENV["#{db_service}_SERVICE_HOST"] %> port: <%= ENV["#{db_service}_SERVICE_PORT"] %> database: <%= ENV["POSTGRESQL_DATABASE"] %> 3.5.3.3. Storing your application in Git Building an application in OpenShift Container Platform usually requires that the source code be stored in a git repository, so you must install git if you do not already have it. Prerequisites Install git. Procedure Make sure you are in your Rails application directory by running the ls -1 command. The output of the command should look like this: USD ls -1 Example output app bin config config.ru db Gemfile Gemfile.lock lib log public Rakefile README.rdoc test tmp vendor Run the following commands in your Rails app directory to initialize and commit your code to git: USD git init USD git add . USD git commit -m "initial commit" After your application is committed, you must push it to a remote repository. For this you need a GitHub account in which to create a new repository. Set the remote that points to your git repository: USD git remote add origin git@github.com:<namespace/repository-name>.git Push your application to your remote git repository: USD git push 3.5.4. Deploying your application to OpenShift Container Platform You can deploy your application to OpenShift Container Platform.
Deploying your application in OpenShift Container Platform involves three steps: Creating a database service from OpenShift Container Platform's PostgreSQL image. Creating a frontend service from OpenShift Container Platform's Ruby 2.0 builder image and your Ruby on Rails source code, which are wired with the database service. Creating a route for your application. Procedure To deploy your Ruby on Rails application, create a new project for the application: USD oc new-project rails-app --description="My Rails application" --display-name="Rails Application" After creating the rails-app project, you are automatically switched to the new project namespace. 3.5.4.1. Creating the database service Your Rails application expects a running database service. For this service, use the PostgreSQL database image. To create the database service, use the oc new-app command. To this command, you must pass some necessary environment variables, which are used inside the database container. These environment variables are required to set the username, password, and name of the database. You can change the values of these environment variables to anything you would like. The variables are as follows: POSTGRESQL_DATABASE POSTGRESQL_USER POSTGRESQL_PASSWORD Setting these variables ensures: A database exists with the specified name. A user exists with the specified name. The user can access the specified database with the specified password. Procedure Create the database service: USD oc new-app postgresql -e POSTGRESQL_DATABASE=db_name -e POSTGRESQL_USER=username -e POSTGRESQL_PASSWORD=password To also set the password for the database administrator, append the following to the command: -e POSTGRESQL_ADMIN_PASSWORD=admin_pw Watch the progress: USD oc get pods --watch 3.5.4.2. Creating the frontend service To bring your application to OpenShift Container Platform, you must specify a repository in which your application lives. Procedure Create the frontend service and specify the database-related environment variables that were set up when creating the database service: USD oc new-app path/to/source/code --name=rails-app -e POSTGRESQL_USER=username -e POSTGRESQL_PASSWORD=password -e POSTGRESQL_DATABASE=db_name -e DATABASE_SERVICE_NAME=postgresql With this command, OpenShift Container Platform fetches the source code, sets up the builder, builds your application image, and deploys the newly created image together with the specified environment variables. The application is named rails-app. Verify that the environment variables have been added by viewing the JSON document of the rails-app deployment config: USD oc get dc rails-app -o json You should see the following section: Example output env": [ { "name": "POSTGRESQL_USER", "value": "username" }, { "name": "POSTGRESQL_PASSWORD", "value": "password" }, { "name": "POSTGRESQL_DATABASE", "value": "db_name" }, { "name": "DATABASE_SERVICE_NAME", "value": "postgresql" } ], Check the build process: USD oc logs -f build/rails-app-1 After the build is complete, look at the running pods in OpenShift Container Platform: USD oc get pods You should see a line starting with rails-app-<number>-<hash>, and that is your application running in OpenShift Container Platform. Before your application is functional, you must initialize the database by running the database migration script.
There are two ways you can do this: Manually from the running frontend container: Exec into the frontend container with the rsh command: USD oc rsh <frontend_pod_id> Run the migration from inside the container: USD RAILS_ENV=production bundle exec rake db:migrate If you are running your Rails application in a development or test environment, you do not have to specify the RAILS_ENV environment variable. By adding pre-deployment lifecycle hooks in your template (a minimal sketch of such a hook is provided after the route creation steps below). 3.5.4.3. Creating a route for your application You can expose a service to create a route for your application. Procedure To expose a service by giving it an externally reachable hostname like www.example.com, use an OpenShift Container Platform route. In this case, you need to expose the frontend service by typing: USD oc expose service rails-app --hostname=www.example.com Warning Ensure that the hostname you specify resolves to the IP address of the router.
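The pre-deployment lifecycle hook mentioned above can be declared directly on the frontend deployment configuration. The following is a minimal sketch only, not the exact template used by this example: it assumes the DeploymentConfig and its container are both named rails-app and that the Recreate strategy is acceptable, so adjust the names and strategy to match your own template.

spec:
  strategy:
    type: Recreate
    recreateParams:
      pre:
        failurePolicy: Abort
        execNewPod:
          containerName: rails-app    # assumed container name
          command:
            - /bin/sh
            - -c
            - RAILS_ENV=production bundle exec rake db:migrate

With a hook like this, the migration runs in a transient pod before the new deployment is rolled out, so you do not need to run it manually with oc rsh.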
[ "oc create -f <filename>", "oc create -f <filename> -n <project>", "kind: \"ImageStream\" apiVersion: \"v1\" metadata: name: \"ruby\" creationTimestamp: null spec: tags: - name: \"2.6\" annotations: description: \"Build and run Ruby 2.6 applications\" iconClass: \"icon-ruby\" tags: \"builder,ruby\" 1 supports: \"ruby:2.6,ruby\" version: \"2.6\"", "oc process -f <filename> -l name=otherLabel", "oc process --parameters -f <filename>", "oc process --parameters -n <project> <template_name>", "oc process --parameters -n openshift rails-postgresql-example", "NAME DESCRIPTION GENERATOR VALUE SOURCE_REPOSITORY_URL The URL of the repository with your application source code https://github.com/sclorg/rails-ex.git SOURCE_REPOSITORY_REF Set this to a branch name, tag or other ref of your repository if you are not using the default branch CONTEXT_DIR Set this to the relative path to your project if it is not in the root of your repository APPLICATION_DOMAIN The exposed hostname that will route to the Rails service rails-postgresql-example.openshiftapps.com GITHUB_WEBHOOK_SECRET A secret string used to configure the GitHub webhook expression [a-zA-Z0-9]{40} SECRET_KEY_BASE Your secret key for verifying the integrity of signed cookies expression [a-z0-9]{127} APPLICATION_USER The application user that is used within the sample application to authorize access on pages openshift APPLICATION_PASSWORD The application password that is used within the sample application to authorize access on pages secret DATABASE_SERVICE_NAME Database service name postgresql POSTGRESQL_USER database username expression user[A-Z0-9]{3} POSTGRESQL_PASSWORD database password expression [a-zA-Z0-9]{8} POSTGRESQL_DATABASE database name root POSTGRESQL_MAX_CONNECTIONS database max connections 10 POSTGRESQL_SHARED_BUFFERS database shared buffers 12MB", "oc process -f <filename>", "oc process <template_name>", "oc process -f <filename> | oc create -f -", "oc process <template> | oc create -f -", "oc process -f my-rails-postgresql -p POSTGRESQL_USER=bob -p POSTGRESQL_DATABASE=mydatabase", "oc process -f my-rails-postgresql -p POSTGRESQL_USER=bob -p POSTGRESQL_DATABASE=mydatabase | oc create -f -", "cat postgres.env POSTGRESQL_USER=bob POSTGRESQL_DATABASE=mydatabase", "oc process -f my-rails-postgresql --param-file=postgres.env", "sed s/bob/alice/ postgres.env | oc process -f my-rails-postgresql --param-file=-", "oc edit template <template>", "oc get templates -n openshift", "apiVersion: template.openshift.io/v1 kind: Template metadata: name: redis-template annotations: description: \"Description\" iconClass: \"icon-redis\" tags: \"database,nosql\" objects: - apiVersion: v1 kind: Pod metadata: name: redis-master spec: containers: - env: - name: REDIS_PASSWORD value: USD{REDIS_PASSWORD} image: dockerfile/redis name: master ports: - containerPort: 6379 protocol: TCP parameters: - description: Password used for Redis authentication from: '[A-Z0-9]{8}' generate: expression name: REDIS_PASSWORD labels: redis: master", "kind: Template apiVersion: template.openshift.io/v1 metadata: name: cakephp-mysql-example 1 annotations: openshift.io/display-name: \"CakePHP MySQL Example (Ephemeral)\" 2 description: >- An example CakePHP application with a MySQL database. For more information about using this template, including OpenShift considerations, see https://github.com/sclorg/cakephp-ex/blob/master/README.md. WARNING: Any data stored will be lost upon pod destruction. 
Only use this template for testing.\" 3 openshift.io/long-description: >- This template defines resources needed to develop a CakePHP application, including a build configuration, application DeploymentConfig, and database DeploymentConfig. The database is stored in non-persistent storage, so this configuration should be used for experimental purposes only. 4 tags: \"quickstart,php,cakephp\" 5 iconClass: icon-php 6 openshift.io/provider-display-name: \"Red Hat, Inc.\" 7 openshift.io/documentation-url: \"https://github.com/sclorg/cakephp-ex\" 8 openshift.io/support-url: \"https://access.redhat.com\" 9 message: \"Your admin credentials are USD{ADMIN_USERNAME}:USD{ADMIN_PASSWORD}\" 10", "kind: \"Template\" apiVersion: \"v1\" labels: template: \"cakephp-mysql-example\" 1 app: \"USD{NAME}\" 2", "parameters: - name: USERNAME description: \"The user name for Joe\" value: joe", "parameters: - name: PASSWORD description: \"The random user password\" generate: expression from: \"[a-zA-Z0-9]{12}\"", "parameters: - name: singlequoted_example generate: expression from: '[\\A]{10}' - name: doublequoted_example generate: expression from: \"[\\\\A]{10}\"", "{ \"parameters\": [ { \"name\": \"json_example\", \"generate\": \"expression\", \"from\": \"[\\\\A]{10}\" } ] }", "kind: Template apiVersion: template.openshift.io/v1 metadata: name: my-template objects: - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: cakephp-mysql-example annotations: description: Defines how to build the application spec: source: type: Git git: uri: \"USD{SOURCE_REPOSITORY_URL}\" 1 ref: \"USD{SOURCE_REPOSITORY_REF}\" contextDir: \"USD{CONTEXT_DIR}\" - kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: frontend spec: replicas: \"USD{{REPLICA_COUNT}}\" 2 parameters: - name: SOURCE_REPOSITORY_URL 3 displayName: Source Repository URL 4 description: The URL of the repository with your application source code 5 value: https://github.com/sclorg/cakephp-ex.git 6 required: true 7 - name: GITHUB_WEBHOOK_SECRET description: A secret string used to configure the GitHub webhook generate: expression 8 from: \"[a-zA-Z0-9]{40}\" 9 - name: REPLICA_COUNT description: Number of replicas to run value: \"2\" required: true message: \"... 
The GitHub webhook secret is USD{GITHUB_WEBHOOK_SECRET} ...\" 10", "kind: \"Template\" apiVersion: \"v1\" metadata: name: my-template objects: - kind: \"Service\" 1 apiVersion: \"v1\" metadata: name: \"cakephp-mysql-example\" annotations: description: \"Exposes and load balances the application pods\" spec: ports: - name: \"web\" port: 8080 targetPort: 8080 selector: name: \"cakephp-mysql-example\"", "kind: Template apiVersion: template.openshift.io/v1 metadata: name: my-template objects: - kind: ConfigMap apiVersion: v1 metadata: name: my-template-config annotations: template.openshift.io/expose-username: \"{.data['my\\\\.username']}\" data: my.username: foo - kind: Secret apiVersion: v1 metadata: name: my-template-config-secret annotations: template.openshift.io/base64-expose-password: \"{.data['password']}\" stringData: password: <password> - kind: Service apiVersion: v1 metadata: name: my-template-service annotations: template.openshift.io/expose-service_ip_port: \"{.spec.clusterIP}:{.spec.ports[?(.name==\\\"web\\\")].port}\" spec: ports: - name: \"web\" port: 8080 - kind: Route apiVersion: route.openshift.io/v1 metadata: name: my-template-route annotations: template.openshift.io/expose-uri: \"http://{.spec.host}{.spec.path}\" spec: path: mypath", "{ \"credentials\": { \"username\": \"foo\", \"password\": \"YmFy\", \"service_ip_port\": \"172.30.12.34:8080\", \"uri\": \"http://route-test.router.default.svc.cluster.local/mypath\" } }", "\"template.alpha.openshift.io/wait-for-ready\": \"true\"", "kind: Template apiVersion: template.openshift.io/v1 metadata: name: my-template objects: - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: annotations: # wait-for-ready used on BuildConfig ensures that template instantiation # will fail immediately if build fails template.alpha.openshift.io/wait-for-ready: \"true\" spec: - kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: annotations: template.alpha.openshift.io/wait-for-ready: \"true\" spec: - kind: Service apiVersion: v1 metadata: name: spec:", "oc get -o yaml all > <yaml_filename>", "oc get csv", "oc policy add-role-to-user edit <user> -n <target_project>", "oc new-app /<path to source code>", "oc new-app https://github.com/sclorg/cakephp-ex", "oc new-app https://github.com/youruser/yourprivaterepo --source-secret=yoursecret", "oc new-app https://github.com/sclorg/s2i-ruby-container.git --context-dir=2.0/test/puma-test-app", "oc new-app https://github.com/openshift/ruby-hello-world.git#beta4", "oc new-app /home/user/code/myapp --strategy=docker", "oc new-app myproject/my-ruby~https://github.com/openshift/ruby-hello-world.git", "oc new-app openshift/ruby-20-centos7:latest~/home/user/code/my-ruby-app", "oc new-app mysql", "oc new-app myregistry:5000/example/myimage", "oc new-app my-stream:v1", "oc create -f examples/sample-app/application-template-stibuild.json", "oc new-app ruby-helloworld-sample", "oc new-app -f examples/sample-app/application-template-stibuild.json", "oc new-app ruby-helloworld-sample -p ADMIN_USERNAME=admin -p ADMIN_PASSWORD=mypassword", "ADMIN_USERNAME=admin ADMIN_PASSWORD=mypassword", "oc new-app ruby-helloworld-sample --param-file=helloworld.params", "oc new-app openshift/postgresql-92-centos7 -e POSTGRESQL_USER=user -e POSTGRESQL_DATABASE=db -e POSTGRESQL_PASSWORD=password", "POSTGRESQL_USER=user POSTGRESQL_DATABASE=db POSTGRESQL_PASSWORD=password", "oc new-app openshift/postgresql-92-centos7 --env-file=postgresql.env", "cat postgresql.env | oc new-app 
openshift/postgresql-92-centos7 --env-file=-", "oc new-app openshift/ruby-23-centos7 --build-env HTTP_PROXY=http://myproxy.net:1337/ --build-env GEM_HOME=~/.gem", "HTTP_PROXY=http://myproxy.net:1337/ GEM_HOME=~/.gem", "oc new-app openshift/ruby-23-centos7 --build-env-file=ruby.env", "cat ruby.env | oc new-app openshift/ruby-23-centos7 --build-env-file=-", "oc new-app https://github.com/openshift/ruby-hello-world -l name=hello-world", "oc new-app https://github.com/openshift/ruby-hello-world -o yaml > myapp.yaml", "vi myapp.yaml", "oc create -f myapp.yaml", "oc new-app https://github.com/openshift/ruby-hello-world --name=myapp", "oc new-app https://github.com/openshift/ruby-hello-world -n myproject", "oc new-app https://github.com/openshift/ruby-hello-world mysql", "oc new-app ruby+mysql", "oc new-app ruby~https://github.com/openshift/ruby-hello-world mysql --group=ruby+mysql", "oc new-app --search php", "oc new-app --image=registry.redhat.io/ubi8/httpd-24:latest --import-mode=Legacy --name=test", "oc new-app --image=registry.redhat.io/ubi8/httpd-24:latest --import-mode=PreserveOriginal --name=test", "sudo yum install -y postgresql postgresql-server postgresql-devel", "sudo postgresql-setup initdb", "sudo systemctl start postgresql.service", "sudo -u postgres createuser -s rails", "gem install rails", "Successfully installed rails-4.3.0 1 gem installed", "rails new rails-app --database=postgresql", "cd rails-app", "gem 'pg'", "bundle install", "default: &default adapter: postgresql encoding: unicode pool: 5 host: localhost username: rails password: <password>", "rake db:create", "rails generate controller welcome index", "root 'welcome#index'", "rails server", "<% user = ENV.key?(\"POSTGRESQL_ADMIN_PASSWORD\") ? \"root\" : ENV[\"POSTGRESQL_USER\"] %> <% password = ENV.key?(\"POSTGRESQL_ADMIN_PASSWORD\") ? 
ENV[\"POSTGRESQL_ADMIN_PASSWORD\"] : ENV[\"POSTGRESQL_PASSWORD\"] %> <% db_service = ENV.fetch(\"DATABASE_SERVICE_NAME\",\"\").upcase %> default: &default adapter: postgresql encoding: unicode # For details on connection pooling, see rails configuration guide # http://guides.rubyonrails.org/configuring.html#database-pooling pool: <%= ENV[\"POSTGRESQL_MAX_CONNECTIONS\"] || 5 %> username: <%= user %> password: <%= password %> host: <%= ENV[\"#{db_service}_SERVICE_HOST\"] %> port: <%= ENV[\"#{db_service}_SERVICE_PORT\"] %> database: <%= ENV[\"POSTGRESQL_DATABASE\"] %>", "ls -1", "app bin config config.ru db Gemfile Gemfile.lock lib log public Rakefile README.rdoc test tmp vendor", "git init", "git add .", "git commit -m \"initial commit\"", "git remote add origin [email protected]:<namespace/repository-name>.git", "git push", "oc new-project rails-app --description=\"My Rails application\" --display-name=\"Rails Application\"", "oc new-app postgresql -e POSTGRESQL_DATABASE=db_name -e POSTGRESQL_USER=username -e POSTGRESQL_PASSWORD=password", "-e POSTGRESQL_ADMIN_PASSWORD=admin_pw", "oc get pods --watch", "oc new-app path/to/source/code --name=rails-app -e POSTGRESQL_USER=username -e POSTGRESQL_PASSWORD=password -e POSTGRESQL_DATABASE=db_name -e DATABASE_SERVICE_NAME=postgresql", "oc get dc rails-app -o json", "env\": [ { \"name\": \"POSTGRESQL_USER\", \"value\": \"username\" }, { \"name\": \"POSTGRESQL_PASSWORD\", \"value\": \"password\" }, { \"name\": \"POSTGRESQL_DATABASE\", \"value\": \"db_name\" }, { \"name\": \"DATABASE_SERVICE_NAME\", \"value\": \"postgresql\" } ],", "oc logs -f build/rails-app-1", "oc get pods", "oc rsh <frontend_pod_id>", "RAILS_ENV=production bundle exec rake db:migrate", "oc expose service rails-app --hostname=www.example.com" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/building_applications/creating-applications
Chapter 12. Using a service account as an OAuth client
Chapter 12. Using a service account as an OAuth client 12.1. Service accounts as OAuth clients You can use a service account as a constrained form of OAuth client. Service accounts can request only a subset of scopes that allow access to some basic user information and role-based power inside of the service account's own namespace: user:info user:check-access role:<any_role>:<service_account_namespace> role:<any_role>:<service_account_namespace>:! When using a service account as an OAuth client: client_id is system:serviceaccount:<service_account_namespace>:<service_account_name> . client_secret can be any of the API tokens for that service account. For example: USD oc sa get-token <service_account_name> To get WWW-Authenticate challenges, set an serviceaccounts.openshift.io/oauth-want-challenges annotation on the service account to true . redirect_uri must match an annotation on the service account. 12.1.1. Redirect URIs for service accounts as OAuth clients Annotation keys must have the prefix serviceaccounts.openshift.io/oauth-redirecturi. or serviceaccounts.openshift.io/oauth-redirectreference. such as: In its simplest form, the annotation can be used to directly specify valid redirect URIs. For example: The first and second postfixes in the above example are used to separate the two valid redirect URIs. In more complex configurations, static redirect URIs may not be enough. For example, perhaps you want all Ingresses for a route to be considered valid. This is where dynamic redirect URIs via the serviceaccounts.openshift.io/oauth-redirectreference. prefix come into play. For example: Since the value for this annotation contains serialized JSON data, it is easier to see in an expanded format: { "kind": "OAuthRedirectReference", "apiVersion": "v1", "reference": { "kind": "Route", "name": "jenkins" } } Now you can see that an OAuthRedirectReference allows us to reference the route named jenkins . Thus, all Ingresses for that route will now be considered valid. The full specification for an OAuthRedirectReference is: { "kind": "OAuthRedirectReference", "apiVersion": "v1", "reference": { "kind": ..., 1 "name": ..., 2 "group": ... 3 } } 1 kind refers to the type of the object being referenced. Currently, only route is supported. 2 name refers to the name of the object. The object must be in the same namespace as the service account. 3 group refers to the group of the object. Leave this blank, as the group for a route is the empty string. Both annotation prefixes can be combined to override the data provided by the reference object. For example: The first postfix is used to tie the annotations together. Assuming that the jenkins route had an Ingress of https://example.com , now https://example.com/custompath is considered valid, but https://example.com is not. The format for partially supplying override data is as follows: Type Syntax Scheme "https://" Hostname "//website.com" Port "//:8000" Path "examplepath" Note Specifying a hostname override will replace the hostname data from the referenced object, which is not likely to be desired behavior. Any combination of the above syntax can be combined using the following format: <scheme:>//<hostname><:port>/<path> The same object can be referenced more than once for more flexibility: Assuming that the route named jenkins has an Ingress of https://example.com , then both https://example.com:8000 and https://example.com/custompath are considered valid. Static and dynamic annotations can be used at the same time to achieve the desired behavior:
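As a practical illustration, these annotations can be applied to a service account with the oc annotate command. The following is a sketch only: the service account name jenkins-sa is an assumption, while the annotation keys and the jenkins route name are taken from the examples above.

oc annotate serviceaccount jenkins-sa \
  serviceaccounts.openshift.io/oauth-want-challenges=true \
  serviceaccounts.openshift.io/oauth-redirecturi.second=https://other.com \
  serviceaccounts.openshift.io/oauth-redirectreference.first='{"kind":"OAuthRedirectReference","apiVersion":"v1","reference":{"kind":"Route","name":"jenkins"}}'

Here the first redirect entry is dynamic (derived from the jenkins route) and the second is static, combining both annotation styles on a single service account.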
[ "oc sa get-token <service_account_name>", "serviceaccounts.openshift.io/oauth-redirecturi.<name>", "\"serviceaccounts.openshift.io/oauth-redirecturi.first\": \"https://example.com\" \"serviceaccounts.openshift.io/oauth-redirecturi.second\": \"https://other.com\"", "\"serviceaccounts.openshift.io/oauth-redirectreference.first\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\"", "{ \"kind\": \"OAuthRedirectReference\", \"apiVersion\": \"v1\", \"reference\": { \"kind\": \"Route\", \"name\": \"jenkins\" } }", "{ \"kind\": \"OAuthRedirectReference\", \"apiVersion\": \"v1\", \"reference\": { \"kind\": ..., 1 \"name\": ..., 2 \"group\": ... 3 } }", "\"serviceaccounts.openshift.io/oauth-redirecturi.first\": \"custompath\" \"serviceaccounts.openshift.io/oauth-redirectreference.first\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\"", "\"serviceaccounts.openshift.io/oauth-redirecturi.first\": \"custompath\" \"serviceaccounts.openshift.io/oauth-redirectreference.first\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\" \"serviceaccounts.openshift.io/oauth-redirecturi.second\": \"//:8000\" \"serviceaccounts.openshift.io/oauth-redirectreference.second\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\"", "\"serviceaccounts.openshift.io/oauth-redirectreference.first\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\" \"serviceaccounts.openshift.io/oauth-redirecturi.second\": \"https://other.com\"" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/authentication_and_authorization/using-service-accounts-as-oauth-client
Chapter 4. Monitoring Camel Spring Boot integrations
Chapter 4. Monitoring Camel Spring Boot integrations This chapter explains how to monitor integrations on Red Hat build of Camel Spring Boot at runtime. You can use the Prometheus Operator that is already deployed as part of OpenShift Monitoring to monitor your own applications. For more information about deploying Camel Spring Boot applications on OpenShift Container Platform, see Apache Camel on OCP Best practices . For information about the HawtIO Diagnostic Console, see the HawtIO Diagnostic Console documentation . 4.1. Enabling user workload monitoring in OpenShift You can enable monitoring for user-defined projects by setting the enableUserWorkload: true field in the cluster monitoring ConfigMap object. Important In OpenShift Container Platform 4.13, you must remove any custom Prometheus instances before enabling monitoring for user-defined projects. Prerequisites You must have access to the cluster as a user with the cluster-admin cluster role to enable monitoring for user-defined projects in OpenShift Container Platform. Cluster administrators can then optionally grant users permission to configure the components that are responsible for monitoring user-defined projects. You have cluster admin access to the OpenShift cluster. You have installed the OpenShift CLI ( oc ). Note Every time you save configuration changes to the user-workload-monitoring-config ConfigMap object, the pods in the openshift-user-workload-monitoring project are redeployed. It can sometimes take a while for these components to redeploy. You can create and configure the ConfigMap object before you first enable monitoring for user-defined projects, to prevent having to redeploy the pods often. Procedure Log in to OpenShift with administrator permissions. Edit the cluster-monitoring-config ConfigMap object. Add enableUserWorkload: true in the data/config.yaml section. When it is set to true, the enableUserWorkload parameter enables monitoring for user-defined projects in a cluster. Save the file to apply the changes. Monitoring for user-defined projects is then enabled automatically. Note When the changes are saved to the cluster-monitoring-config ConfigMap object, the pods and other resources in the openshift-monitoring project might be redeployed. The running monitoring processes in that project might also be restarted. Verify that the prometheus-operator , prometheus-user-workload , and thanos-ruler-user-workload pods are running in the openshift-user-workload-monitoring project. 4.2. Monitoring a Camel Spring Boot application After you enable monitoring for your project, you can deploy and monitor the Camel Spring Boot application. This section uses the monitoring-micrometrics-grafana-prometheus example listed in the Camel Spring Boot Examples . Procedure Add the openshift-maven-plugin to the pom.xml file of the monitoring-micrometrics-grafana-prometheus example. In the pom.xml , add an openshift profile to allow deployment to OpenShift through the openshift-maven-plugin.
<profiles> <profile> <id>openshift</id> <build> <plugins> <plugin> <groupId>org.eclipse.jkube</groupId> <artifactId>openshift-maven-plugin</artifactId> <version>1.13.1</version> <executions> <execution> <goals> <goal>resource</goal> <goal>build</goal> </goals> </execution> </executions> </plugin> </plugins> </build> </profile> </profiles> Add the Prometheus support. In order to add the Prometheus support to your Camel application, expose the Prometheus statistics on an actuator endpoint. Edit your src/main/resources/application.properties file. Add a management.endpoints.web.exposure.include entry if it doesn't exist. Add prometheus, metrics, and health to the management.endpoints.web.exposure.include entry: Add the following to the <dependencies/> section of your pom.xml to add some starter support to your application. <dependency> <groupId>org.springframework</groupId> <artifactId>spring-context</artifactId> <version>6.1.8</version> </dependency> <dependency> <groupId>io.micrometer</groupId> <artifactId>micrometer-registry-prometheus</artifactId> <version>1.13.6</version> </dependency> <dependency> <groupId>org.jolokia</groupId> <artifactId>jolokia-core</artifactId> <version>2.1.0</version> </dependency> <dependency> <groupId>io.prometheus.jmx</groupId> <artifactId>collector</artifactId> <version>1.0.1</version> </dependency> Create the file config/prometheus_exporter_config.yml : startDelaySecs: 5 ssl: false blacklistObjectNames: ["java.lang:*"] rules: # Context level - pattern: 'org.apache.camel<context=([^,]+), type=context, name=([^,]+)><>ExchangesCompleted' name: org.apache.camel.ExchangesCompleted help: Exchanges Completed type: COUNTER labels: context: USD1 type: context - pattern: 'org.apache.camel<context=([^,]+), type=context, name=([^,]+)><>ExchangesFailed' name: org.apache.camel.ExchangesFailed help: Exchanges Failed type: COUNTER labels: context: USD1 type: context - pattern: 'org.apache.camel<context=([^,]+), type=context, name=([^,]+)><>ExchangesInflight' name: org.apache.camel.ExchangesInflight help: Exchanges Inflight type: GAUGE labels: context: USD1 type: context - pattern: 'org.apache.camel<context=([^,]+), type=context, name=([^,]+)><>ExchangesTotal' name: org.apache.camel.ExchangesTotal help: Exchanges Total type: COUNTER labels: context: USD1 type: context - pattern: 'org.apache.camel<context=([^,]+), type=context, name=([^,]+)><>FailuresHandled' name: org.apache.camel.FailuresHandled help: Failures Handled labels: context: USD1 type: context type: COUNTER - pattern: 'org.apache.camel<context=([^,]+), type=context, name=([^,]+)><>ExternalRedeliveries' name: org.apache.camel.ExternalRedeliveries help: External Redeliveries labels: context: USD1 type: context type: COUNTER - pattern: 'org.apache.camel<context=([^,]+), type=context, name=([^,]+)><>MaxProcessingTime' name: org.apache.camel.MaxProcessingTime help: Maximum Processing Time labels: context: USD1 type: context type: GAUGE - pattern: 'org.apache.camel<context=([^,]+), type=context, name=([^,]+)><>MeanProcessingTime' name: org.apache.camel.MeanProcessingTime help: Mean Processing Time labels: context: USD1 type: context type: GAUGE - pattern: 'org.apache.camel<context=([^,]+), type=context, name=([^,]+)><>MinProcessingTime' name: org.apache.camel.MinProcessingTime help: Minimum Processing Time labels: context: USD1 type: context type: GAUGE - pattern: 'org.apache.camel<context=([^,]+), type=context, name=([^,]+)><>LastProcessingTime' name: org.apache.camel.LastProcessingTime help: Last Processing Time labels: 
context: USD1 type: context type: GAUGE - pattern: 'org.apache.camel<context=([^,]+), type=context, name=([^,]+)><>DeltaProcessingTime' name: org.apache.camel.DeltaProcessingTime help: Delta Processing Time labels: context: USD1 type: context type: GAUGE - pattern: 'org.apache.camel<context=([^,]+), type=context, name=([^,]+)><>Redeliveries' name: org.apache.camel.Redeliveries help: Redeliveries labels: context: USD1 type: context type: COUNTER - pattern: 'org.apache.camel<context=([^,]+), type=context, name=([^,]+)><>TotalProcessingTime' name: org.apache.camel.TotalProcessingTime help: Total Processing Time labels: context: USD1 type: context type: GAUGE - pattern: 'org.apache.camel<context=([^,]+), type=consumers, name=([^,]+)><>InflightExchanges' name: org.apache.camel.InflightExchanges help: Inflight Exchanges labels: context: USD1 type: context type: GAUGE # Route level - pattern: 'org.apache.camel<context=([^,]+), type=routes, name=([^,]+)><>ExchangesCompleted' name: org.apache.camel.ExchangesCompleted help: Exchanges Completed type: COUNTER labels: context: USD1 route: USD2 type: routes - pattern: 'org.apache.camel<context=([^,]+), type=routes, name=([^,]+)><>ExchangesFailed' name: org.apache.camel.ExchangesFailed help: Exchanges Failed type: COUNTER labels: context: USD1 route: USD2 type: routes - pattern: 'org.apache.camel<context=([^,]+), type=routes, name=([^,]+)><>ExchangesInflight' name: org.apache.camel.ExchangesInflight help: Exchanges Inflight type: GAUGE labels: context: USD1 route: USD2 type: routes - pattern: 'org.apache.camel<context=([^,]+), type=routes, name=([^,]+)><>ExchangesTotal' name: org.apache.camel.ExchangesTotal help: Exchanges Total type: COUNTER labels: context: USD1 route: USD2 type: routes - pattern: 'org.apache.camel<context=([^,]+), type=routes, name=([^,]+)><>FailuresHandled' name: org.apache.camel.FailuresHandled help: Failures Handled labels: context: USD1 route: USD2 type: routes type: COUNTER - pattern: 'org.apache.camel<context=([^,]+), type=routes, name=([^,]+)><>ExternalRedeliveries' name: org.apache.camel.ExternalRedeliveries help: External Redeliveries labels: context: USD1 route: USD2 type: routes type: COUNTER - pattern: 'org.apache.camel<context=([^,]+), type=routes, name=([^,]+)><>MaxProcessingTime' name: org.apache.camel.MaxProcessingTime help: Maximum Processing Time labels: context: USD1 route: USD2 type: routes type: GAUGE - pattern: 'org.apache.camel<context=([^,]+), type=routes, name=([^,]+)><>MeanProcessingTime' name: org.apache.camel.MeanProcessingTime help: Mean Processing Time labels: context: USD1 route: USD2 type: routes type: GAUGE - pattern: 'org.apache.camel<context=([^,]+), type=routes, name=([^,]+)><>MinProcessingTime' name: org.apache.camel.MinProcessingTime help: Minimum Processing Time labels: context: USD1 route: USD2 type: routes type: GAUGE - pattern: 'org.apache.camel<context=([^,]+), type=routes, name=([^,]+)><>LastProcessingTime' name: org.apache.camel.LastProcessingTime help: Last Processing Time labels: context: USD1 route: USD2 type: routes type: GAUGE - pattern: 'org.apache.camel<context=([^,]+), type=routes, name=([^,]+)><>DeltaProcessingTime' name: org.apache.camel.DeltaProcessingTime help: Delta Processing Time labels: context: USD1 route: USD2 type: routes type: GAUGE - pattern: 'org.apache.camel<context=([^,]+), type=routes, name=([^,]+)><>Redeliveries' name: org.apache.camel.Redeliveries help: Redeliveries labels: context: USD1 route: USD2 type: routes type: COUNTER - pattern: 
'org.apache.camel<context=([^,]+), type=routes, name=([^,]+)><>TotalProcessingTime' name: org.apache.camel.TotalProcessingTime help: Total Processing Time labels: context: USD1 route: USD2 type: routes type: GAUGE - pattern: 'org.apache.camel<context=([^,]+), type=routes, name=([^,]+)><>InflightExchanges' name: org.apache.camel.InflightExchanges help: Inflight Exchanges labels: context: USD1 route: USD2 type: routes type: GAUGE # Processor level - pattern: 'org.apache.camel<context=([^,]+), type=processors, name=([^,]+)><>ExchangesCompleted' name: org.apache.camel.ExchangesCompleted help: Exchanges Completed type: COUNTER labels: context: USD1 processor: USD2 type: processors - pattern: 'org.apache.camel<context=([^,]+), type=processors, name=([^,]+)><>ExchangesFailed' name: org.apache.camel.ExchangesFailed help: Exchanges Failed type: COUNTER labels: context: USD1 processor: USD2 type: processors - pattern: 'org.apache.camel<context=([^,]+), type=processors, name=([^,]+)><>ExchangesInflight' name: org.apache.camel.ExchangesInflight help: Exchanges Inflight type: GAUGE labels: context: USD1 processor: USD2 type: processors - pattern: 'org.apache.camel<context=([^,]+), type=processors, name=([^,]+)><>ExchangesTotal' name: org.apache.camel.ExchangesTotal help: Exchanges Total type: COUNTER labels: context: USD1 processor: USD2 type: processors - pattern: 'org.apache.camel<context=([^,]+), type=processors, name=([^,]+)><>FailuresHandled' name: org.apache.camel.FailuresHandled help: Failures Handled labels: context: USD1 processor: USD2 type: processors type: COUNTER - pattern: 'org.apache.camel<context=([^,]+), type=processors, name=([^,]+)><>ExternalRedeliveries' name: org.apache.camel.ExternalRedeliveries help: External Redeliveries labels: context: USD1 processor: USD2 type: processors type: COUNTER - pattern: 'org.apache.camel<context=([^,]+), type=processors, name=([^,]+)><>MaxProcessingTime' name: org.apache.camel.MaxProcessingTime help: Maximum Processing Time labels: context: USD1 processor: USD2 type: processors type: GAUGE - pattern: 'org.apache.camel<context=([^,]+), type=processors, name=([^,]+)><>MeanProcessingTime' name: org.apache.camel.MeanProcessingTime help: Mean Processing Time labels: context: USD1 processor: USD2 type: processors type: GAUGE - pattern: 'org.apache.camel<context=([^,]+), type=processors, name=([^,]+)><>MinProcessingTime' name: org.apache.camel.MinProcessingTime help: Minimum Processing Time labels: context: USD1 processor: USD2 type: processors type: GAUGE - pattern: 'org.apache.camel<context=([^,]+), type=processors, name=([^,]+)><>LastProcessingTime' name: org.apache.camel.LastProcessingTime help: Last Processing Time labels: context: USD1 processor: USD2 type: processors type: GAUGE - pattern: 'org.apache.camel<context=([^,]+), type=processors, name=([^,]+)><>DeltaProcessingTime' name: org.apache.camel.DeltaProcessingTime help: Delta Processing Time labels: context: USD1 processor: USD2 type: processors type: GAUGE - pattern: 'org.apache.camel<context=([^,]+), type=processors, name=([^,]+)><>Redeliveries' name: org.apache.camel.Redeliveries help: Redeliveries labels: context: USD1 processor: USD2 type: processors type: COUNTER - pattern: 'org.apache.camel<context=([^,]+), type=processors, name=([^,]+)><>TotalProcessingTime' name: org.apache.camel.TotalProcessingTime help: Total Processing Time labels: context: USD1 processor: USD2 type: processors type: GAUGE - pattern: 'org.apache.camel<context=([^,]+), type=processors, 
name=([^,]+)><>InflightExchanges' name: org.apache.camel.InflightExchanges help: Inflight Exchanges labels: context: USD1 processor: USD2 type: processors type: COUNTER # Consumers - pattern: 'org.apache.camel<context=([^,]+), type=consumers, name=([^,]+)><>InflightExchanges' name: org.apache.camel.InflightExchanges help: Inflight Exchanges labels: context: USD1 consumer: USD2 type: consumers type: GAUGE # Services - pattern: 'org.apache.camel<context=([^,]+), type=services, name=([^,]+)><>MaxDuration' name: org.apache.camel.MaxDuration help: Maximum Duration labels: context: USD1 service: USD2 type: services type: GAUGE - pattern: 'org.apache.camel<context=([^,]+), type=services, name=([^,]+)><>MeanDuration' name: org.apache.camel.MeanDuration help: Mean Duration labels: context: USD1 service: USD2 type: services type: GAUGE - pattern: 'org.apache.camel<context=([^,]+), type=services, name=([^,]+)><>MinDuration' name: org.apache.camel.MinDuration help: Minimum Duration labels: context: USD1 service: USD2 type: services type: GAUGE - pattern: 'org.apache.camel<context=([^,]+), type=services, name=([^,]+)><>TotalDuration' name: org.apache.camel.TotalDuration help: Total Duration labels: context: USD1 service: USD2 type: services type: GAUGE - pattern: 'org.apache.camel<context=([^,]+), type=services, name=([^,]+)><>ThreadsBlocked' name: org.apache.camel.ThreadsBlocked help: Threads Blocked labels: context: USD1 service: USD2 type: services type: GAUGE - pattern: 'org.apache.camel<context=([^,]+), type=services, name=([^,]+)><>ThreadsInterrupted' name: org.apache.camel.ThreadsInterrupted help: Threads Interrupted labels: context: USD1 service: USD2 type: services type: GAUGE - pattern: 'org.apache.cxf<bus.id=([^,]+), type=([^,]+), service=([^,]+), port=([^,]+), operation=([^,]+)><>NumLogicalRuntimeFaults' name: org.apache.cxf.NumLogicalRuntimeFaults help: Number of logical runtime faults type: GAUGE labels: bus.id: USD1 type: USD2 service: USD3 port: USD4 operation: USD5 - pattern: 'org.apache.cxf<bus.id=([^,]+), type=([^,]+), service=([^,]+), port=([^,]+)><>NumLogicalRuntimeFaults' name: org.apache.cxf.NumLogicalRuntimeFaults help: Number of logical runtime faults type: GAUGE labels: bus.id: USD1 type: USD2 service: USD3 port: USD4 - pattern: 'org.apache.cxf<bus.id=([^,]+), type=([^,]+), service=([^,]+), port=([^,]+), operation=([^,]+)><>AvgResponseTime' name: org.apache.cxf.AvgResponseTime help: Average Response Time type: GAUGE labels: bus.id: USD1 type: USD2 service: USD3 port: USD4 operation: USD5 - pattern: 'org.apache.cxf<bus.id=([^,]+), type=([^,]+), service=([^,]+), port=([^,]+)><>AvgResponseTime' name: org.apache.cxf.AvgResponseTime help: Average Response Time type: GAUGE labels: bus.id: USD1 type: USD2 service: USD3 port: USD4 - pattern: 'org.apache.cxf<bus.id=([^,]+), type=([^,]+), service=([^,]+), port=([^,]+), operation=([^,]+)><>NumInvocations' name: org.apache.cxf.NumInvocations help: Number of invocations type: GAUGE labels: bus.id: USD1 type: USD2 service: USD3 port: USD4 operation: USD5 - pattern: 'org.apache.cxf<bus.id=([^,]+), type=([^,]+), service=([^,]+), port=([^,]+)><>NumInvocations' name: org.apache.cxf.NumInvocations help: Number of invocations type: GAUGE labels: bus.id: USD1 type: USD2 service: USD3 port: USD4 - pattern: 'org.apache.cxf<bus.id=([^,]+), type=([^,]+), service=([^,]+), port=([^,]+), operation=([^,]+)><>MaxResponseTime' name: org.apache.cxf.MaxResponseTime help: Maximum Response Time type: GAUGE labels: bus.id: USD1 type: USD2 service: USD3 port: 
USD4 operation: USD5 - pattern: 'org.apache.cxf<bus.id=([^,]+), type=([^,]+), service=([^,]+), port=([^,]+)><>MaxResponseTime' name: org.apache.cxf.MaxResponseTime help: Maximum Response Time type: GAUGE labels: bus.id: USD1 type: USD2 service: USD3 port: USD4 - pattern: 'org.apache.cxf<bus.id=([^,]+), type=([^,]+), service=([^,]+), port=([^,]+), operation=([^,]+)><>MinResponseTime' name: org.apache.cxf.MinResponseTime help: Minimum Response Time type: GAUGE labels: bus.id: USD1 type: USD2 service: USD3 port: USD4 operation: USD5 - pattern: 'org.apache.cxf<bus.id=([^,]+), type=([^,]+), service=([^,]+), port=([^,]+), operation=([^,]+)><>MinResponseTime' name: org.apache.cxf.MinResponseTime help: Minimum Response Time type: GAUGE labels: bus.id: USD1 type: USD2 service: USD3 port: USD4 - pattern: 'org.apache.cxf<bus.id=([^,]+), type=([^,]+), service=([^,]+), port=([^,]+), operation=([^,]+)><>TotalHandlingTime' name: org.apache.cxf.TotalHandlingTime help: Total Handling Time type: GAUGE labels: bus.id: USD1 type: USD2 service: USD3 port: USD4 operation: USD5 - pattern: 'org.apache.cxf<bus.id=([^,]+), type=([^,]+), service=([^,]+), port=([^,]+)><>TotalHandlingTime' name: org.apache.cxf.TotalHandlingTime help: Total Handling Time type: GAUGE labels: bus.id: USD1 type: USD2 service: USD3 port: USD4 - pattern: 'org.apache.cxf<bus.id=([^,]+), type=([^,]+), service=([^,]+), port=([^,]+), operation=([^,]+)><>NumRuntimeFaults' name: org.apache.cxf.NumRuntimeFaults help: Number of runtime faults type: GAUGE labels: bus.id: USD1 type: USD2 service: USD3 port: USD4 operation: USD5 - pattern: 'org.apache.cxf<bus.id=([^,]+), type=([^,]+), service=([^,]+), port=([^,]+)><>NumRuntimeFaults' name: org.apache.cxf.NumRuntimeFaults help: Number of runtime faults type: GAUGE labels: bus.id: USD1 type: USD2 service: USD3 port: USD4 - pattern: 'org.apache.cxf<bus.id=([^,]+), type=([^,]+), service=([^,]+), port=([^,]+), operation=([^,]+)><>NumUnCheckedApplicationFaults' name: org.apache.cxf.NumUnCheckedApplicationFaults help: Number of unchecked application faults type: GAUGE labels: bus.id: USD1 type: USD2 service: USD3 port: USD4 operation: USD5 - pattern: 'org.apache.cxf<bus.id=([^,]+), type=([^,]+), service=([^,]+), port=([^,]+)><>NumUnCheckedApplicationFaults' name: org.apache.cxf.NumUnCheckedApplicationFaults help: Number of unchecked application faults type: GAUGE labels: bus.id: USD1 type: USD2 service: USD3 port: USD4 - pattern: 'org.apache.cxf<bus.id=([^,]+), type=([^,]+), service=([^,]+), port=([^,]+), operation=([^,]+)><>NumCheckedApplicationFaults' name: org.apache.cxf.NumCheckedApplicationFaults help: Number of checked application faults type: GAUGE labels: bus.id: USD1 type: USD2 service: USD3 port: USD4 operation: USD5 - pattern: 'org.apache.cxf<bus.id=([^,]+), type=([^,]+), service=([^,]+), port=([^,]+)><>NumCheckedApplicationFaults' name: org.apache.cxf.NumCheckedApplicationFaults help: Number of checked application faults type: GAUGE labels: bus.id: USD1 type: USD2 service: USD3 port: USD4 Add the following to the Application.java of your Camel application. 
import java.io.InputStream; import io.micrometer.core.instrument.Clock; import org.apache.camel.CamelContext; import org.apache.camel.spring.boot.CamelContextConfiguration; import org.springframework.context.annotation.Bean; import org.apache.camel.component.micrometer.MicrometerConstants; import org.apache.camel.component.micrometer.eventnotifier.MicrometerExchangeEventNotifier; import org.apache.camel.component.micrometer.eventnotifier.MicrometerRouteEventNotifier; import org.apache.camel.component.micrometer.messagehistory.MicrometerMessageHistoryFactory; import org.apache.camel.component.micrometer.routepolicy.MicrometerRoutePolicyFactory; The updated Application.java is shown below. @SpringBootApplication public class SampleCamelApplication { @Bean(name = {MicrometerConstants.METRICS_REGISTRY_NAME, "prometheusMeterRegistry"}) public PrometheusMeterRegistry prometheusMeterRegistry( PrometheusConfig prometheusConfig, CollectorRegistry collectorRegistry, Clock clock) throws MalformedObjectNameException, IOException { InputStream resource = new ClassPathResource("config/prometheus_exporter_config.yml").getInputStream(); new JmxCollector(resource).register(collectorRegistry); new BuildInfoCollector().register(collectorRegistry); return new PrometheusMeterRegistry(prometheusConfig, collectorRegistry, clock); } @Bean public CamelContextConfiguration camelContextConfiguration(@Autowired PrometheusMeterRegistry registry) { return new CamelContextConfiguration() { @Override public void beforeApplicationStart(CamelContext camelContext) { MicrometerRoutePolicyFactory micrometerRoutePolicyFactory = new MicrometerRoutePolicyFactory(); micrometerRoutePolicyFactory.setMeterRegistry(registry); camelContext.addRoutePolicyFactory(micrometerRoutePolicyFactory); MicrometerMessageHistoryFactory micrometerMessageHistoryFactory = new MicrometerMessageHistoryFactory(); micrometerMessageHistoryFactory.setMeterRegistry(registry); camelContext.setMessageHistoryFactory(micrometerMessageHistoryFactory); MicrometerExchangeEventNotifier micrometerExchangeEventNotifier = new MicrometerExchangeEventNotifier(); micrometerExchangeEventNotifier.setMeterRegistry(registry); camelContext.getManagementStrategy().addEventNotifier(micrometerExchangeEventNotifier); MicrometerRouteEventNotifier micrometerRouteEventNotifier = new MicrometerRouteEventNotifier(); micrometerRouteEventNotifier.setMeterRegistry(registry); camelContext.getManagementStrategy().addEventNotifier(micrometerRouteEventNotifier); } @Override public void afterApplicationStart(CamelContext camelContext) { } }; } } Deploy the application to OpenShift. Verify that your application is deployed. Add a ServiceMonitor for this application so that OpenShift's Prometheus instance can start scraping metrics from the /actuator/prometheus endpoint. Create a YAML manifest for the ServiceMonitor; in this example, the file is named servicemonitor.yaml (see the sketch after this procedure). Add the ServiceMonitor for this application. Verify that the ServiceMonitor was successfully deployed. Verify that you can see the ServiceMonitor in the list of scrape targets. In the Administrator view, navigate to Observe → Targets. You can find csb-demo-monitor within the list of scrape targets. Wait about ten minutes after deploying the ServiceMonitor. Then navigate to Observe → Metrics in the Developer view. Select Custom query in the drop-down menu and type camel to view the Camel metrics that are exposed through the /actuator/prometheus endpoint.
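The ServiceMonitor manifest itself is not reproduced in this excerpt, so the following is a minimal sketch of what servicemonitor.yaml could look like. The monitor name csb-demo-monitor and the scrape path /actuator/prometheus come from this procedure; the port name http and the label selector app: csb-demo are assumptions and must match the port name and labels of the Service created for your application.

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: csb-demo-monitor
spec:
  endpoints:
    - interval: 30s
      path: /actuator/prometheus
      port: http          # assumed port name on the application Service
      scheme: http
  selector:
    matchLabels:
      app: csb-demo       # assumed label on the application Service

Apply the manifest with oc apply -f servicemonitor.yaml in the same namespace as the application, and confirm that it exists with oc get servicemonitor.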
Note Red Hat does not offer support for installing and configuring Prometheus and Grafana in non-OCP environments.
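In addition to checking the scrape target in the console, you can confirm locally that the actuator endpoint serves Prometheus metrics. This is a hedged example: the service name csb-demo and port 8080 are assumptions, so substitute the name and port of your own application's Service.

oc port-forward svc/csb-demo 8080:8080
curl -s http://localhost:8080/actuator/prometheus | grep -i camel

If the endpoint is wired up correctly, the output should include Camel-related metric lines.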
[ "login --user system:admin --token=my-token --server=https://my-cluster.example.com:6443", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: true", "oc -n openshift-user-workload-monitoring get pod Example output NAME READY STATUS RESTARTS AGE prometheus-operator-6f7b748d5b-t7nbg 2/2 Running 0 3h prometheus-user-workload-0 4/4 Running 1 3h prometheus-user-workload-1 4/4 Running 1 3h thanos-ruler-user-workload-0 3/3 Running 0 3h thanos-ruler-user-workload-1 3/3 Running 0 3h", "<profiles> <profile> <id>openshift</id> <build> <plugins> <plugin> <groupId>org.eclipse.jkube</groupId> <artifactId>openshift-maven-plugin</artifactId> <version>1.13.1</version> <executions> <execution> <goals> <goal>resource</goal> <goal>build</goal> </goals> </execution> </executions> </plugin> </plugins> </build> </profile> </profiles>", "expose actuator endpoint via HTTP management.endpoints.web.exposure.include=mappings,metrics,health,shutdown,jolokia,prometheus", "<dependency> <groupId>org.springframework</groupId> <artifactId>spring-context</artifactId> <version>6.1.8</version> </dependency> <dependency> <groupId>io.micrometer</groupId> <artifactId>micrometer-registry-prometheus</artifactId> <version>1.13.6</version> </dependency> <dependency> <groupId>org.jolokia</groupId> <artifactId>jolokia-core</artifactId> <version>2.1.0</version> </dependency> <dependency> <groupId>io.prometheus.jmx</groupId> <artifactId>collector</artifactId> <version>1.0.1</version> </dependency>", "startDelaySecs: 5 ssl: false blacklistObjectNames: [\"java.lang:*\"] rules: # Context level - pattern: 'org.apache.camel<context=([^,]+), type=context, name=([^,]+)><>ExchangesCompleted' name: org.apache.camel.ExchangesCompleted help: Exchanges Completed type: COUNTER labels: context: USD1 type: context - pattern: 'org.apache.camel<context=([^,]+), type=context, name=([^,]+)><>ExchangesFailed' name: org.apache.camel.ExchangesFailed help: Exchanges Failed type: COUNTER labels: context: USD1 type: context - pattern: 'org.apache.camel<context=([^,]+), type=context, name=([^,]+)><>ExchangesInflight' name: org.apache.camel.ExchangesInflight help: Exchanges Inflight type: GAUGE labels: context: USD1 type: context - pattern: 'org.apache.camel<context=([^,]+), type=context, name=([^,]+)><>ExchangesTotal' name: org.apache.camel.ExchangesTotal help: Exchanges Total type: COUNTER labels: context: USD1 type: context - pattern: 'org.apache.camel<context=([^,]+), type=context, name=([^,]+)><>FailuresHandled' name: org.apache.camel.FailuresHandled help: Failures Handled labels: context: USD1 type: context type: COUNTER - pattern: 'org.apache.camel<context=([^,]+), type=context, name=([^,]+)><>ExternalRedeliveries' name: org.apache.camel.ExternalRedeliveries help: External Redeliveries labels: context: USD1 type: context type: COUNTER - pattern: 'org.apache.camel<context=([^,]+), type=context, name=([^,]+)><>MaxProcessingTime' name: org.apache.camel.MaxProcessingTime help: Maximum Processing Time labels: context: USD1 type: context type: GAUGE - pattern: 'org.apache.camel<context=([^,]+), type=context, name=([^,]+)><>MeanProcessingTime' name: org.apache.camel.MeanProcessingTime help: Mean Processing Time labels: context: USD1 type: context type: GAUGE - pattern: 'org.apache.camel<context=([^,]+), type=context, name=([^,]+)><>MinProcessingTime' name: org.apache.camel.MinProcessingTime help: 
Minimum Processing Time labels: context: USD1 type: context type: GAUGE - pattern: 'org.apache.camel<context=([^,]+), type=context, name=([^,]+)><>LastProcessingTime' name: org.apache.camel.LastProcessingTime help: Last Processing Time labels: context: USD1 type: context type: GAUGE - pattern: 'org.apache.camel<context=([^,]+), type=context, name=([^,]+)><>DeltaProcessingTime' name: org.apache.camel.DeltaProcessingTime help: Delta Processing Time labels: context: USD1 type: context type: GAUGE - pattern: 'org.apache.camel<context=([^,]+), type=context, name=([^,]+)><>Redeliveries' name: org.apache.camel.Redeliveries help: Redeliveries labels: context: USD1 type: context type: COUNTER - pattern: 'org.apache.camel<context=([^,]+), type=context, name=([^,]+)><>TotalProcessingTime' name: org.apache.camel.TotalProcessingTime help: Total Processing Time labels: context: USD1 type: context type: GAUGE - pattern: 'org.apache.camel<context=([^,]+), type=consumers, name=([^,]+)><>InflightExchanges' name: org.apache.camel.InflightExchanges help: Inflight Exchanges labels: context: USD1 type: context type: GAUGE # Route level - pattern: 'org.apache.camel<context=([^,]+), type=routes, name=([^,]+)><>ExchangesCompleted' name: org.apache.camel.ExchangesCompleted help: Exchanges Completed type: COUNTER labels: context: USD1 route: USD2 type: routes - pattern: 'org.apache.camel<context=([^,]+), type=routes, name=([^,]+)><>ExchangesFailed' name: org.apache.camel.ExchangesFailed help: Exchanges Failed type: COUNTER labels: context: USD1 route: USD2 type: routes - pattern: 'org.apache.camel<context=([^,]+), type=routes, name=([^,]+)><>ExchangesInflight' name: org.apache.camel.ExchangesInflight help: Exchanges Inflight type: GAUGE labels: context: USD1 route: USD2 type: routes - pattern: 'org.apache.camel<context=([^,]+), type=routes, name=([^,]+)><>ExchangesTotal' name: org.apache.camel.ExchangesTotal help: Exchanges Total type: COUNTER labels: context: USD1 route: USD2 type: routes - pattern: 'org.apache.camel<context=([^,]+), type=routes, name=([^,]+)><>FailuresHandled' name: org.apache.camel.FailuresHandled help: Failures Handled labels: context: USD1 route: USD2 type: routes type: COUNTER - pattern: 'org.apache.camel<context=([^,]+), type=routes, name=([^,]+)><>ExternalRedeliveries' name: org.apache.camel.ExternalRedeliveries help: External Redeliveries labels: context: USD1 route: USD2 type: routes type: COUNTER - pattern: 'org.apache.camel<context=([^,]+), type=routes, name=([^,]+)><>MaxProcessingTime' name: org.apache.camel.MaxProcessingTime help: Maximum Processing Time labels: context: USD1 route: USD2 type: routes type: GAUGE - pattern: 'org.apache.camel<context=([^,]+), type=routes, name=([^,]+)><>MeanProcessingTime' name: org.apache.camel.MeanProcessingTime help: Mean Processing Time labels: context: USD1 route: USD2 type: routes type: GAUGE - pattern: 'org.apache.camel<context=([^,]+), type=routes, name=([^,]+)><>MinProcessingTime' name: org.apache.camel.MinProcessingTime help: Minimum Processing Time labels: context: USD1 route: USD2 type: routes type: GAUGE - pattern: 'org.apache.camel<context=([^,]+), type=routes, name=([^,]+)><>LastProcessingTime' name: org.apache.camel.LastProcessingTime help: Last Processing Time labels: context: USD1 route: USD2 type: routes type: GAUGE - pattern: 'org.apache.camel<context=([^,]+), type=routes, name=([^,]+)><>DeltaProcessingTime' name: org.apache.camel.DeltaProcessingTime help: Delta Processing Time labels: context: USD1 route: USD2 type: routes type: 
GAUGE - pattern: 'org.apache.camel<context=([^,]+), type=routes, name=([^,]+)><>Redeliveries' name: org.apache.camel.Redeliveries help: Redeliveries labels: context: USD1 route: USD2 type: routes type: COUNTER - pattern: 'org.apache.camel<context=([^,]+), type=routes, name=([^,]+)><>TotalProcessingTime' name: org.apache.camel.TotalProcessingTime help: Total Processing Time labels: context: USD1 route: USD2 type: routes type: GAUGE - pattern: 'org.apache.camel<context=([^,]+), type=routes, name=([^,]+)><>InflightExchanges' name: org.apache.camel.InflightExchanges help: Inflight Exchanges labels: context: USD1 route: USD2 type: routes type: GAUGE # Processor level - pattern: 'org.apache.camel<context=([^,]+), type=processors, name=([^,]+)><>ExchangesCompleted' name: org.apache.camel.ExchangesCompleted help: Exchanges Completed type: COUNTER labels: context: USD1 processor: USD2 type: processors - pattern: 'org.apache.camel<context=([^,]+), type=processors, name=([^,]+)><>ExchangesFailed' name: org.apache.camel.ExchangesFailed help: Exchanges Failed type: COUNTER labels: context: USD1 processor: USD2 type: processors - pattern: 'org.apache.camel<context=([^,]+), type=processors, name=([^,]+)><>ExchangesInflight' name: org.apache.camel.ExchangesInflight help: Exchanges Inflight type: GAUGE labels: context: USD1 processor: USD2 type: processors - pattern: 'org.apache.camel<context=([^,]+), type=processors, name=([^,]+)><>ExchangesTotal' name: org.apache.camel.ExchangesTotal help: Exchanges Total type: COUNTER labels: context: USD1 processor: USD2 type: processors - pattern: 'org.apache.camel<context=([^,]+), type=processors, name=([^,]+)><>FailuresHandled' name: org.apache.camel.FailuresHandled help: Failures Handled labels: context: USD1 processor: USD2 type: processors type: COUNTER - pattern: 'org.apache.camel<context=([^,]+), type=processors, name=([^,]+)><>ExternalRedeliveries' name: org.apache.camel.ExternalRedeliveries help: External Redeliveries labels: context: USD1 processor: USD2 type: processors type: COUNTER - pattern: 'org.apache.camel<context=([^,]+), type=processors, name=([^,]+)><>MaxProcessingTime' name: org.apache.camel.MaxProcessingTime help: Maximum Processing Time labels: context: USD1 processor: USD2 type: processors type: GAUGE - pattern: 'org.apache.camel<context=([^,]+), type=processors, name=([^,]+)><>MeanProcessingTime' name: org.apache.camel.MeanProcessingTime help: Mean Processing Time labels: context: USD1 processor: USD2 type: processors type: GAUGE - pattern: 'org.apache.camel<context=([^,]+), type=processors, name=([^,]+)><>MinProcessingTime' name: org.apache.camel.MinProcessingTime help: Minimum Processing Time labels: context: USD1 processor: USD2 type: processors type: GAUGE - pattern: 'org.apache.camel<context=([^,]+), type=processors, name=([^,]+)><>LastProcessingTime' name: org.apache.camel.LastProcessingTime help: Last Processing Time labels: context: USD1 processor: USD2 type: processors type: GAUGE - pattern: 'org.apache.camel<context=([^,]+), type=processors, name=([^,]+)><>DeltaProcessingTime' name: org.apache.camel.DeltaProcessingTime help: Delta Processing Time labels: context: USD1 processor: USD2 type: processors type: GAUGE - pattern: 'org.apache.camel<context=([^,]+), type=processors, name=([^,]+)><>Redeliveries' name: org.apache.camel.Redeliveries help: Redeliveries labels: context: USD1 processor: USD2 type: processors type: COUNTER - pattern: 'org.apache.camel<context=([^,]+), type=processors, name=([^,]+)><>TotalProcessingTime' name: 
org.apache.camel.TotalProcessingTime help: Total Processing Time labels: context: USD1 processor: USD2 type: processors type: GAUGE - pattern: 'org.apache.camel<context=([^,]+), type=processors, name=([^,]+)><>InflightExchanges' name: org.apache.camel.InflightExchanges help: Inflight Exchanges labels: context: USD1 processor: USD2 type: processors type: COUNTER # Consumers - pattern: 'org.apache.camel<context=([^,]+), type=consumers, name=([^,]+)><>InflightExchanges' name: org.apache.camel.InflightExchanges help: Inflight Exchanges labels: context: USD1 consumer: USD2 type: consumers type: GAUGE # Services - pattern: 'org.apache.camel<context=([^,]+), type=services, name=([^,]+)><>MaxDuration' name: org.apache.camel.MaxDuration help: Maximum Duration labels: context: USD1 service: USD2 type: services type: GAUGE - pattern: 'org.apache.camel<context=([^,]+), type=services, name=([^,]+)><>MeanDuration' name: org.apache.camel.MeanDuration help: Mean Duration labels: context: USD1 service: USD2 type: services type: GAUGE - pattern: 'org.apache.camel<context=([^,]+), type=services, name=([^,]+)><>MinDuration' name: org.apache.camel.MinDuration help: Minimum Duration labels: context: USD1 service: USD2 type: services type: GAUGE - pattern: 'org.apache.camel<context=([^,]+), type=services, name=([^,]+)><>TotalDuration' name: org.apache.camel.TotalDuration help: Total Duration labels: context: USD1 service: USD2 type: services type: GAUGE - pattern: 'org.apache.camel<context=([^,]+), type=services, name=([^,]+)><>ThreadsBlocked' name: org.apache.camel.ThreadsBlocked help: Threads Blocked labels: context: USD1 service: USD2 type: services type: GAUGE - pattern: 'org.apache.camel<context=([^,]+), type=services, name=([^,]+)><>ThreadsInterrupted' name: org.apache.camel.ThreadsInterrupted help: Threads Interrupted labels: context: USD1 service: USD2 type: services type: GAUGE - pattern: 'org.apache.cxf<bus.id=([^,]+), type=([^,]+), service=([^,]+), port=([^,]+), operation=([^,]+)><>NumLogicalRuntimeFaults' name: org.apache.cxf.NumLogicalRuntimeFaults help: Number of logical runtime faults type: GAUGE labels: bus.id: USD1 type: USD2 service: USD3 port: USD4 operation: USD5 - pattern: 'org.apache.cxf<bus.id=([^,]+), type=([^,]+), service=([^,]+), port=([^,]+)><>NumLogicalRuntimeFaults' name: org.apache.cxf.NumLogicalRuntimeFaults help: Number of logical runtime faults type: GAUGE labels: bus.id: USD1 type: USD2 service: USD3 port: USD4 - pattern: 'org.apache.cxf<bus.id=([^,]+), type=([^,]+), service=([^,]+), port=([^,]+), operation=([^,]+)><>AvgResponseTime' name: org.apache.cxf.AvgResponseTime help: Average Response Time type: GAUGE labels: bus.id: USD1 type: USD2 service: USD3 port: USD4 operation: USD5 - pattern: 'org.apache.cxf<bus.id=([^,]+), type=([^,]+), service=([^,]+), port=([^,]+)><>AvgResponseTime' name: org.apache.cxf.AvgResponseTime help: Average Response Time type: GAUGE labels: bus.id: USD1 type: USD2 service: USD3 port: USD4 - pattern: 'org.apache.cxf<bus.id=([^,]+), type=([^,]+), service=([^,]+), port=([^,]+), operation=([^,]+)><>NumInvocations' name: org.apache.cxf.NumInvocations help: Number of invocations type: GAUGE labels: bus.id: USD1 type: USD2 service: USD3 port: USD4 operation: USD5 - pattern: 'org.apache.cxf<bus.id=([^,]+), type=([^,]+), service=([^,]+), port=([^,]+)><>NumInvocations' name: org.apache.cxf.NumInvocations help: Number of invocations type: GAUGE labels: bus.id: USD1 type: USD2 service: USD3 port: USD4 - pattern: 'org.apache.cxf<bus.id=([^,]+), type=([^,]+), 
service=([^,]+), port=([^,]+), operation=([^,]+)><>MaxResponseTime' name: org.apache.cxf.MaxResponseTime help: Maximum Response Time type: GAUGE labels: bus.id: USD1 type: USD2 service: USD3 port: USD4 operation: USD5 - pattern: 'org.apache.cxf<bus.id=([^,]+), type=([^,]+), service=([^,]+), port=([^,]+)><>MaxResponseTime' name: org.apache.cxf.MaxResponseTime help: Maximum Response Time type: GAUGE labels: bus.id: USD1 type: USD2 service: USD3 port: USD4 - pattern: 'org.apache.cxf<bus.id=([^,]+), type=([^,]+), service=([^,]+), port=([^,]+), operation=([^,]+)><>MinResponseTime' name: org.apache.cxf.MinResponseTime help: Minimum Response Time type: GAUGE labels: bus.id: USD1 type: USD2 service: USD3 port: USD4 operation: USD5 - pattern: 'org.apache.cxf<bus.id=([^,]+), type=([^,]+), service=([^,]+), port=([^,]+), operation=([^,]+)><>MinResponseTime' name: org.apache.cxf.MinResponseTime help: Minimum Response Time type: GAUGE labels: bus.id: USD1 type: USD2 service: USD3 port: USD4 - pattern: 'org.apache.cxf<bus.id=([^,]+), type=([^,]+), service=([^,]+), port=([^,]+), operation=([^,]+)><>TotalHandlingTime' name: org.apache.cxf.TotalHandlingTime help: Total Handling Time type: GAUGE labels: bus.id: USD1 type: USD2 service: USD3 port: USD4 operation: USD5 - pattern: 'org.apache.cxf<bus.id=([^,]+), type=([^,]+), service=([^,]+), port=([^,]+)><>TotalHandlingTime' name: org.apache.cxf.TotalHandlingTime help: Total Handling Time type: GAUGE labels: bus.id: USD1 type: USD2 service: USD3 port: USD4 - pattern: 'org.apache.cxf<bus.id=([^,]+), type=([^,]+), service=([^,]+), port=([^,]+), operation=([^,]+)><>NumRuntimeFaults' name: org.apache.cxf.NumRuntimeFaults help: Number of runtime faults type: GAUGE labels: bus.id: USD1 type: USD2 service: USD3 port: USD4 operation: USD5 - pattern: 'org.apache.cxf<bus.id=([^,]+), type=([^,]+), service=([^,]+), port=([^,]+)><>NumRuntimeFaults' name: org.apache.cxf.NumRuntimeFaults help: Number of runtime faults type: GAUGE labels: bus.id: USD1 type: USD2 service: USD3 port: USD4 - pattern: 'org.apache.cxf<bus.id=([^,]+), type=([^,]+), service=([^,]+), port=([^,]+), operation=([^,]+)><>NumUnCheckedApplicationFaults' name: org.apache.cxf.NumUnCheckedApplicationFaults help: Number of unchecked application faults type: GAUGE labels: bus.id: USD1 type: USD2 service: USD3 port: USD4 operation: USD5 - pattern: 'org.apache.cxf<bus.id=([^,]+), type=([^,]+), service=([^,]+), port=([^,]+)><>NumUnCheckedApplicationFaults' name: org.apache.cxf.NumUnCheckedApplicationFaults help: Number of unchecked application faults type: GAUGE labels: bus.id: USD1 type: USD2 service: USD3 port: USD4 - pattern: 'org.apache.cxf<bus.id=([^,]+), type=([^,]+), service=([^,]+), port=([^,]+), operation=([^,]+)><>NumCheckedApplicationFaults' name: org.apache.cxf.NumCheckedApplicationFaults help: Number of checked application faults type: GAUGE labels: bus.id: USD1 type: USD2 service: USD3 port: USD4 operation: USD5 - pattern: 'org.apache.cxf<bus.id=([^,]+), type=([^,]+), service=([^,]+), port=([^,]+)><>NumCheckedApplicationFaults' name: org.apache.cxf.NumCheckedApplicationFaults help: Number of checked application faults type: GAUGE labels: bus.id: USD1 type: USD2 service: USD3 port: USD4", "import java.io.InputStream; import io.micrometer.core.instrument.Clock; import org.apache.camel.CamelContext; import org.apache.camel.spring.boot.CamelContextConfiguration; import org.springframework.context.annotation.Bean; import org.apache.camel.component.micrometer.MicrometerConstants; import 
org.apache.camel.component.micrometer.eventnotifier.MicrometerExchangeEventNotifier; import org.apache.camel.component.micrometer.eventnotifier.MicrometerRouteEventNotifier; import org.apache.camel.component.micrometer.messagehistory.MicrometerMessageHistoryFactory; import org.apache.camel.component.micrometer.routepolicy.MicrometerRoutePolicyFactory;", "@SpringBootApplication public class SampleCamelApplication { @Bean(name = {MicrometerConstants.METRICS_REGISTRY_NAME, \"prometheusMeterRegistry\"}) public PrometheusMeterRegistry prometheusMeterRegistry( PrometheusConfig prometheusConfig, CollectorRegistry collectorRegistry, Clock clock) throws MalformedObjectNameException, IOException { InputStream resource = new ClassPathResource(\"config/prometheus_exporter_config.yml\").getInputStream(); new JmxCollector(resource).register(collectorRegistry); new BuildInfoCollector().register(collectorRegistry); return new PrometheusMeterRegistry(prometheusConfig, collectorRegistry, clock); } @Bean public CamelContextConfiguration camelContextConfiguration(@Autowired PrometheusMeterRegistry registry) { return new CamelContextConfiguration() { @Override public void beforeApplicationStart(CamelContext camelContext) { MicrometerRoutePolicyFactory micrometerRoutePolicyFactory = new MicrometerRoutePolicyFactory(); micrometerRoutePolicyFactory.setMeterRegistry(registry); camelContext.addRoutePolicyFactory(micrometerRoutePolicyFactory); MicrometerMessageHistoryFactory micrometerMessageHistoryFactory = new MicrometerMessageHistoryFactory(); micrometerMessageHistoryFactory.setMeterRegistry(registry); camelContext.setMessageHistoryFactory(micrometerMessageHistoryFactory); MicrometerExchangeEventNotifier micrometerExchangeEventNotifier = new MicrometerExchangeEventNotifier(); micrometerExchangeEventNotifier.setMeterRegistry(registry); camelContext.getManagementStrategy().addEventNotifier(micrometerExchangeEventNotifier); MicrometerRouteEventNotifier micrometerRouteEventNotifier = new MicrometerRouteEventNotifier(); micrometerRouteEventNotifier.setMeterRegistry(registry); camelContext.getManagementStrategy().addEventNotifier(micrometerRouteEventNotifier); } @Override public void afterApplicationStart(CamelContext camelContext) { } }; }", "mvn -Popenshift oc:deploy", "get pods -n myapp NAME READY STATUS RESTARTS AGE camel-example-spring-boot-xml-2-deploy 0/1 Completed 0 13m camel-example-spring-boot-xml-2-x78rk 1/1 Running 0 13m camel-example-spring-boot-xml-s2i-2-build 0/1 Completed 0 14m", "apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: labels: k8s-app: csb-demo-monitor name: csb-demo-monitor spec: endpoints: - interval: 30s port: http scheme: http path: /actuator/prometheus selector: matchLabels: app: camel-example-spring-boot-xml", "apply -f servicemonitor.yml servicemonitor.monitoring.coreos.com/csb-demo-monitor \"myapp\" created", "get servicemonitor NAME AGE csb-demo-monitor 9m17s" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/getting_started_with_red_hat_build_of_apache_camel_for_spring_boot/monitoring-csb-integrations
4.4. Configuring Multiple Monitors
4.4. Configuring Multiple Monitors 4.4.1. Configuring Multiple Displays for Red Hat Enterprise Linux Virtual Machines A maximum of four displays can be configured for a single Red Hat Enterprise Linux virtual machine when connecting to the virtual machine using the SPICE protocol. Start a SPICE session with the virtual machine. Open the View drop-down menu at the top of the SPICE client window. Open the Display menu. Click the name of a display to enable or disable that display. Note By default, Display 1 is the only display that is enabled on starting a SPICE session with a virtual machine. If no other displays are enabled, disabling this display will close the session. 4.4.2. Configuring Multiple Displays for Windows Virtual Machines A maximum of four displays can be configured for a single Windows virtual machine when connecting to the virtual machine using the SPICE protocol. Click Compute Virtual Machines and select a virtual machine. With the virtual machine in a powered-down state, click Edit . Click the Console tab. Select the number of displays from the Monitors drop-down list. Note This setting controls the maximum number of displays that can be enabled for the virtual machine. While the virtual machine is running, additional displays can be enabled up to this number. Click OK . Start a SPICE session with the virtual machine. Open the View drop-down menu at the top of the SPICE client window. Open the Display menu. Click the name of a display to enable or disable that display. Note By default, Display 1 is the only display that is enabled on starting a SPICE session with a virtual machine. If no other displays are enabled, disabling this display will close the session.
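As a hedged illustration of the "Start a SPICE session" step, the session is typically opened from a client machine with the remote-viewer utility, passing it the console.vv connection file downloaded from the portal (the file name here is an assumption for this sketch):
# console.vv is the connection file downloaded when you click Console in the portal
remote-viewer console.vv
Once the client window opens, the View and Display menus described in this procedure become available.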
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/virtual_machine_management_guide/sect-configuring_multiple_monitors
2.46. RHEA-2011:1442 - new packages: tdb-tools
2.46. RHEA-2011:1442 - new packages: tdb-tools New tdb-tools packages are now available for Red Hat Enterprise Linux 6. The tdb-tools packages contain tools that can be used to back up and manage tdb files created by Samba. This enhancement update adds the tdb-tools packages to Red Hat Enterprise Linux 6. (BZ#717690) All tdb users who wish to back up and manage tdb files are advised to install these new packages.
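A minimal sketch of installing and using these tools; the Samba database path below is an assumption for illustration:
yum install tdb-tools
# writes secrets.tdb.bak next to the original database
tdbbackup /var/lib/samba/private/secrets.tdb
See the tdbbackup(8) and tdbtool(8) manual pages for the full set of options.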
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.1_technical_notes/rhea-2011-1442
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant parts of the documentation. Click Submit Bug .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_multiple_openshift_data_foundation_storage_clusters/providing-feedback-on-red-hat-documentation_rhodf
Chapter 23. OProfile
Chapter 23. OProfile OProfile is a low overhead, system-wide performance monitoring tool provided by the oprofile package. It uses performance monitoring hardware on the system's processor to retrieve information about the kernel and executables on the system, such as when memory is referenced, the number of second-level cache requests, and the number of hardware interrupts received. OProfile is also able to profile applications that run in a Java Virtual Machine (JVM). The following is a selection of the tools provided by OProfile : ophelp Displays available events for the system's processor along with a brief description of each event. operf The main profiling tool. The operf tool uses the Linux Performance Events subsystem, which allows OProfile to operate alongside other tools using performance monitoring hardware of your system. Unlike the previously used opcontrol tool, no initial setup is required, and it can be used without root privileges unless the --system-wide option is used. ocount A tool for counting the absolute number of event occurrences. It can count events on the whole system, per process, per CPU, or per thread. opimport Converts sample database files from a foreign binary format to the native format for the system. Only use this option when analyzing a sample database from a different architecture. opannotate Creates an annotated source for an executable if the application was compiled with debugging symbols. opreport Reads the recorded performance data and generates a summary as specified by the profile specification. It is possible to generate different reports from the same profile data using different profile specifications. 23.1. Using OProfile operf is the recommended tool for collecting profiling data. The tool does not require any initial configuration, and all options are passed to it on the command line. Unlike the legacy opcontrol tool, operf can run without root privileges. See the Using operf chapter in the System Administrator's Guide for detailed instructions on how to use the operf tool. Example 23.1. Using ocount The following example shows counting the amount of events with ocount during execution of the sleep utility: Note The events are processor implementation specific. It might be necessary to set the option perf_event_paranoid or limit the counts to only user-space events. Example 23.2. Basic operf usage In the following example, the operf tool is used to collect profiling data from the ls -l ~ command. Install debugging information for the ls command: Run the profiling: Analyze the collected data: Example 23.3. Using operf to Profile a Java Program In the following example, the operf tool is used to collect profiling data from a Java (JIT) program, and the opreport tool is then used to output per-symbol data. Install the demonstration Java program used in this example. It is a part of the java-1.8.0-openjdk-demo package, which is included in the Optional channel. See Adding the Optional and Supplementary Repositories for instructions on how to use the Optional channel. When the Optional channel is enabled, install the package: Install the oprofile-jit package for OProfile to be able to collect profiling data from Java programs: Create a directory for OProfile data: Change into the directory with the demonstration program: Start the profiling: Change into the home directory and analyze the collected data: A sample output may look like the following: 23.2. 
OProfile Documentation For more extensive information on OProfile , see the oprofile (1) manual page. Red Hat Enterprise Linux also provides two comprehensive guides to OProfile in file:///usr/share/doc/oprofile-<version>/: OProfile Manual A comprehensive manual with detailed instructions on the setup and use of OProfile is available at file:///usr/share/doc/oprofile-<version>/oprofile.html OProfile Internals Documentation on the internal workings of OProfile , useful for programmers interested in contributing to the OProfile upstream, can be found at file:///usr/share/doc/oprofile-<version>/internals.html
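To complement the ocount and operf examples in this chapter, the following is a minimal sketch of the opannotate workflow described above, assuming a small C program; the file names are placeholders:
# compile with debugging symbols so opannotate can map samples to source lines
gcc -g -O2 -o myprog myprog.c
operf ./myprog
opannotate --source ./myprog > myprog_annotated.c
The annotated output interleaves the program's source lines with the sample counts collected by operf.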
[ "ocount -e INST_RETIRED -- sleep 1 Events were actively counted for 1.0 seconds. Event counts (actual) for /bin/sleep: Event Count % time counted INST_RETIRED 683,011 100.00", "debuginfo-install -y coreutils", "operf ls -l ~ Profiling done.", "opreport --symbols CPU: Intel Skylake microarchitecture, speed 3.4e+06 MHz (estimated) Counted cpu_clk_unhalted events () with a unit mask of 0x00 (Core cycles when at least one thread on the physical core is not in halt state) count 100000 samples % image name symbol name 161 81.3131 no-vmlinux /no-vmlinux 3 1.5152 libc-2.17.so get_next_seq 3 1.5152 libc-2.17.so strcoll_l 2 1.0101 ld-2.17.so _dl_fixup 2 1.0101 ld-2.17.so _dl_lookup_symbol_x [...]", "yum install java-1.8.0-openjdk-demo", "yum install oprofile-jit", "mkdir ~/oprofile_data", "cd /usr/lib/jvm/java-1.8.0-openjdk/demo/applets/MoleculeViewer/", "operf -d ~/oprofile_data appletviewer -J\"-agentpath:/usr/lib64/oprofile/libjvmti_oprofile.so\" example2.html", "cd", "opreport --symbols --threshold 0.5", "opreport --symbols --threshold 0.5 Using /home/rkratky/oprofile_data/samples/ for samples directory. CPU: Intel Ivy Bridge microarchitecture, speed 3600 MHz (estimated) Counted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a unit mask of 0x00 (No unit mask) count 100000 samples % image name symbol name 14270 57.1257 libjvm.so /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.51-1.b16.el7_1.x86_64/jre/lib/amd64/server/libjvm.so 3537 14.1593 23719.jo Interpreter 690 2.7622 libc-2.17.so fgetc 581 2.3259 libX11.so.6.3.0 /usr/lib64/libX11.so.6.3.0 364 1.4572 libpthread-2.17.so pthread_getspecific 130 0.5204 libfreetype.so.6.10.0 /usr/lib64/libfreetype.so.6.10.0 128 0.5124 libc-2.17.so __memset_sse2" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/developer_guide/oprofile
14.2. Using system-config-lvm
14.2. Using system-config-lvm The LVM utility allows you to manage logical volumes within X windows or graphically. It does not come pre-installed, so to install it first run: You can then access the application by selecting from your menu panel System Administration Logical Volume Management . Alternatively you can start the Logical Volume Management utility by typing system-config-lvm from a terminal. In the example used in this section, the following are the details for the volume group that was created during the installation: Example 14.1. Creating a volume group at installation The logical volumes above were created in disk entity /dev/hda2 while /boot was created in /dev/hda1 . The system also consists of 'Uninitialised Entities' which are illustrated in Example 14.2, "Uninitialized entries" . The figure below illustrates the main window in the LVM utility. The logical and the physical views of the above configuration are illustrated below. The three logical volumes exist on the same physical volume (hda2). Figure 14.3. Main LVM Window The figure below illustrates the physical view for the volume. In this window, you can select and remove a volume from the volume group or migrate extents from the volume to another volume group. Steps to migrate extents are discussed in Figure 14.10, "Migrate Extents" . Figure 14.4. Physical View Window The figure below illustrates the logical view for the selected volume group. The individual logical volume sizes are also illustrated. Figure 14.5. Logical View Window On the left side column, you can select the individual logical volumes in the volume group to view more details about each. In this example the objective is to rename the logical volume name for 'LogVol03' to 'Swap'. To perform this operation select the respective logical volume from the list (as opposed to the image) and click on the Edit Properties button. This will display the Edit Logical Volume window from which you can modify the Logical volume name, size (in extents, gigabytes, megabytes, or kilobytes) and also use the remaining space available in a logical volume group. The figure below illustrates this. This logical volume cannot be changed in size as there is currently no free space in the volume group. If there was remaining space, this option would be enabled (see Figure 14.17, "Edit logical volume" ). Click on the OK button to save your changes (this will remount the volume). To cancel your changes click on the Cancel button. To revert to the last snapshot settings click on the Revert button. A snapshot can be created by clicking on the Create Snapshot button on the LVM utility window. If the selected logical volume is in use by the system, the root directory for example, this task will not be successful as the volume cannot be unmounted. Figure 14.6. Edit Logical Volume 14.2.1. Utilizing Uninitialized Entities 'Uninitialized Entities' consist of unpartitioned space and non LVM file systems. In this example partitions 3, 4, 5, 6 and 7 were created during installation and some unpartitioned space was left on the hard disk. View each partition and ensure that you read the 'Properties for Disk Entity' on the right column of the window to ensure that you do not delete critical data. In this example partition 1 cannot be initialized as it is /boot . Uninitialized entities are illustrated below. Example 14.2. Uninitialized entries In this example, partition 3 will be initialized and added to an existing volume group. 
To initialize a partition or unpartitioned space, select the partition and click on the Initialize Entity button. Once initialized, a volume will be listed in the 'Unallocated Volumes' list.
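The rename described above can also be performed from the command line with the lvrename utility; a hedged equivalent, assuming the volume group created at installation is named VolGroup00:
# rename LogVol03 to Swap in volume group VolGroup00
lvrename VolGroup00 LogVol03 Swap
# confirm the logical volume now appears as Swap
lvs VolGroup00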
[ "yum install system-config-lvm", "/boot - (Ext3) file system. Displayed under 'Uninitialized Entities'. (DO NOT initialize this partition). LogVol00 - (LVM) contains the (/) directory (312 extents). LogVol02 - (LVM) contains the (/home) directory (128 extents). LogVol03 - (LVM) swap (28 extents)." ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/s1-system-config-lvm
Chapter 65. KafkaAutoRebalanceConfiguration schema reference
Chapter 65. KafkaAutoRebalanceConfiguration schema reference Used in: CruiseControlSpec Property Property type Description mode string (one of [remove-brokers, add-brokers]) Specifies the mode for automatically rebalancing when brokers are added or removed. Supported modes are add-brokers and remove-brokers . template LocalObjectReference Reference to the KafkaRebalance custom resource to be used as the configuration template for the auto-rebalancing that runs on scaling in the corresponding mode.
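A minimal sketch of how these properties might appear in a Kafka custom resource; the parent path (spec.cruiseControl.autoRebalance) and the template name are assumptions for illustration:
spec:
  cruiseControl:
    # assumed location of the auto-rebalance configuration list
    autoRebalance:
      - mode: add-brokers
        template:
          name: my-rebalance-template
      - mode: remove-brokers
        template:
          name: my-rebalance-template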
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-kafkaautorebalanceconfiguration-reference
Chapter 4. Getting Started with Virtualization Command-line Interface
Chapter 4. Getting Started with Virtualization Command-line Interface The standard method of operating virtualization on Red Hat Enterprise Linux 7 is using the command-line user interface (CLI). Entering CLI commands activates system utilities that create or interact with virtual machines on the host system. This method offers more detailed control than using graphical applications such as virt-manager and provides opportunities for scripting and automation. 4.1. Primary Command-line Utilities for Virtualization The following subsections list the main command-line utilities you can use to set up and manage virtualization on Red Hat Enterprise Linux 7. These commands, as well as numerous other virtualization utilities, are included in packages provided by the Red Hat Enterprise Linux repositories and can be installed using the Yum package manager . For more information about installing virtualization packages, see the Virtualization Deployment and Administration Guide . 4.1.1. virsh virsh is a CLI utility for managing hypervisors and guest virtual machines. It is the primary means of controlling virtualization on Red Hat Enterprise Linux 7. Its capabilities include: Creating, configuring, pausing, listing, and shutting down virtual machines Managing virtual networks Loading virtual machine disk images The virsh utility is ideal for creating virtualization administration scripts. Users without root privileges can use virsh as well, but in read-only mode. Using virsh The virsh utility can be used in a standard command-line input, but also as an interactive shell. In shell mode, the virsh command prefix is not needed, and the user is always registered as root. The following example uses the virsh hostname command to display the hypervisor's host name - first in standard mode, then in interactive mode. Important When using virsh as a non-root user, you enter an unprivileged libvirt session , which means you cannot see or interact with guests or any other virtualized elements created by the root. To gain read-only access to the elements, use virsh with the -c qemu:///system option. Getting help with virsh Like with all Linux bash commands, you can obtain help with virsh by using the man virsh command or the --help option. In addition, the virsh help command can be used to view the help text of a specific virsh command, or, by using a keyword, to list all virsh commands that belong to a certain group. The virsh command groups and their respective keywords are as follows: Guest management - keyword domain Guest monitoring - keyword monitor Host and hypervisor monitoring and management- keyword host Host system network interface management - keyword interface Virtual network management - keyword network Network filter management - keyword filter Node device management - keyword nodedev Management of secrets, such as passphrases or encryption keys - keyword secret Snapshot management - keyword snapshot Storage pool management - keyword pool Storage volume management - keyword volume General virsh usage - keyword virsh In the following example, you need to learn how to rename a guest virtual machine. By using virsh help , you first find the proper command to use and then learn its syntax. Finally, you use the command to rename a guest called Fontaine to Atlas . Example 4.1. How to list help for all commands with a keyword Note For more information about managing virtual machines using virsh , see the Red Hat Enterprise Linux 7 Virtualization Deployment and Administration Guide . 4.1.2. 
virt-install virt-install is a CLI utility for creating new virtual machines. It supports both text-based and graphical installations, using serial console, SPICE, or VNC client-server pair graphics. Installation media can be local, or exist remotely on an NFS, HTTP, or FTP server. The tool can also be configured to run unattended and use the kickstart method to prepare the guest, allowing for easy automation of installation. This tool is included in the virt-install package. Important When using virt-install as a non-root user, you enter an unprivileged libvirt session . This means that the created guest will only be visible to you, and it will not have access to certain capabilities that guests created by the root have. Note For more information about using virt-install , see the Red Hat Enterprise Linux 7 Virtualization Deployment and Administration Guide . 4.1.3. virt-xml virt-xml is a command-line utility for editing domain XML files. For the XML configuration to be modified successfully, the name of the guest, the XML action, and the change to make must be included with the command. For example, the following lists the suboptions that relate to guest boot configuration, and then turns on the boot menu on the example_domain guest: Note that each invocation of the command can perform one action on one domain XML file. Note This tool is included in the virt-install package. For more information about using virt-xml , see the virt-xml man pages. 4.1.4. guestfish guestfish is a command-line utility for examining and modifying virtual machine disk images. It uses the libguestfs library and exposes all functionalities provided by the libguestfs API. Using guestfish The guestfish utility can be used in a standard command-line input mode, but also as an interactive shell. In shell mode, the guestfish command prefix is not needed, and the user is always registered as root. The following example uses the guestfish to display the file systems on the testguest virtual machine - first in standard mode, then in interactive mode. In addition, guestfish can be used in bash scripts for automation purposes. Important When using guestfish as a non-root user, you enter an unprivileged libvirt session . This means you cannot see or interact with disk images on guests created by the root. To gain read-only access to these disk images, use guestfish with the -ro -c qemu:///system options. In addition, you must have read privileges for the disk image files. Getting help with guestfish Like with all Linux bash commands, you can obtain help with guestfish by using the man guestfish command or the --help option. In addition, the guestfish help command can be used to view detailed information about a specific guestfish command. The following example displays information about the guestfish add command: Note For more information about guestfish , see the Red Hat Enterprise Linux 7 Virtualization Deployment and Administration Guide .
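The command listings below cover virsh, virt-xml, and guestfish; as a hedged complement, a text-mode virt-install invocation might look like the following, where the guest name, disk size, and installation tree URL are placeholders:
# text-mode network installation directed to the serial console
virt-install --name rhel7-guest --memory 2048 --vcpus 2 \
  --disk size=10 --location http://ftp.example.com/rhel7/ \
  --graphics none --extra-args "console=ttyS0"
The --graphics none and --extra-args "console=ttyS0" options send the installer output to the serial console, which suits scripted or unattended use.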
[ "virsh hostname localhost.localdomain USD virsh Welcome to virsh, the virtualization interactive terminal. Type: 'help' for help with commands 'quit' to quit virsh # hostname localhost.localdomain", "virsh help domain Domain Management (help keyword 'domain'): attach-device attach device from an XML file attach-disk attach disk device [...] domname convert a domain id or UUID to domain name domrename rename a domain [...] virsh help domrename NAME domrename - rename a domain SYNOPSIS domrename <domain> <new-name> DESCRIPTION Rename an inactive domain. OPTIONS [--domain] <string> domain name, id or uuid [--new-name] <string> new domain name virsh domrename --domain Fontaine --new-name Atlas Domain successfully renamed", "virt-xml boot=? --boot options: arch cdrom [...] menu network nvram nvram_template os_type smbios_mode uefi useserial virt-xml example_domain --edit --boot menu=on Domain 'example_domain' defined successfully.", "guestfish domain testguest : run : list-filesystems /dev/sda1: xfs /dev/rhel/root: xfs /dev/rhel/swap: swap guestfish Welcome to guestfish, the guest filesystem shell for editing virtual machine filesystems and disk images. Type: 'help' for help on commands 'man' to read the manual 'quit' to quit the shell ><fs> domain testguest ><fs> run ><fs> list-filesystems /dev/sda1: xfs /dev/rhel/root: xfs /dev/rhel/swap: swap", "guestfish help add NAME add-drive - add an image to examine or modify SYNOPSIS add-drive filename [readonly:true|false] [format:..] [iface:..] [name:..] [label:..] [protocol:..] [server:..] [username:..] [secret:..] [cachemode:..] [discard:..] [copyonread:true|false] DESCRIPTION This function adds a disk image called filename to the handle. filename may be a regular host file or a host device. [...]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_getting_started_guide/chap-CLI-Intro
Managing users
Managing users Red Hat OpenShift AI Cloud Service 1 Manage user permissions in Red Hat OpenShift AI
null
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/managing_users/index
8.237. systemtap
8.237. systemtap 8.237.1. RHBA-2014:1449 - systemtap bug fix and enhancement update Updated systemtap packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. SystemTap is a tracing and probing tool to analyze and monitor activities of the operating system, including the kernel. It provides a wide range of filtering and analysis options. Note The systemtap packages have been upgraded to upstream version 2.5-5, which provides a number of bug fixes and enhancements over the previous version. (BZ# 1038692 , BZ# 1074541 ) This update also fixes the following bugs: Bug Fixes BZ# 1020437 Previously, the kernel added some trace points in a way which failed to expose them to systemtap's search mechanisms. This update extends systemtap's search mechanism to include these extra trace points, thus allowing them to be probed from a systemtap script. BZ# 1027459 The SystemTap runtime could attempt to open a file named "trace1" in the current directory rather than the one that was created in the /sys/kernel/debug/systemtap/ directory. As a consequence, SystemTap terminated unexpectedly if such a file existed in the current directory. This update applies a patch to fix this bug and SystemTap now works as expected in the described scenario. BZ# 1109084 Previously, it was not entirely clear that names of scripts configured with the SystemTap init script service could contain only alphanumeric characters and the underscore character ("_"). Also, the first character in a name cannot be a number. This update adds this information to the systemtap(8) manual page to prevent any confusion. Users of systemtap are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
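As a hedged illustration of the kind of instrumentation SystemTap provides, the following one-liner is a common smoke test after updating the packages:
# prints a message and exits, confirming the SystemTap runtime works
stap -v -e 'probe begin { printf("Hello from SystemTap\n"); exit() }'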
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/systemtap
Chapter 11. Preparing a RHEL installation on 64-bit IBM Z
Chapter 11. Preparing a RHEL installation on 64-bit IBM Z This section describes how to install Red Hat Enterprise Linux on the 64-bit IBM Z architecture. 11.1. Planning for installation on 64-bit IBM Z Red Hat Enterprise Linux 8 runs on z13 or later IBM mainframe systems. The installation process assumes that you are familiar with the 64-bit IBM Z and can set up logical partitions (LPARs) and z/VM guest virtual machines. For installation of Red Hat Enterprise Linux on 64-bit IBM Z, Red Hat supports Direct Access Storage Device (DASD), Fiber Channel Protocol (FCP) storage devices, and virtio-blk and virtio-scsi devices. When using FCP devices, Red Hat recommends using them in multipath configuration for better reliability. Important DASDs are disks that allow a maximum of three partitions per device. For example, dasda can have partitions dasda1 , dasda2 , and dasda3 . Pre-installation decisions Whether the operating system is to be run on an LPAR, KVM, or as a z/VM guest operating system. Network configuration. Red Hat Enterprise Linux 8 for 64-bit IBM Z supports the following network devices: Real and virtual Open Systems Adapter (OSA) Real and virtual HiperSockets LAN channel station (LCS) for real OSA virtio-net devices RDMA over Converged Ethernet (RoCE) Ensure you select machine type as ESA for your z/VM VMs, because selecting any other machine types might prevent RHEL from installing. See the IBM documentation . Note When initializing swap space on a Fixed Block Architecture (FBA) DASD using the SWAPGEN utility, the FBAPART option must be used. Additional resources For additional information about system requirements, see RHEL Technology Capabilities and Limits For additional information about 64-bit IBM Z, see IBM documentation . For additional information about using secure boot with Linux on IBM Z, see Secure boot for Linux on IBM Z . For installation instructions on IBM Power Servers, refer to IBM installation documentation . To see if your system is supported for installing RHEL, refer to https://catalog.redhat.com . 11.2. Boot media compatibility for IBM Z servers The following table provides detailed information about the supported boot media options for installing Red Hat Enterprise Linux (RHEL) on 64-bit IBM Z servers. It outlines the compatibility of each boot medium with different system types and indicates whether the zipl boot loader is used. This information helps you determine the most suitable boot medium for your specific environment. System type / Boot media Uses zipl boot loader z/VM KVM LPAR z/VM Reader No Yes N/A N/A SE or HMC (remote SFTP, FTPS, FTP server, DVD) No N/A N/A Yes DASD Yes Yes Yes Yes FCP SCSI LUNs Yes Yes Yes Yes FCP SCSI DVD Yes Yes Yes Yes N/A indicates that the boot medium is not applicable for the specified system type. 11.3. Supported environments and components for IBM Z servers The following tables provide information about the supported environments, network devices, machine types, and storage types for different system types when installing Red Hat Enterprise Linux (RHEL) on 64-bit IBM Z servers. Use these tables to identify the compatibility of various components with your specific system configuration. Table 11.1. 
Network device compatibility for system types Network device z/VM KVM LPAR Open Systems Adapter (OSA) Yes N/A Yes HiperSockets Yes N/A Yes LAN channel station (LCS) Yes N/A Yes virtio-net N/A Yes N/A RDMA over Converged Ethernet (RoCE) Yes Yes Yes N/A indicates that the component is not applicable for the specified system type. Table 11.2. Machine type compatibility for system types Machine type z/VM KVM LPAR ESA Yes N/A N/A s390-virtio-ccw N/A Yes N/A N/A indicates that the component is not applicable for the specified system type. Table 11.3. Storage type compatibility for system types Storage type z/VM KVM LPAR DASD Yes Yes Yes FCP SCSI Yes Yes [a] Yes virtio-blk N/A Yes N/A [a] Conditional support based on configuration N/A indicates that the component is not applicable for the specified system type. 11.4. Overview of installation process on 64-bit IBM Z servers You can install Red Hat Enterprise Linux on 64-bit IBM Z interactively or in unattended mode. Installation on 64-bit IBM Z differs from other architectures as it is typically performed over a network, and not from local media. The installation consists of three phases: Booting the installation Connect to the mainframe Customize the boot parameters Perform an initial program load (IPL), or boot from the media containing the installation program Connecting to the installation system From a local machine, connect to the remote 64-bit IBM Z system using SSH, and start the installation program using Virtual Network Computing (VNC) Completing the installation using the RHEL installation program 11.5. Boot media for installing RHEL on 64-bit IBM Z servers After establishing a connection with the mainframe, you need to perform an initial program load (IPL), or boot, from the medium containing the installation program. This document describes the most common methods of installing Red Hat Enterprise Linux on 64-bit IBM Z. In general, any method may be used to boot the Linux installation system, which consists of a kernel ( kernel.img ) and initial RAM disk ( initrd.img ) with parameters in the generic.prm file supplemented by user defined parameters. Additionally, a generic.ins file is loaded which determines file names and memory addresses for the initrd, kernel and generic.prm . The Linux installation system is also called the installation program in this book. You can use the following boot media only if Linux is to run as a guest operating system under z/VM: z/VM reader You can use the following boot media only if Linux is to run in LPAR mode: SE or HMC through a remote SFTP, FTPS or FTP server SE or HMC DVD You can use the following boot media for both z/VM and LPAR: DASD SCSI disk device that is attached through an FCP channel FCP-attached SCSI DVD If you use DASD or an FCP-attached SCSI disk device as boot media, you must have a configured zipl boot loader. 11.6. Customizing boot parameters Before the installation can begin, you must configure some mandatory boot parameters. When installing through z/VM, these parameters must be configured before you boot into the generic.prm file. When installing on an LPAR, the rd.cmdline parameter is set to ask by default, meaning that you will be given a prompt on which you can enter these boot parameters. In both cases, the required parameters are the same. All network configuration can either be specified by using a parameter file, or at the prompt. Installation source An installation source must always be configured. 
Use the inst.repo option to specify the package source for the installation. Network devices Network configuration must be provided if network access will be required during the installation. If you plan to perform an unattended (Kickstart-based) installation by using only local media such as a disk, network configuration can be omitted. ip= Use the ip= option for basic network configuration, and other options as required for RoCE configuration. rd.znet= Also use the rd.znet= kernel option, which takes a network protocol type, a comma delimited list of sub-channels, and, optionally, comma delimited sysfs parameter and value pairs for qeth devices. This parameter can be specified multiple times to activate multiple network devices. For example: When specifying multiple rd.znet boot options, only the last one is passed on to the kernel command line of the installed system. This does not affect the networking of the system since all network devices configured during installation are properly activated and configured at boot. The qeth device driver assigns the same interface name for Ethernet and Hipersockets devices: enc <device number> . The bus ID is composed of the channel subsystem ID, subchannel set ID, and device number, separated by dots; the device number is the last part of the bus ID, without leading zeroes and dots. For example, the interface name will be enca00 for a device with the bus ID 0.0.0a00 . net.naming-scheme= The udev service renames network devices to assign, preferably, consistent names. By using the net.naming-scheme= parameter, you can influence the naming scheme. For details, see Implementing consistent network interface naming . Note If you use Remote direct memory access (RDMA) over Converged Ethernet (RoCE) devices, it depends on multiple factors whether Red Hat Enterprise Linux (RHEL) assigns a predictable or unpredictable name. However, during the installation, RHEL always assigns an unpredictable name to RoCE devices that are enumerated by a function identifier (FID), but you can configure RHEL after the installation to assign a predictable name to these RoCE devices. For details about the factors that influence the naming of RoCE devices and how to configure a consistent name after the installation for RoCE devices enumerated by FID, see Determining a predictable RoCE device name on the IBM Z platform . Storage devices At least one storage device must always be configured for text mode installations. The rd.dasd= option takes a Direct Access Storage Device (DASD) adapter device bus identifier. For multiple DASDs, specify the parameter multiple times, or use a comma separated list of bus IDs. To specify a range of DASDs, specify the first and the last bus ID. For example: The rd.zfcp= option takes a SCSI over FCP (zFCP) adapter device bus identifier, a world wide port name (WWPN), and a FCP LUN, then activates the device. This parameter can be specified multiple times to activate multiple zFCP devices. For example: Since 8, a target world wide port name (WWPN) and an FCP LUN have to be provided only if the zFCP device is not configured in NPIV mode or when auto LUN scanning is disabled by the zfcp.allow_lun_scan=0 kernel module parameter. It provides access to all SCSI devices found in the storage area network attached to the FCP device with the specified bus ID. This parameter needs to be specified at least twice to activate multiple paths to the same disks. 
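A hedged example of how the options described above might be combined in the boot parameters; every device number, address, host name, and URL is a placeholder:
inst.repo=http://ftp.example.com/rhel8/
ip=192.168.17.115::192.168.17.254:24:vm1.example.com:enc600:none
rd.znet=qeth,0.0.0600,0.0.0601,0.0.0602,layer2=1,portno=0
rd.dasd=0.0.0200,0.0.0202
rd.zfcp=0.0.4000,0x5005076300c213e9,0x5022000000000000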
Kickstart options If you are using a Kickstart file to perform an automatic installation, you must always specify the location of the Kickstart file using the inst.ks= option. For an unattended, fully automatic Kickstart installation, the inst.cmdline option is also useful. An example customized generic.prm file containing all mandatory parameters look similar to the following example: Example 11.1. Customized generic.prm file Some installation methods also require a file with a mapping of the location of installation data in the file system of the DVD or FTP server and the memory locations where the data is to be copied. The file is typically named generic.ins , and contains file names for the initial RAM disk, kernel image, and parameter file ( generic.prm ) and a memory location for each file. An example generic.ins will look similar to the following example: Example 11.2. Sample generic.ins file A valid generic.ins file is provided by Red Hat along with all other files required to boot the installer. Modify this file only if you want to, for example, load a different kernel version than default. Additional resources For a list of all boot options to customize the installation program's behavior, see Boot options reference . 11.7. Parameters and configuration files on 64-bit IBM Z This section contains information about the parameters and configuration files on 64-bit IBM Z. 11.7.1. Required configuration file parameters on 64-bit IBM Z Several parameters are required and must be included in the parameter file. These parameters are also provided in the file generic.prm in directory images/ of the installation DVD. ro Mounts the root file system, which is a RAM disk, read-only. ramdisk_size= size Modifies the memory size reserved for the RAM disk to ensure that the Red Hat Enterprise Linux installation program fits within it. For example: ramdisk_size=40000 . The generic.prm file also contains the additional parameter cio_ignore=all,!condev . This setting speeds up boot and device detection on systems with many devices. The installation program transparently handles the activation of ignored devices. 11.7.2. 64-bit IBM z/VM configuration file Under z/VM, you can use a configuration file on a CMS-formatted disk. The purpose of the CMS configuration file is to save space in the parameter file by moving the parameters that configure the initial network setup, the DASD, and the FCP specification out of the parameter file. Each line of the CMS configuration file contains a single variable and its associated value, in the following shell-style syntax: variable = value . You must also add the CMSDASD and CMSCONFFILE parameters to the parameter file. These parameters point the installation program to the configuration file: CMSDASD= cmsdasd_address Where cmsdasd_address is the device number of a CMS-formatted disk that contains the configuration file. This is usually the CMS user's A disk. For example: CMSDASD=191 CMSCONFFILE= configuration_file Where configuration_file is the name of the configuration file. This value must be specified in lower case. It is specified in a Linux file name format: CMS_file_name . CMS_file_type . The CMS file REDHAT CONF is specified as redhat.conf . The CMS file name and the file type can each be from one to eight characters that follow the CMS conventions. For example: CMSCONFFILE=redhat.conf 11.7.3. 
Installation network, DASD and FCP parameters on 64-bit IBM Z These parameters can be used to automatically set up the preliminary network, and can be defined in the CMS configuration file. These parameters are the only parameters that can also be used in a CMS configuration file. All other parameters in other sections must be specified in the parameter file. NETTYPE=" type " Where type must be one of the following: qeth , lcs , or ctc . The default is qeth . Choose qeth for: OSA-Express features HiperSockets Virtual connections on z/VM, including VSWITCH and Guest LAN Select ctc for: Channel-to-channel network connections SUBCHANNELS=" device_bus_IDs " Where device_bus_IDs is a comma-separated list of two or three device bus IDs. The IDs must be specified in lowercase. Provides required device bus IDs for the various network interfaces: For example (a sample qeth SUBCHANNEL statement): PORTNO=" portnumber " You can add either PORTNO="0" (to use port 0) or PORTNO="1" (to use port 1 of OSA features with two ports per CHPID). LAYER2=" value " Where value can be 0 or 1 . Use LAYER2="0" to operate an OSA or HiperSockets device in layer 3 mode ( NETTYPE="qeth" ). Use LAYER2="1" for layer 2 mode. For virtual network devices under z/VM this setting must match the definition of the GuestLAN or VSWITCH to which the device is coupled. To use network services that operate on layer 2 (the Data Link Layer or its MAC sublayer) such as DHCP, layer 2 mode is a good choice. The qeth device driver default for OSA devices is now layer 2 mode. To continue using the default of layer 3 mode, set LAYER2="0" explicitly. VSWITCH=" value " Where value can be 0 or 1 . Specify VSWITCH="1" when connecting to a z/VM VSWITCH or GuestLAN, or VSWITCH="0" (or nothing at all) when using directly attached real OSA or directly attached real HiperSockets. MACADDR=" MAC_address " If you specify LAYER2="1" and VSWITCH="0" , you can optionally use this parameter to specify a MAC address. Linux requires six colon-separated octets as pairs lower case hex digits - for example, MACADDR=62:a3:18:e7:bc:5f . This is different from the notation used by z/VM. If you specify LAYER2="1" and VSWITCH="1" , you must not specify the MACADDR , because z/VM assigns a unique MAC address to virtual network devices in layer 2 mode. CTCPROT=" value " Where value can be 0 , 1 , or 3 . Specifies the CTC protocol for NETTYPE="ctc" . The default is 0 . HOSTNAME=" string " Where string is the host name of the newly-installed Linux instance. IPADDR=" IP " Where IP is the IP address of the new Linux instance. NETMASK=" netmask " Where netmask is the netmask. The netmask supports the syntax of a prefix integer (from 1 to 32) as specified in IPv4 classless interdomain routing (CIDR). For example, you can specify 24 instead of 255.255.255.0 , or 20 instead of 255.255.240.0 . GATEWAY=" gw " Where gw is the gateway IP address for this network device. MTU=" mtu " Where mtu is the Maximum Transmission Unit (MTU) for this network device. DNS=" server1:server2:additional_server_terms:serverN " Where " server1:server2:additional_server_terms:serverN " is a list of DNS servers, separated by colons. For example: SEARCHDNS=" domain1:domain2:additional_dns_terms:domainN " Where " domain1:domain2:additional_dns_terms:domainN " is a list of the search domains, separated by colons. For example: You only need to specify SEARCHDNS= if you specify the DNS= parameter. DASD= Defines the DASD or range of DASDs to configure for the installation. 
The installation program supports a comma-separated list of device bus IDs, or ranges of device bus IDs with the optional attributes ro , diag , erplog , and failfast . Optionally, you can abbreviate device bus IDs to device numbers with leading zeros stripped. Any optional attributes should be separated by colons and enclosed in parentheses. Optional attributes follow a device bus ID or a range of device bus IDs. The only supported global option is autodetect . This does not support the specification of non-existent DASDs to reserve kernel device names for later addition of DASDs. Use persistent DASD device names such as /dev/disk/by-path/name to enable transparent addition of disks later. Other global options such as probeonly , nopav , or nofcx are not supported by the installation program. Only specify those DASDs that need to be installed on your system. All unformatted DASDs specified here must be formatted after a confirmation later on in the installation program. Add any data DASDs that are not needed for the root file system or the /boot partition after installation. For example: FCP_ n =" device_bus_ID [ WWPN FCP_LUN ]" For FCP-only environments, remove the DASD= option from the CMS configuration file to indicate no DASD is present. Where: n is typically an integer value (for example FCP_1 or FCP_2 ) but could be any string with alphabetic or numeric characters or underscores. device_bus_ID specifies the device bus ID of the FCP device representing the host bus adapter (HBA) (for example 0.0.fc00 for device fc00). WWPN is the world wide port name used for routing (often in conjunction with multipathing) and is as a 16-digit hex value (for example 0x50050763050b073d ). FCP_LUN refers to the storage logical unit identifier and is specified as a 16-digit hexadecimal value padded with zeroes to the right (for example 0x4020400100000000 ). Note A target world wide port name (WWPN) and an FCP_LUN have to be provided if the zFCP device is not configured in NPIV mode, when auto LUN scanning is disabled by the zfcp.allow_lun_scan=0 kernel module parameter or when installing RHEL-8.6 or older releases. Otherwise only the device_bus_ID value is mandatory. These variables can be used on systems with FCP devices to activate FCP LUNs such as SCSI disks. Additional FCP LUNs can be activated during the installation interactively or by means of a Kickstart file. An example value looks similar to the following: Each of the values used in the FCP parameters (for example FCP_1 or FCP_2 ) are site-specific and are normally supplied by the FCP storage administrator. 11.7.4. Miscellaneous parameters on 64-bit IBM Z The following parameters can be defined in a parameter file but do not work in a CMS configuration file. rd.live.check Turns on testing of an ISO-based installation source; for example, when using inst.repo= with an ISO on local disk or mounted with NFS. inst.nompath Disables support for multipath devices. proxy=[ protocol ://][ username [: password ]@] host [: port ] Specify a proxy to use with installation over HTTP, HTTPS or FTP. inst.rescue Boot into a rescue system running from a RAM disk that can be used to fix and restore an installed system. inst.stage2= URL Specifies a path to a tree containing install.img , not to the install.img directly. Otherwise, follows the same syntax as inst.repo= . If inst.stage2 is specified, it typically takes precedence over other methods of finding install.img . However, if Anaconda finds install.img on local media, the inst.stage2 URL will be ignored. 
If inst.stage2 is not specified and install.img cannot be found locally, Anaconda looks to the location given by inst.repo= or method= . If only inst.stage2= is given without inst.repo= or method= , Anaconda uses whatever repos the installed system would have enabled by default for installation. Use the option multiple times to specify multiple HTTP, HTTPS or FTP sources. The HTTP, HTTPS or FTP paths are then tried sequentially until one succeeds: inst.syslog= IP/hostname [: port ] Sends log messages to a remote syslog server. The boot parameters described here are the most useful for installations and trouble shooting on 64-bit IBM Z, but only a subset of those that influence the installation program. 11.7.5. Sample parameter file and CMS configuration file on 64-bit IBM Z To change the parameter file, begin by extending the shipped generic.prm file. Example of generic.prm file: Example of redhat.conf file configuring a QETH network device (pointed to by CMSCONFFILE in generic.prm ): 11.7.6. Using parameter and configuration files on 64-bit IBM Z The 64-bit IBM Z architecture can use a customized parameter file to pass boot parameters to the kernel and the installation program. You need to change the parameter file if you want to: Install unattended with Kickstart. Choose non-default installation settings that are not accessible through the installation program's interactive user interface, such as rescue mode. The parameter file can be used to set up networking non-interactively before the installation program ( Anaconda ) starts. The kernel parameter file is limited to 3754 bytes plus an end-of-line character. The parameter file can be variable or fixed record format. Fixed record format increases the file size by padding each line up to the record length. Should you encounter problems with the installation program not recognizing all specified parameters in LPAR environments, you can try to put all parameters in one single line or start and end each line with a space character. The parameter file contains kernel parameters, such as ro , and parameters for the installation process, such as vncpassword=test or vnc . 11.8. Preparing an installation in a z/VM guest virtual machine Use the x3270 or c3270 terminal emulator, to log in to z/VM from other Linux systems, or use the IBM 3270 terminal emulator on the 64-bit IBM Z Hardware Management Console (HMC). If you are running Microsoft Windows operating system, there are several options available, and can be found through an internet search. A free native Windows port of c3270 called wc3270 also exists. Ensure you select machine type as ESA for your z/VM VMs, because selecting any other machine types might prevent installing RHEL. See the IBM documentation . Procedure Log on to the z/VM guest virtual machine chosen for the Linux installation. optional: If your 3270 connection is interrupted and you cannot log in again because the session is still active, you can replace the old session with a new one by entering the following command on the z/VM logon screen: + Replace user with the name of the z/VM guest virtual machine. Depending on whether an external security manager, for example RACF, is used, the logon command might vary. If you are not already running CMS (single-user operating system shipped with z/VM) in your guest, boot it now by entering the command: Be sure not to use CMS disks such as your A disk (often device number 0191) as installation targets. 
To find out which disks are in use by CMS, use the following query: You can use the following CP (z/VM Control Program, which is the z/VM hypervisor) query commands to find out about the device configuration of your z/VM guest virtual machine: Query the available main memory, which is called storage in 64-bit IBM Z terminology. Your guest should have at least 1 GiB of main memory. Query available network devices by type: osa OSA - CHPID type OSD, real or virtual (VSWITCH or GuestLAN), both in QDIO mode hsi HiperSockets - CHPID type IQD, real or virtual (GuestLAN type Hipers) lcs LCS - CHPID type OSE For example, to query all of the network device types mentioned above, run: Query available DASDs. Only those that are flagged RW for read-write mode can be used as installation targets: Query available FCP devices (vHBAs):
[ "rd.znet=qeth,0.0.0600,0.0.0601,0.0.0602,layer2=1,portno= <number>", "rd.dasd=0.0.0200 rd.dasd=0.0.0202(ro),0.0.0203(ro:failfast),0.0.0205-0.0.0207", "rd.zfcp=0.0.4000,0x5005076300C213e9,0x5022000000000000 rd.zfcp=0.0.4000", "ro ramdisk_size=40000 cio_ignore=all,!condev inst.repo=http://example.com/path/to/repository rd.znet=qeth,0.0.0600,0.0.0601,0.0.0602,layer2=1,portno=0,portname=foo ip=192.168.17.115::192.168.17.254:24:foobar.systemz.example.com:enc600:none nameserver=192.168.17.1 rd.dasd=0.0.0200 rd.dasd=0.0.0202 rd.zfcp=0.0.4000,0x5005076300c213e9,0x5022000000000000 rd.zfcp=0.0.5000,0x5005076300dab3e9,0x5022000000000000 inst.ks=http://example.com/path/to/kickstart", "images/kernel.img 0x00000000 images/initrd.img 0x02000000 images/genericdvd.prm 0x00010480 images/initrd.addrsize 0x00010408", "qeth: SUBCHANNELS=\" read_device_bus_id , write_device_bus_id , data_device_bus_id \" lcs or ctc: SUBCHANNELS=\" read_device_bus_id , write_device_bus_id \"", "SUBCHANNELS=\"0.0.f5f0,0.0.f5f1,0.0.f5f2\"", "DNS=\"10.1.2.3:10.3.2.1\"", "SEARCHDNS=\"subdomain.domain:domain\"", "DASD=\"eb1c,0.0.a000-0.0.a003,eb10-eb14(diag),0.0.ab1c(ro:diag)\"", "FCP_ n =\" device_bus_ID [ WWPN FCP_LUN ]\"", "FCP_1=\"0.0.fc00 0x50050763050b073d 0x4020400100000000\" FCP_2=\"0.0.4000\"", "inst.stage2=http://hostname/path_to_install_tree/ inst.stage2=http://hostname/path_to_install_tree/ inst.stage2=http://hostname/path_to_install_tree/", "ro ramdisk_size=40000 cio_ignore=all,!condev CMSDASD=\"191\" CMSCONFFILE=\"redhat.conf\" inst.vnc inst.repo=http://example.com/path/to/dvd-contents", "NETTYPE=\"qeth\" SUBCHANNELS=\"0.0.0600,0.0.0601,0.0.0602\" PORTNAME=\"FOOBAR\" PORTNO=\"0\" LAYER2=\"1\" MACADDR=\"02:00:be:3a:01:f3\" HOSTNAME=\"foobar.systemz.example.com\" IPADDR=\"192.168.17.115\" NETMASK=\"255.255.255.0\" GATEWAY=\"192.168.17.254\" DNS=\"192.168.17.1\" SEARCHDNS=\"systemz.example.com:example.com\" DASD=\"200-203\"", "logon user here", "cp ipl cms", "query disk", "cp query virtual storage", "cp query virtual osa", "cp query virtual dasd", "cp query virtual fcp" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/interactively_installing_rhel_over_the_network/preparing-a-rhel-installation-on-64-bit-ibm-z_rhel-installer
Chapter 3. Managing user accounts using the IdM Web UI
Chapter 3. Managing user accounts using the IdM Web UI Identity Management (IdM) provides several stages that can help you to manage various user life cycle situations: Creating a user account Create a stage user account before an employee starts their career in your company so that you are prepared in advance for the day when the employee appears in the office and you want to activate the account. You can omit this step and create the active user account directly. The procedure is similar to creating a stage user account. Activating a user account Activate the account on the employee's first working day. Disabling a user account If the user goes on parental leave for a couple of months, you will need to disable the account temporarily . Enabling a user account When the user returns, you will need to re-enable the account . Preserving a user account If the user leaves the company, you will need to delete the account with the possibility of restoring it, because people can return to the company after some time. Restoring a user account Two years later, the user is back and you need to restore the preserved account . Deleting a user account If the employee is dismissed, delete the account without a backup. 3.1. User life cycle Identity Management (IdM) supports three user account states: Stage users are not allowed to authenticate. This is an initial state. Some of the user account properties required for active users cannot be set, for example, group membership. Active users are allowed to authenticate. All required user account properties must be set in this state. Preserved users are former active users that are considered inactive and cannot authenticate to IdM. Preserved users retain most of the account properties they had as active users, but they are not part of any user groups. You can delete user entries permanently from the IdM database. Important Deleted user accounts cannot be restored. When you delete a user account, all the information associated with the account is permanently lost. A new administrator can only be created by a user with administrator rights, such as the default admin user. If you accidentally delete all administrator accounts, the Directory Manager must create a new administrator manually in the Directory Server. Warning Do not delete the admin user. As admin is a pre-defined user required by IdM, this operation causes problems with certain commands. If you want to define and use an alternative admin user, disable the pre-defined admin user with ipa user-disable admin after you have granted admin permissions to at least one different user. Warning Do not add local users to IdM. The Name Service Switch (NSS) always resolves IdM users and groups before resolving local users and groups. This means that, for example, IdM group membership does not work for local users. 3.2. Adding users in the Web UI Usually, you need to create a new user account before a new employee starts to work. Such a stage account is not accessible and you need to activate it later. Note Alternatively, you can create an active user account directly. To add an active user, follow the procedure below and add the user account in the Active users tab. Prerequisites Administrator privileges for managing IdM or User Administrator role. Procedure Log in to the IdM Web UI. For details, see Accessing the IdM Web UI in a web browser . Go to Users Stage Users tab. Alternatively, you can add the user account in the Users Active users tab; however, you cannot add user groups to the account. Click the + Add icon. 
In the Add stage user dialog box, enter First name and Last name of the new user. Optional: In the User login field, add a login name. If you leave it empty, the IdM server creates the login name in the following pattern: The first letter of the first name and the surname. The whole login name can have up to 32 characters. Optional: In the GID drop down menu, select groups in which the user should be included. Optional: In the Password and Verify password fields, enter your password and confirm it, ensuring they both match. Click on the Add button. At this point, you can see the user account in the Stage Users table. Note If you click on the user name, you can edit advanced settings, such as adding a phone number, address, or occupation. Warning IdM automatically assigns a unique user ID (UID) to new user accounts. You can assign a UID manually, or even modify an already existing UID. However, the server does not validate whether the new UID number is unique. Consequently, multiple user entries might have the same UID number assigned. A similar problem can occur with user private group IDs (GIDs) if you assign GIDs to user accounts manually. You can use the ipa user-find --uid=<uid> or ipa user-find --gidnumber=<gidnumber> commands on the IdM CLI to check if you have multiple user entries with the same ID. Red Hat recommends you do not have multiple entries with the same UIDs or GIDs. If you have objects with duplicate IDs, security identifiers (SIDs) are not generated correctly. SIDs are crucial for trusts between IdM and Active Directory and for Kerberos authentication to work correctly. Additional resources Strengthening Kerberos security with PAC information Are user/group collisions supported in Red Hat Enterprise Linux? (Red Hat Knowledgebase) Users without SIDs cannot log in to IdM after an upgrade 3.3. Activating stage users in the IdM Web UI You must follow this procedure to activate a stage user account, before the user can log in to IdM and before the user can be added to an IdM group. Prerequisites Administrator privileges for managing the IdM Web UI or User Administrator role. At least one staged user account in IdM. Procedure Log in to the IdM Web UI. For details, see Accessing the IdM Web UI in a web browser . Go to Users Stage users tab. Click the check-box of the user account you want to activate. Click on the Activate button. On the Confirmation dialog box, click OK . If the activation is successful, the IdM Web UI displays a green confirmation that the user has been activated and the user account has been moved to Active users . The account is active and the user can authenticate to the IdM domain and IdM Web UI. The user is prompted to change their password on the first login. Note At this stage, you can add the active user account to user groups. 3.4. Disabling user accounts in the Web UI You can disable active user accounts. Disabling a user account deactivates the account, therefore, user accounts cannot be used to authenticate and using IdM services, such as Kerberos, or perform any tasks. Disabled user accounts still exist within IdM and all of the associated information remains unchanged. Unlike preserved user accounts, disabled user accounts remain in the active state and can be a member of user groups. Note After disabling a user account, any existing connections remain valid until the user's Kerberos TGT and other tickets expire. After the ticket expires, the user will not be able to renew it. 
Prerequisites Administrator privileges for managing the IdM Web UI or User Administrator role. Procedure Log in to the IdM Web UI. For details, see Accessing the IdM Web UI in a web browser . Go to Users Active users tab. Click the check-box of the user accounts you want to disable. Click on the Disable button. In the Confirmation dialog box, click on the OK button. If the disabling procedure has been successful, you can verify in the Status column in the Active users table. 3.5. Enabling user accounts in the Web UI With IdM you can enable disabled active user accounts. Enabling a user account activates the disabled account. Prerequisites Administrator privileges for managing the IdM Web UI or User Administrator role. Procedure Log in to the IdM Web UI. Go to Users Active users tab. Click the check-box of the user accounts you want to enable. Click on the Enable button. In the Confirmation dialog box, click on the OK button. If the change has been successful, you can verify in the Status column in the Active users table. 3.6. Preserving active users in the IdM Web UI Preserving user accounts enables you to remove accounts from the Active users tab, yet keeping these accounts in IdM. Preserve the user account if the employee leaves the company. If you want to disable user accounts for a couple of weeks or months (parental leave, for example), disable the account. For details, see Disabling user accounts in the Web UI . The preserved accounts are not active and users cannot use them to access your internal network, however, the account stays in the database with all the data. You can move the restored accounts back to the active mode. Note The list of users in the preserved state can provide a history of past user accounts. Prerequisites Administrator privileges for managing the IdM (Identity Management) Web UI or User Administrator role. Procedure Log in to the IdM Web UI. For details, see Accessing the IdM Web UI in a web browser . Go to Users Active users tab. Click the check-box of the user accounts you want to preserve. Click on the Delete button. In the Remove users dialog box, switch the Delete mode radio button to preserve . Click on the Delete button. As a result, the user account is moved to Preserved users . If you need to restore preserved users, see the Restoring users in the IdM Web UI . 3.7. Restoring users in the IdM Web UI IdM (Identity Management) enables you to restore preserved user accounts back to the active state. You can restore a preserved user to an active user or a stage user. Prerequisites Administrator privileges for managing the IdM Web UI or User Administrator role. Procedure Log in to the IdM Web UI. For details, see Accessing the IdM Web UI in a web browser . Go to Users Preserved users tab. Click the check-box at the user accounts you want to restore. Click on the Restore button. In the Confirmation dialog box, click on the OK button. The IdM Web UI displays a green confirmation and moves the user accounts to the Active users tab. 3.8. Deleting users in the IdM Web UI Deleting users is an irreversible operation, causing the user accounts to be permanently deleted from the IdM database, including group memberships and passwords. Any external configuration for the user, such as the system account and home directory, is not deleted, but is no longer accessible through IdM. You can delete: Active users - the IdM Web UI offers you with the options: Preserving users temporarily For details, see the Preserving active users in the IdM Web UI . 
Deleting them permanently Stage users - you can only delete stage users permanently. Preserved users - you can delete preserved users permanently. The following procedure describes deleting active users. Similarly, you can delete user accounts on: The Stage users tab The Preserved users tab Prerequisites Administrator privileges for managing the IdM Web UI or User Administrator role. Procedure Log in to the IdM Web UI. For details, see Accessing the IdM Web UI in a web browser . Go to Users Active users tab. Alternatively, you can delete the user account in the Users Stage users or Users Preserved users tab. Click the Delete icon. In the Remove users dialog box, switch the Delete mode radio button to delete . Click on the Delete button. The user accounts have been permanently deleted from IdM.
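As a supplementary sketch of the duplicate-ID check mentioned in the warning in section 3.2 (the numeric values below are placeholders, not values from this chapter), you can run the following commands on the IdM CLI:
ipa user-find --uid=1234567890
ipa user-find --gidnumber=1234567890
If either command returns more than one entry, multiple users share the same ID, which can prevent SIDs from being generated correctly.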
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_idm_users_groups_hosts_and_access_control_rules/managing-user-accounts-using-the-idm-web-ui_managing-users-groups-hosts
8.2. Live Snapshots in Red Hat Virtualization
8.2. Live Snapshots in Red Hat Virtualization Snapshots of virtual machine hard disks marked shareable and those that are based on Direct LUN connections are not supported, live or otherwise. Any other virtual machine that is not being cloned or migrated can have a snapshot taken when running, paused, or stopped. When a live snapshot of a virtual machine is initiated, the Manager requests that the SPM host create a new volume for the virtual machine to use. When the new volume is ready, the Manager uses VDSM to instruct libvirt and qemu on the host running the virtual machine to begin using the new volume for virtual machine write operations. If the virtual machine is able to write to the new volume, the snapshot operation is considered a success and the virtual machine stops writing to the previous volume. If the virtual machine is unable to write to the new volume, the snapshot operation is considered a failure, and the new volume is deleted. The virtual machine requires access to both its current volume and the new one from the time when a live snapshot is initiated until after the new volume is ready, so both volumes are opened with read-write access. Virtual machines with an installed guest agent that supports quiescing can ensure filesystem consistency across snapshots. Registered Red Hat Enterprise Linux guests can install the qemu-guest-agent to enable quiescing before snapshots. If a quiescing-compatible guest agent is present on a virtual machine when a snapshot is taken, VDSM uses libvirt to communicate with the agent to prepare for a snapshot. Outstanding write actions are completed, and then filesystems are frozen before a snapshot is taken. When the snapshot is complete, and libvirt has switched the virtual machine to the new volume for disk write actions, the filesystem is thawed, and writes to disk resume. All live snapshots are attempted with quiescing enabled. If the snapshot command fails because there is no compatible guest agent present, the live snapshot is re-initiated without the use-quiescing flag. When a virtual machine is reverted to its pre-snapshot state with quiesced filesystems, it boots cleanly with no filesystem check required. Reverting a snapshot that was taken with an un-quiesced filesystem requires a filesystem check on boot.
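As a hedged illustration of the guest-agent point above (the package and service names are the ones commonly used on recent Red Hat Enterprise Linux guests and may differ on other guest operating systems), installing and enabling the agent inside the guest might look like:
# Inside the RHEL guest: install and start the QEMU guest agent so snapshots can be quiesced
yum install -y qemu-guest-agent
systemctl enable --now qemu-guest-agent
With the agent running, VDSM and libvirt can freeze and thaw the guest filesystems around the snapshot as described above.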
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/technical_reference/live_snapshots_in_red_hat_enterprise_virtualization
Chapter 16. The File Language
Chapter 16. The File Language Abstract The file language is an extension to the simple language, not an independent language in its own right. But the file language extension can only be used in conjunction with File or FTP endpoints. 16.1. When to Use the File Language Overview The file language is an extension to the simple language which is not always available. You can use it under the following circumstances: In a File or FTP consumer endpoint . On exchanges created by a File or FTP consumer . Note The escape character, \ , is not available in the file language. In a File or FTP consumer endpoint There are several URI options that you can set on a File or FTP consumer endpoint, which take a file language expression as their value. For example, in a File consumer endpoint URI you can set the fileName , move , preMove , moveFailed , and sortBy options using a file expression. In a File consumer endpoint, the fileName option acts as a filter, determining which file will actually be read from the starting directory. If a plain text string is specified (for example, fileName=report.txt ), the File consumer reads the same file each time it is updated. You can make this option more dynamic, however, by specifying a simple expression. For example, you could use a counter bean to select a different file each time the File consumer polls the starting directory, as follows: Where the USD{bean:counter.next} expression invokes the next() method on the bean registered under the ID, counter . The move option is used to move files to a backup location after they have been read by a File consumer endpoint. For example, the following endpoint moves files to a backup directory, after they have been processed: Where the USD{file:name.noext}.bak expression modifies the original file name, replacing the file extension with .bak . You can use the sortBy option to specify the order in which files should be processed. For example, to process files according to the alphabetical order of their file name, you could use the following File consumer endpoint: To process files according to the order in which they were last modified, you could use the following File consumer endpoint: You can reverse the order by adding the reverse: prefix - for example: On exchanges created by a File or FTP consumer When an exchange originates from a File or FTP consumer endpoint, it is possible to apply file language expressions to the exchange throughout the route (as long as the original message headers are not erased). For example, you could define a content-based router, which routes messages according to their file extension, as follows: 16.2. File Variables Overview File variables can be used whenever a route starts with a File or FTP consumer endpoint, which implies that the underlying message body is of java.io.File type. The file variables enable you to access various parts of the file pathname, almost as if you were invoking the methods of the java.io.File class (in fact, the file language extracts the information it needs from message headers that have been set by the File or FTP endpoint). Starting directory Some of the file variables return paths that are defined relative to a starting directory , which is just the directory that is specified in the File or FTP endpoint. 
For example, the following File consumer endpoint has the starting directory, ./filetransfer (a relative path): The following FTP consumer endpoint has the starting directory, ./ftptransfer (a relative path): Naming convention of file variables In general, the file variables are named after corresponding methods on the java.io.File class. For example, the file:absolute variable gives the value that would be returned by the java.io.File.getAbsolute() method. Note This naming convention is not strictly followed, however. For example, there is no such method as java.io.File.getSize() . Table of variables Table 16.1, "Variables for the File Language" shows all of the variable supported by the file language. Table 16.1. Variables for the File Language Variable Type Description file:name String The pathname relative to the starting directory. file:name.ext String The file extension (characters following the last . character in the pathname). Supports file extensions with multiple dots, for example, .tar.gz. file:name.ext.single String The file extension (characters following the last . character in the pathname). If the file extension has mutiple dots, then this expression only returns the last part. file:name.noext String The pathname relative to the starting directory, omitting the file extension. file:name.noext.single String The pathname relative to the starting directory, omitting the file extension. If the file extension has multiple dots, then this expression strips only the last part, and keep the others. file:onlyname String The final segment of the pathname. That is, the file name without the parent directory path. file:onlyname.noext String The final segment of the pathname, omitting the file extension. file:onlyname.noext.single String The final segment of the pathname, omitting the file extension. If the file extension has multiple dots, then this expression strips only the last part, and keep the others. file:ext String The file extension (same as file:name.ext ). file:parent String The pathname of the parent directory, including the starting directory in the path. file:path String The file pathname, including the starting directory in the path. file:absolute Boolean true , if the starting directory was specified as an absolute path; false , otherwise. file:absolute.path String The absolute pathname of the file. file:length Long The size of the referenced file. file:size Long Same as file:length . file:modified java.util.Date Date last modified. 16.3. Examples Relative pathname Consider a File consumer endpoint, where the starting directory is specified as a relative pathname . For example, the following File endpoint has the starting directory, ./filelanguage : Now, while scanning the filelanguage directory, suppose that the endpoint has just consumed the following file: And, finally, assume that the filelanguage directory itself has the following absolute location: Given the preceding scenario, the file language variables return the following values, when applied to the current exchange: Expression Result file:name test/hello.txt file:name.ext txt file:name.noext test/hello file:onlyname hello.txt file:onlyname.noext hello file:ext txt file:parent filelanguage/test file:path filelanguage/test/hello.txt file:absolute false file:absolute.path /workspace/camel/camel-core/target/filelanguage/test/hello.txt Absolute pathname Consider a File consumer endpoint, where the starting directory is specified as an absolute pathname . 
For example, the following File endpoint has the starting directory, /workspace/camel/camel-core/target/filelanguage : Now, while scanning the filelanguage directory, suppose that the endpoint has just consumed the following file: Given the preceding scenario, the file language variables return the following values, when applied to the current exchange: Expression Result file:name test/hello.txt file:name.ext txt file:name.noext test/hello file:onlyname hello.txt file:onlyname.noext hello file:ext txt file:parent /workspace/camel/camel-core/target/filelanguage/test file:path /workspace/camel/camel-core/target/filelanguage/test/hello.txt file:absolute true file:absolute.path /workspace/camel/camel-core/target/filelanguage/test/hello.txt
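As a minimal sketch combining the options described above (the directory names and the .processed suffix are illustrative, and the placeholders are written with literal ${ } characters, which appear as USD{ } in the extracted examples in this chapter), the following File consumer endpoint processes the newest files first and renames each consumed file using a file language expression:
file://orders/inbox?sortBy=reverse:file:modified&move=done/${file:onlyname.noext}.processed
Here file:onlyname.noext strips both the parent directory path and the file extension, so a consumed file such as daily.txt is moved to done/daily.processed inside the input directory.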
[ "file://target/filelanguage/bean/?fileName=USD{bean:counter.next}.txt&delete=true", "file://target/filelanguage/?move=backup/USD{date:now:yyyyMMdd}/USD{file:name.noext}.bak&recursive=false", "file://target/filelanguage/?sortBy=file:name", "file://target/filelanguage/?sortBy=file:modified", "file://target/filelanguage/?sortBy=reverse:file:modified", "<from uri=\"file://input/orders\"/> <choice> <when> <simple>USD{file:ext} == 'txt'</simple> <to uri=\"bean:orderService?method=handleTextFiles\"/> </when> <when> <simple>USD{file:ext} == 'xml'</simple> <to uri=\"bean:orderService?method=handleXmlFiles\"/> </when> <otherwise> <to uri=\"bean:orderService?method=handleOtherFiles\"/> </otherwise> </choice>", "file:filetransfer", "ftp://myhost:2100/ftptransfer", "file://filelanguage", "./filelanguage/test/hello.txt", "/workspace/camel/camel-core/target/filelanguage", "file:///workspace/camel/camel-core/target/filelanguage", "./filelanguage/test/hello.txt" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_development_guide/File
5.4. Backing up ext2, ext3, or ext4 File Systems
5.4. Backing up ext2, ext3, or ext4 File Systems This procedure describes how to back up the content of an ext4, ext3, or ext2 file system into a file. Prerequisites If the system has been running for a long time, run the e2fsck utility on the partitions before backup: Procedure 5.1. Backing up ext2, ext3, or ext4 File Systems Back up configuration information, including the content of the /etc/fstab file and the output of the fdisk -l command. This is useful for restoring the partitions. To capture this information, run the sosreport or sysreport utilities. For more information about sosreport , see the What is a sosreport and how to create one in Red Hat Enterprise Linux 4.6 and later? Knowledgebase article. Depending on the role of the partition: If the partition you are backing up is an operating system partition, boot your system into rescue mode. See the Booting to Rescue Mode section of the System Administrator's Guide . When backing up a regular data partition, unmount it. Although it is possible to back up a data partition while it is mounted, the results of backing up a mounted data partition can be unpredictable. If you need to back up a mounted file system using the dump utility, do so when the file system is not under a heavy load. The more activity happens on the file system during the backup, the higher the risk of backup corruption. Use the dump utility to back up the content of the partitions: Replace backup-file with a path to a file where you want to store the backup. Replace device with the name of the ext4 partition you want to back up. Make sure that you are saving the backup to a directory mounted on a different partition than the partition you are backing up. Example 5.2. Backing up Multiple ext4 Partitions To back up the content of the /dev/sda1 , /dev/sda2 , and /dev/sda3 partitions into backup files stored in the /backup-files/ directory, use the following commands: To do a remote backup, use the ssh utility or configure a password-less ssh login. For more information on ssh and password-less login, see the Using the ssh Utility and Using Key-based Authentication sections of the System Administrator's Guide . For example, when using ssh : Example 5.3. Performing a Remote Backup Using ssh Note that if using standard redirection, you must pass the -f option separately. Additional Resources For more information, see the dump (8) man page.
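As a small sketch of the configuration-capture step in Procedure 5.1 (the /backup-files/ directory matches the example above; adjust the paths for your system):
# Save mount configuration and partition layout alongside the dump files
mkdir -p /backup-files
cp /etc/fstab /backup-files/fstab.copy
fdisk -l > /backup-files/fdisk-l.out
Keeping these files with the dump output makes it easier to recreate the partitions before restoring the backups.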
[ "e2fsck /dev/ device", "dump -0uf backup-file /dev/ device", "dump -0uf /backup-files/sda1.dump /dev/sda1 # dump -0uf /backup-files/sda2.dump /dev/sda2 # dump -0uf /backup-files/sda3.dump /dev/sda3", "dump -0u -f - /dev/ device | ssh root@ remoteserver.example.com dd of= backup-file" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/ext4backup
Chapter 6. Creating an instance with a guaranteed minimum bandwidth QoS
Chapter 6. Creating an instance with a guaranteed minimum bandwidth QoS You can create instances that request a guaranteed minimum bandwidth by using a Quality of Service (QoS) policy. QoS policies with a guaranteed minimum bandwidth rule are assigned to ports on a specific physical network. When you create an instance that uses the configured port, the Compute scheduling service selects a host for the instance that satisfies this request. The Compute scheduling service checks the Placement service for the amount of bandwidth reserved by other instances on each physical interface, before selecting a host to deploy an instance on. Limitations/Restrictions You can only assign a guaranteed minimum bandwidth QoS policy when creating a new instance. You cannot assign a guaranteed minimum bandwidth QoS policy to instances that are already running, as the Compute service only updates resource usage for an instance in placement during creation or move operations, which means the minimum bandwidth available to the instance cannot be guaranteed. You cannot live migrate an instance that uses a port that has resource requests, such as a guaranteed minimum bandwidth QoS policy. Run the following command to check if a port has resource requests: Prerequisites A QoS policy is available that has a minimum bandwidth rule. For more information, see Configuring Quality of Service (QoS) policies in the Networking Guide . Procedure List the available QoS policies: Check the rules of each of the available policies to determine which has the required minimum bandwidth: Create a port from the appropriate policy: Create an instance, specifying the NIC port to use: An "ACTIVE" status in the output indicates that you have successfully created the instance on a host that can provide the requested guaranteed minimum bandwidth. 6.1. Removing a guaranteed minimum bandwidth QoS from an instance If you want to lift the guaranteed minimum bandwidth QoS policy restriction from an instance, you can detach the interface. Procedure To detach the interface, enter the following command:
[ "openstack port show <port_name/port_id>", "(overcloud)USD openstack network qos policy list", "-------------------------------------- --------- -------- ---------+ | ID | Name | Shared | Default | Project | -------------------------------------- --------- -------- ---------+ | 6d771447-3cf4-4ef1-b613-945e990fa59f | policy2 | True | False | ba4de51bf7694228a350dd22b7a3dc24 | | 78a24462-e3c1-4e66-a042-71131a7daed5 | policy1 | True | False | ba4de51bf7694228a350dd22b7a3dc24 | | b80acc64-4fc2-41f2-a346-520d7cfe0e2b | policy0 | True | False | ba4de51bf7694228a350dd22b7a3dc24 | -------------------------------------- --------- -------- ---------+", "(overcloud)USD openstack network qos policy show policy0", "------------- ---------------------------------------------------------------------------------------+ | Field | Value | ------------- ---------------------------------------------------------------------------------------+ | description | | | id | b80acc64-4fc2-41f2-a346-520d7cfe0e2b | | is_default | False | | location | cloud= ', project.domain_id=, project.domain_name='Default , project.id= ba4de51bf7694228a350dd22b7a3dc24 , project.name= admin , region_name= regionOne , zone= | | name | policy0 | | project_id | ba4de51bf7694228a350dd22b7a3dc24 | | rules | [{ min_kbps : 100000, direction : egress , id : d46218fe-9218-4e96-952b-9f45a5cb3b3c , qos_policy_id : b80acc64-4fc2-41f2-a346-520d7cfe0e2b , type : minimum_bandwidth }, { min_kbps : 100000, direction : ingress , id : 1202c4e3-a03a-464c-80d5-0bf90bb74c9d , qos_policy_id : b80acc64-4fc2-41f2-a346-520d7cfe0e2b , type : minimum_bandwidth }] | | shared | True | | tags | [] | ------------- ---------------------------------------------------------------------------------------+", "(overcloud)USD openstack port create port-normal-qos --network net0 --qos-policy policy0", "openstack server create --flavor cirros256 --image cirros-0.3.5-x86_64-disk --nic port-id=port-normal-qos --wait qos_instance", "openstack server remove port <vm_name|vm_id> <port_name|port_id>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/creating_and_managing_instances/proc_creating-an-instance-with-a-guaranteed-min-bw-qos_osp
Building applications
Building applications OpenShift Container Platform 4.9 Creating and managing applications on OpenShift Container Platform Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/building_applications/index
Appendix B. Revision history
Appendix B. Revision history 0.3-6 Thu Jan 30 2025, Gabriela Fialova ( [email protected] ) Added an Known Issue RHELDOCS-19603 (IdM SSSD) 0.3-5 Wed Dec 4 2024, Gabriela Fialova ( [email protected] ) Updated the Customer Portal labs section Updated the Installation section 0.3-4 Thu Jul 18 2024, Gabriela Fialova ( [email protected] ) Updated the text in a Deprecated Functionality RHELDOCS-17573 (Identity management) 0.3-3 Thu May 16 2024, Brian Angelica ( [email protected] ) Updated Known Issues in JIRA:RHELDOCS-17666 (Virtualization). 0.3-2 Thu May 9 2024, Brian Angelica ( [email protected] ) Updated Tech Preview in BZ#1690207 . 0.3-1 Thu May 9 2024, Gabriela Fialova ( [email protected] ) Updated a known issue BZ#1730502 (Storage). 0.2-9 Thu February 29 2024, Lucie Varakova ( [email protected] ) Added a deprecated functionality JIRA:RHELDOCS-17641 (Networking). 0.2-8 Tue February 13 2024, Lucie Varakova ( [email protected] ) Added a deprecated functionality JIRA:RHELDOCS-17573 (Identity Management). 0.2-7 Fri November 10 2023, Gabriela Fialova ( [email protected] ) Updated the module on Providing Feedback on RHEL Documentation. 0.2-6 Fri October 13 2023, Gabriela Fialova ( [email protected] ) Added a Tech Preview JIRA:RHELDOCS-16861 (Containers). 0.2-5 Fri September 8 2023, Lucie Varakova ( [email protected] ) Added a deprecated functionality release note JIRA:RHELDOCS-16612 (Samba). Updated the Providing feedback on Red Hat documentation section. 0.2-4 Tue September 05 2023, Jaroslav Klech ( [email protected] ) Fixed an ordered list for known issue BZ#2169382 (Networking). 0.2-3 Thu August 24 2023, Lucie Varakova ( [email protected] ) Added a known issue BZ#2214508 (Kernel). 0.2-2 Fri August 4 2023, Lenka Spackova ( [email protected] ) Fixed section for BZ#2225332 . 0.2-1 Tue August 1 2023, Lenka Spackova ( [email protected] ) Added deprecated functionality BZ#2225332 . Improved abstract. 0.2-0 Tue Aug 01 2023, Lucie Varakova ( [email protected] ) Added deprecated functionality JIRA:RHELPLAN-147538 (The web console). 0.1-9 Thu Jun 29 2023, Marc Muehlfeld ( [email protected] ) Added a Technology Preview BZ#1570255 (Kernel). 0.1-8 Fri Jun 16 2023, Lucie Varakova ( [email protected] ) Added a known issue BZ#2214235 (Kernel). 0.1-7 Wed May 10 2023, Jaroslav Klech ( [email protected] ) Added a known issue BZ#2169382 (Networking). 0.1-6 Thu Apr 27 2023, Gabriela Fialova ( [email protected] ) Added a known issue JIRA:RHELPLAN-155168 (Identity Management). 0.1-5 Thu Apr 13 2023, Gabriela Fialova ( [email protected] ) Fixed 2 broken links in DFs and KIs. 0.1-4 Thu Mar 2 2023, Lucie Varakova ( [email protected] ) Updated a new feature BZ#2089409 (Kernel). 0.1-4 Tue Jan 24 2023, Lucie Varakova ( [email protected] ) Added a known issue BZ#2115791 (RHEL in cloud environments). 0.1-3 Thu Dec 08 2022, Marc Muehlfeld ( [email protected] ) Added a known issue BZ#2132754 (Networking). 0.1-2 Tue Nov 08 2022, Lucie Varakova ( [email protected] ) Added new features JIRA:RHELPLAN-137623 and BZ#2130912 (Containers). Added a Technology Preview JIRA:RHELPLAN-137622 (Containers). Added a known issue BZ#2134184 (Virtualization). 0.1-1 Wed Sep 07 2022, Lucie Varakova ( [email protected] ) Added bug fix BZ#2096256 (Networking). Other minor updates. 0.1-0 Fri Aug 19 2022, Lucie Varakova ( [email protected] ) Added bug fix BZ#2108316 (Identity Management). 0.0-9 Fri Aug 05 2022, Lucie Varakova ( [email protected] ) Added known issue BZ#2114981 (Security). 
0.0-8 Wed Aug 03 2022, Lenka Spackova ( [email protected] ) Added known issue BZ#2095764 (Software management). 0.0-7 Fri Jul 22 2022, Lucie Varakova ( [email protected] ) Added bug fix BZ#2020494 (File systems and storage). Added known issue BZ#2054656 (Virtualization). Other minor updates. 0.0-6 Mon Jul 11 2022, Lenka Spackova ( [email protected] ) Added bug fix BZ#2056451 (Installer and image creation). Added bug fix BZ#2051890 (Security). Other minor updates. 0.0-5 Jun 08 2022, Lucie Varakova ( [email protected] ) Added new feature BZ#2089409 (Kernel). 0.0-4 May 31 2022, Lucie Varakova ( [email protected] ) Added known issues BZ#2075508 (Security) and BZ#2077770 (Virtualization). Added Technology Previews BZ#1989930 (RHEL for Edge) and JIRA:RHELPLAN-108438 (The web console). Added information about the in-place upgrade from RHEL 8 to RHEL 9 to the In-place upgrade and OS conversion section. Other minor updates. 0.0-3 May 18 2022, Lucie Manaskova ( [email protected] ) Added new feature BZ#2049441 (The web console). Added known issues BZ#2086100 (Kernel) and BZ#2020133 (Virtualization). Other small updates. 0.0-2 May 16 2022, Lucie Manaskova ( [email protected] ) Added bug fix BZ#2014369 (Virtualization). Added known issue BZ#1554735 (Virtualization). Other small updates. 0.0-1 May 11 2022, Lucie Manaskova ( [email protected] ) Release of the Red Hat Enterprise Linux 8.6 Release Notes. 0.0-0 Mar 30 2022, Lucie Manaskova ( [email protected] ) Release of the Red Hat Enterprise Linux 8.6 Beta Release Notes.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.6_release_notes/revision_history
Chapter 2. Configuring TLS encryption on a CUPS server
Chapter 2. Configuring TLS encryption on a CUPS server CUPS supports TLS-encrypted connections and, by default, the service enforces encrypted connections for all requests that require authentication. If no certificates are configured, CUPS creates a private key and a self-signed certificate. This is only sufficient if you access CUPS from the local host itself. For a secure connection over the network, use a server certificate that is signed by a certificate authority (CA). Warning Without encryption or with a self-signed certificates, a man-in-the-middle (MITM) attack can disclose, for example: Credentials of administrators when configuring CUPS using the web interface Confidential data when sending print jobs over the network Prerequisites CUPS is configured . You created a private key , and a CA issued a server certificate for it. If an intermediate certificate is required to validate the server certificate, attach the intermediate certificate to the server certificate. The private key is not protected by a password because CUPS provides no option to enter the password when the service reads the key. The Canonical Name ( CN ) or Subject Alternative Name (SAN) field in the certificate matches one of the following: The fully-qualified domain name (FQDN) of the CUPS server An alias that the DNS resolves to the server's IP address The private key and server certificate files use the Privacy Enhanced Mail (PEM) format. Clients trust the CA certificate. If the server runs RHEL 9.2 or later and the FIPS mode is enabled, clients must either support the Extended Master Secret (EMS) extension or use TLS 1.3. TLS 1.2 connections without EMS fail. For more information, see the Red Hat Knowledgebase solution TLS extension "Extended Master Secret" enforced . Procedure Edit the /etc/cups/cups-files.conf file, and add the following setting to disable the automatic creation of self-signed certificates: Remove the self-signed certificate and private key: Optional: Display the FQDN of the server: Optional: Display the CN and SAN fields of the certificate: If the CN or SAN fields in the server certificate contains an alias that is different from the server's FQDN, add the ServerAlias parameter to the /etc/cups/cupsd.conf file: In this case, use the alternative name instead of the FQDN in the rest of the procedure. Store the private key and server certificate in the /etc/cups/ssl/ directory, for example: Important CUPS requires that you name the private key <fqdn> .key and the server certificate file <fqdn> .crt . If you use an alias, you must name the files <alias> .key and <alias>.crt . Set secure permissions on the private key that enable only the root user to read this file: Because certificates are part of the communication between a client and the server before they establish a secure connection, any client can retrieve the certificates without authentication. Therefore, you do not need to set strict permissions on the server certificate file. Restore the SELinux context: By default, CUPS enforces encrypted connections only if a task requires authentication, for example when performing administrative tasks on the /admin page in the web interface. To enforce encryption for the entire CUPS server, add Encryption Required to all <Location> directives in the /etc/cups/cupsd.conf file, for example: Restart CUPS: Verification Use a browser, and access https:// <hostname> :631/admin/ . If the connection succeeds, you configured TLS encryption in CUPS correctly. 
If you configured that encryption is required for the entire server, access http:// <hostname> :631/ . CUPS returns an Upgrade Required error in this case. Troubleshooting Display the systemd journal entries of the cups service: If the journal contains an Unable to encrypt connection: Error while reading file error after you failed to connect to the web interface by using the HTTPS protocol, verify the name of the private key and server certificate file. Additional resources How to configure CUPS to use a CA-signed TLS certificate in RHEL (Red Hat Knowledgebase)
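As an additional, hedged verification step (the host name is the one used in the examples above), you can inspect the certificate that CUPS presents on port 631 from a client that has OpenSSL installed:
# Show the subject and validity dates of the certificate served by CUPS
openssl s_client -connect server.example.com:631 -servername server.example.com </dev/null 2>/dev/null | openssl x509 -noout -subject -dates
The subject should contain the CN or SAN entry that matches the server's FQDN or the configured ServerAlias.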
[ "CreateSelfSignedCerts no", "rm /etc/cups/ssl/ <hostname> .crt /etc/cups/ssl/ <hostname> .key", "hostname -f server.example.com", "openssl x509 -text -in /etc/cups/ssl/ server.example.com.crt Certificate: Data: Subject: CN = server.example.com X509v3 extensions: X509v3 Subject Alternative Name: DNS:server.example.com", "ServerAlias alternative_name.example.com", "mv /root/server.key /etc/cups/ssl/ server.example.com.key mv /root/server.crt /etc/cups/ssl/ server.example.com.crt", "chown root:root /etc/cups/ssl/ server.example.com.key chmod 600 /etc/cups/ssl/ server.example.com.key", "restorecon -Rv /etc/cups/ssl/", "<Location /> Encryption Required </Location>", "systemctl restart cups", "journalctl -u cups" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_using_a_cups_printing_server/configuring-tls-encryption-on-a-cups-server_configuring-printing
Architecture
Architecture Red Hat Advanced Cluster Security for Kubernetes 4.6 System architecture Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/architecture/index
26.2. Configure JGroups (Library Mode)
26.2. Configure JGroups (Library Mode) Red Hat JBoss Data Grid must have an appropriate JGroups configuration in order to operate in clustered mode. Example 26.5. JGroups Programmatic Configuration Example 26.6. JGroups XML Configuration With either the programmatic or the XML configuration method, JBoss Data Grid searches for jgroups.xml on the classpath first, and falls back to treating the value as an absolute path name if the file is not found on the classpath. 26.2.1. JGroups Transport Protocols A transport protocol is the protocol at the bottom of a protocol stack. Transport protocols are responsible for sending and receiving messages from the network. Red Hat JBoss Data Grid ships with both UDP and TCP transport protocols. 26.2.1.1. The UDP Transport Protocol UDP is a transport protocol that uses: IP multicasting to send messages to all members of a cluster. UDP datagrams for unicast messages, which are sent to a single member. When the UDP transport is started, it opens a unicast socket and a multicast socket. The unicast socket is used to send and receive unicast messages, and the multicast socket sends and receives multicast messages. The physical address of the channel will be the same as the address and port number of the unicast socket. 26.2.1.2. The TCP Transport Protocol TCP/IP is a replacement transport for UDP in situations where IP multicast cannot be used, such as operations over a WAN where routers may discard IP multicast packets. TCP is a transport protocol used to send unicast and multicast messages. When sending multicast messages, TCP sends multiple unicast messages. When using TCP, each message to all cluster members is sent as multiple unicast messages, or one to each member. As IP multicasting cannot be used to discover initial members, another mechanism must be used to find initial membership. Red Hat JBoss Data Grid's Hot Rod is a custom TCP client/server protocol. 26.2.1.3. Using the TCPPing Protocol Some networks only allow TCP to be used. The pre-configured default-configs/default-jgroups-tcp.xml includes the MPING protocol, which uses UDP multicast for discovery. When UDP multicast is not available, the MPING protocol has to be replaced by a different mechanism. The recommended alternative is the TCPPING protocol. The TCPPING configuration contains a static list of IP addresses which are contacted for node discovery. Example 26.7. Configure the JGroups Subsystem to Use TCPPING 26.2.2. Pre-Configured JGroups Files Red Hat JBoss Data Grid ships with a number of pre-configured JGroups files packaged in infinispan-embedded.jar , which are available on the classpath by default. In order to use one of these files, specify one of these file names instead of using jgroups.xml . The JGroups configuration files shipped with JBoss Data Grid are intended to be used as a starting point for a working project. JGroups will usually require fine-tuning for optimal network performance. The available configurations are: default-configs/default-jgroups-udp.xml default-configs/default-jgroups-tcp.xml default-configs/default-jgroups-ec2.xml 26.2.2.1. default-jgroups-udp.xml The default-configs/default-jgroups-udp.xml file is a pre-configured JGroups configuration in Red Hat JBoss Data Grid. The default-jgroups-udp.xml configuration uses UDP as a transport and UDP multicast for discovery. It is suitable for large clusters (over 8 nodes) and if using Invalidation or Replication modes. 
The behavior of some of these settings can be altered by adding certain system properties to the JVM at startup. The settings that can be configured are shown in the following table. Table 26.1. default-jgroups-udp.xml System Properties System Property Description Default Required? jgroups.udp.mcast_addr IP address to use for multicast (both for communications and discovery). Must be a valid Class D IP address, suitable for IP multicast. 228.6.7.8 No jgroups.udp.mcast_port Port to use for multicast socket 46655 No jgroups.udp.ip_ttl Specifies the time-to-live (TTL) for IP multicast packets. The value here refers to the number of network hops a packet is allowed to make before it is dropped 2 No 26.2.2.2. default-jgroups-tcp.xml The default-configs/default-jgroups-tcp.xml file is a pre-configured JGroups configuration in Red Hat JBoss Data Grid. The default-jgroups-tcp.xml configuration uses TCP as a transport and UDP multicast for discovery. It is generally only used where multicast UDP is not an option. TCP does not perform as well as UDP for clusters of eight or more nodes. Clusters of four nodes or fewer result in roughly the same level of performance for both UDP and TCP. As with other pre-configured JGroups files, the behavior of some of these settings can be altered by adding certain system properties to the JVM at startup. The settings that can be configured are shown in the following table. Table 26.2. default-jgroups-tcp.xml System Properties System Property Description Default Required? jgroups.tcp.address IP address to use for the TCP transport. 127.0.0.1 No jgroups.tcp.port Port to use for TCP socket 7800 No jgroups.udp.mcast_addr IP address to use for multicast (for discovery). Must be a valid Class D IP address, suitable for IP multicast. 228.6.7.8 No jgroups.udp.mcast_port Port to use for multicast socket 46655 No jgroups.udp.ip_ttl Specifies the time-to-live (TTL) for IP multicast packets. The value here refers to the number of network hops a packet is allowed to make before it is dropped 2 No 26.2.2.3. default-jgroups-ec2.xml The default-configs/default-jgroups-ec2.xml file is a pre-configured JGroups configuration in Red Hat JBoss Data Grid. The default-jgroups-ec2.xml configuration uses TCP as a transport and S3_PING for discovery. It is suitable on Amazon EC2 nodes where UDP multicast isn't available. As with other pre-configured JGroups files, the behavior of some of these settings can be altered by adding certain system properties to the JVM at startup. The settings that can be configured are shown in the following table. Table 26.3. default-jgroups-ec2.xml System Properties System Property Description Default Required? jgroups.tcp.address IP address to use for the TCP transport. 127.0.0.1 No jgroups.tcp.port Port to use for TCP socket 7800 No jgroups.s3.access_key The Amazon S3 access key used to access an S3 bucket Yes jgroups.s3.secret_access_key The Amazon S3 secret key used to access an S3 bucket Yes jgroups.s3.bucket Name of the Amazon S3 bucket to use. Must be unique and must already exist Yes jgroups.s3.pre_signed_delete_url The pre-signed URL to be used for the DELETE operation. Yes jgroups.s3.pre_signed_put_url The pre-signed URL to be used for the PUT operation. Yes jgroups.s3.prefix If set, S3_PING searches for a bucket with a name that starts with the prefix value. No
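As a hedged illustration of overriding these system properties at JVM startup (the addresses, port, application JAR, and main class are placeholders, not values from this guide), a launch command for an application using default-jgroups-tcp.xml might look like:
java -Djgroups.tcp.address=192.168.1.10 -Djgroups.tcp.port=7800 -Djgroups.udp.mcast_addr=228.6.7.8 -Djgroups.udp.mcast_port=46655 -cp my-grid-app.jar:infinispan-embedded.jar com.example.GridNode
Only the properties you want to change from the defaults listed in the tables need to be supplied.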
[ "GlobalConfiguration gc = new GlobalConfigurationBuilder() .transport() .defaultTransport() .addProperty(\"configurationFile\",\"jgroups.xml\") .build();", "<infinispan> <global> <transport> <properties> <property name=\"configurationFile\" value=\"jgroups.xml\" /> </properties> </transport> </global> <!-- Additional configuration elements here --> </infinispan>", "<TCP bind_port=\"7800\" /> <TCPPING initial_hosts=\"USD{jgroups.tcpping.initial_hosts:HostA[7800],HostB[7801]}\" port_range=\"1\" />" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/sect-configure_jgroups_library_mode
Chapter 10. Migrating your applications
Chapter 10. Migrating your applications You can migrate your applications by using the Migration Toolkit for Containers (MTC) web console or from the command line . You can use stage migration and cutover migration to migrate an application between clusters: Stage migration copies data from the source cluster to the target cluster without stopping the application. You can run a stage migration multiple times to reduce the duration of the cutover migration. Cutover migration stops the transactions on the source cluster and moves the resources to the target cluster. You can use state migration to migrate an application's state: State migration copies selected persistent volume claims (PVCs). You can use state migration to migrate a namespace within the same cluster. 10.1. Migration prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Direct image migration You must ensure that the secure internal registry of the source cluster is exposed. You must create a route to the exposed registry. Direct volume migration If your clusters use proxies, you must configure an Stunnel TCP proxy. Clusters The source cluster must be upgraded to the latest MTC z-stream release. The MTC version must be the same on all clusters. Network The clusters have unrestricted network access to each other and to the replication repository. If you copy the persistent volumes with move , the clusters must have unrestricted network access to the remote volumes. You must enable the following ports on an OpenShift Container Platform 4 cluster: 6443 (API server) 443 (routes) 53 (DNS) You must enable port 443 on the replication repository if you are using TLS. Persistent volumes (PVs) The PVs must be valid. The PVs must be bound to persistent volume claims. If you use snapshots to copy the PVs, the following additional prerequisites apply: The cloud provider must support snapshots. The PVs must have the same cloud provider. The PVs must be located in the same geographic region. The PVs must have the same storage class. Additional resources for migration prerequisites Manually exposing a secure registry for OpenShift Container Platform 3 Updating deprecated internal images 10.2. Migrating your applications by using the MTC web console You can configure clusters and a replication repository by using the MTC web console. Then, you can create and run a migration plan. 10.2.1. Launching the MTC web console You can launch the Migration Toolkit for Containers (MTC) web console in a browser. Prerequisites The MTC web console must have network access to the OpenShift Container Platform web console. The MTC web console must have network access to the OAuth authorization server. Procedure Log in to the OpenShift Container Platform cluster on which you have installed MTC. Obtain the MTC web console URL by entering the following command: USD oc get -n openshift-migration route/migration -o go-template='https://{{ .spec.host }}' The output resembles the following: https://migration-openshift-migration.apps.cluster.openshift.com . Launch a browser and navigate to the MTC web console. Note If you try to access the MTC web console immediately after installing the Migration Toolkit for Containers Operator, the console might not load because the Operator is still configuring the cluster. Wait a few minutes and retry. If you are using self-signed CA certificates, you will be prompted to accept the CA certificate of the source cluster API server. 
The web page guides you through the process of accepting the remaining certificates. Log in with your OpenShift Container Platform username and password . 10.2.2. Adding a cluster to the MTC web console You can add a cluster to the Migration Toolkit for Containers (MTC) web console. Prerequisites If you are using Azure snapshots to copy data: You must specify the Azure resource group name for the cluster. The clusters must be in the same Azure resource group. The clusters must be in the same geographic location. Procedure Log in to the cluster. Obtain the migration-controller service account token: USD oc sa get-token migration-controller -n openshift-migration Example output eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtaWciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoibWlnLXRva2VuLWs4dDJyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im1pZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE1YjFiYWMwLWMxYmYtMTFlOS05Y2NiLTAyOWRmODYwYjMwOCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptaWc6bWlnIn0.xqeeAINK7UXpdRqAtOj70qhBJPeMwmgLomV9iFxr5RoqUgKchZRG2J2rkqmPm6vr7K-cm7ibD1IBpdQJCcVDuoHYsFgV4mp9vgOfn9osSDp2TGikwNz4Az95e81xnjVUmzh-NjDsEpw71DH92iHV_xt2sTwtzftS49LpPW2LjrV0evtNBP_t_RfskdArt5VSv25eORl7zScqfe1CiMkcVbf2UqACQjo3LbkpfN26HAioO2oH0ECPiRzT0Xyh-KwFutJLS9Xgghyw-LD9kPKcE_xbbJ9Y4Rqajh7WdPYuB0Jd9DPVrslmzK-F6cgHHYoZEv0SvLQi-PO0rpDrcjOEQQ In the MTC web console, click Clusters . Click Add cluster . Fill in the following fields: Cluster name : The cluster name can contain lower-case letters ( a-z ) and numbers ( 0-9 ). It must not contain spaces or international characters. URL : Specify the API server URL, for example, https://<www.example.com>:8443 . Service account token : Paste the migration-controller service account token. Exposed route host to image registry : If you are using direct image migration, specify the exposed route to the image registry of the source cluster, for example, www.example.apps.cluster.com . You can specify a port. The default port is 5000 . Azure cluster : You must select this option if you use Azure snapshots to copy your data. Azure resource group : This field is displayed if Azure cluster is selected. Specify the Azure resource group. Require SSL verification : Optional: Select this option to verify SSL connections to the cluster. CA bundle file : This field is displayed if Require SSL verification is selected. If you created a custom CA certificate bundle file for self-signed certificates, click Browse , select the CA bundle file, and upload it. Click Add cluster . The cluster appears in the Clusters list. 10.2.3. Adding a replication repository to the MTC web console You can add an object storage as a replication repository to the Migration Toolkit for Containers (MTC) web console. MTC supports the following storage providers: Amazon Web Services (AWS) S3 Multi-Cloud Object Gateway (MCG) Generic S3 object storage, for example, Minio or Ceph S3 Google Cloud Provider (GCP) Microsoft Azure Blob Prerequisites You must configure the object storage as a replication repository. Procedure In the MTC web console, click Replication repositories . Click Add repository . Select a Storage provider type and fill in the following fields: AWS for S3 providers, including AWS and MCG: Replication repository name : Specify the replication repository name in the MTC web console. S3 bucket name : Specify the name of the S3 bucket. 
S3 bucket region : Specify the S3 bucket region. Required for AWS S3. Optional for some S3 providers. Check the product documentation of your S3 provider for expected values. S3 endpoint : Specify the URL of the S3 service, not the bucket, for example, https://<s3-storage.apps.cluster.com> . Required for a generic S3 provider. You must use the https:// prefix. S3 provider access key : Specify the <AWS_ACCESS_KEY_ID> for AWS or the S3 provider access key for MCG and other S3 providers. S3 provider secret access key : Specify the <AWS_SECRET_ACCESS_KEY> for AWS or the S3 provider secret access key for MCG and other S3 providers. Require SSL verification : Clear this checkbox if you are using a generic S3 provider. If you created a custom CA certificate bundle for self-signed certificates, click Browse and browse to the Base64-encoded file. GCP : Replication repository name : Specify the replication repository name in the MTC web console. GCP bucket name : Specify the name of the GCP bucket. GCP credential JSON blob : Specify the string in the credentials-velero file. Azure : Replication repository name : Specify the replication repository name in the MTC web console. Azure resource group : Specify the resource group of the Azure Blob storage. Azure storage account name : Specify the Azure Blob storage account name. Azure credentials - INI file contents : Specify the string in the credentials-velero file. Click Add repository and wait for connection validation. Click Close . The new repository appears in the Replication repositories list. 10.2.4. Creating a migration plan in the MTC web console You can create a migration plan in the Migration Toolkit for Containers (MTC) web console. Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. You must ensure that the same MTC version is installed on all clusters. You must add the clusters and the replication repository to the MTC web console. If you want to use the move data copy method to migrate a persistent volume (PV), the source and target clusters must have uninterrupted network access to the remote volume. If you want to use direct image migration, the MigCluster custom resource manifest of the source cluster must specify the exposed route of the internal image registry. Procedure In the MTC web console, click Migration plans . Click Add migration plan . Enter the Plan name . The migration plan name must not exceed 253 lower-case alphanumeric characters ( a-z, 0-9 ) and must not contain spaces or underscores ( _ ). Select a Source cluster , a Target cluster , and a Repository , and click . On the Namespaces page, select the projects to be migrated and click . On the Persistent volumes page, click a Migration type for each PV: The Copy option copies the data from the PV of a source cluster to the replication repository and then restores the data on a newly created PV, with similar characteristics, in the target cluster. The Move option unmounts a remote volume, for example, NFS, from the source cluster, creates a PV resource on the target cluster pointing to the remote volume, and then mounts the remote volume on the target cluster. Applications running on the target cluster use the same remote volume that the source cluster was using. Click . On the Copy options page, select a Copy method for each PV: Snapshot copy backs up and restores data using the cloud provider's snapshot functionality. It is significantly faster than Filesystem copy . 
Filesystem copy backs up the files on the source cluster and restores them on the target cluster. The file system copy method is required for direct volume migration. You can select Verify copy to verify data migrated with Filesystem copy . Data is verified by generating a checksum for each source file and checking the checksum after restoration. Data verification significantly reduces performance. Select a Target storage class . If you selected Filesystem copy , you can change the target storage class. Click . On the Migration options page, the Direct image migration option is selected if you specified an exposed image registry route for the source cluster. The Direct PV migration option is selected if you are migrating data with Filesystem copy . The direct migration options copy images and files directly from the source cluster to the target cluster. This option is much faster than copying images and files from the source cluster to the replication repository and then from the replication repository to the target cluster. Click . Optional: On the Hooks page, click Add Hook to add a hook to the migration plan. A hook runs custom code. You can add up to four hooks to a single migration plan. Each hook runs during a different migration step. Enter the name of the hook to display in the web console. If the hook is an Ansible playbook, select Ansible playbook and click Browse to upload the playbook or paste the contents of the playbook in the field. Optional: Specify an Ansible runtime image if you are not using the default hook image. If the hook is not an Ansible playbook, select Custom container image and specify the image name and path. A custom container image can include Ansible playbooks. Select Source cluster or Target cluster . Enter the Service account name and the Service account namespace . Select the migration step for the hook: preBackup : Before the application workload is backed up on the source cluster postBackup : After the application workload is backed up on the source cluster preRestore : Before the application workload is restored on the target cluster postRestore : After the application workload is restored on the target cluster Click Add . Click Finish . The migration plan is displayed in the Migration plans list. Additional resources MTC file system copy method MTC snapshot copy method 10.2.5. Running a migration plan in the MTC web console You can migrate applications and data with the migration plan you created in the Migration Toolkit for Containers (MTC) web console. Note During migration, MTC sets the reclaim policy of migrated persistent volumes (PVs) to Retain on the target cluster. The Backup custom resource contains a PVOriginalReclaimPolicy annotation that indicates the original reclaim policy. You can manually restore the reclaim policy of the migrated PVs. Prerequisites The MTC web console must contain the following: Source cluster in a Ready state Target cluster in a Ready state Replication repository Valid migration plan Procedure Log in to the MTC web console and click Migration plans . Click the Options menu to a migration plan and select one of the following options under Migration : Stage copies data from the source cluster to the target cluster without stopping the application. Cutover stops the transactions on the source cluster and moves the resources to the target cluster. Optional: In the Cutover migration dialog, you can clear the Halt transactions on the source cluster during migration checkbox. 
State copies selected persistent volume claims (PVCs). Important Do not use state migration to migrate a namespace between clusters. Use stage or cutover migration instead. Select one or more PVCs in the State migration dialog and click Migrate . When the migration is complete, verify that the application migrated successfully in the OpenShift Container Platform web console: Click Home Projects . Click the migrated project to view its status. In the Routes section, click Location to verify that the application is functioning, if applicable. Click Workloads Pods to verify that the pods are running in the migrated namespace. Click Storage Persistent volumes to verify that the migrated persistent volumes are correctly provisioned.
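The post-migration checks described above for the web console can also be run from the command line. The following is a minimal sketch; it assumes the oc client is logged in to the target cluster and <namespace> is a placeholder for the migrated project.

# Confirm the migrated workloads are running on the target cluster.
oc get pods -n <namespace>
# Confirm the migrated persistent volume claims are bound.
oc get pvc -n <namespace>
# Confirm the application route, if any, was recreated.
oc get routes -n <namespace>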
[ "oc get -n openshift-migration route/migration -o go-template='https://{{ .spec.host }}'", "oc sa get-token migration-controller -n openshift-migration", "eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtaWciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoibWlnLXRva2VuLWs4dDJyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im1pZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE1YjFiYWMwLWMxYmYtMTFlOS05Y2NiLTAyOWRmODYwYjMwOCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptaWc6bWlnIn0.xqeeAINK7UXpdRqAtOj70qhBJPeMwmgLomV9iFxr5RoqUgKchZRG2J2rkqmPm6vr7K-cm7ibD1IBpdQJCcVDuoHYsFgV4mp9vgOfn9osSDp2TGikwNz4Az95e81xnjVUmzh-NjDsEpw71DH92iHV_xt2sTwtzftS49LpPW2LjrV0evtNBP_t_RfskdArt5VSv25eORl7zScqfe1CiMkcVbf2UqACQjo3LbkpfN26HAioO2oH0ECPiRzT0Xyh-KwFutJLS9Xgghyw-LD9kPKcE_xbbJ9Y4Rqajh7WdPYuB0Jd9DPVrslmzK-F6cgHHYoZEv0SvLQi-PO0rpDrcjOEQQ" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/migrating_from_version_3_to_4/migrating-applications-3-4
Chapter 115. KafkaMirrorMaker schema reference
Chapter 115. KafkaMirrorMaker schema reference The type KafkaMirrorMaker has been deprecated. Please use KafkaMirrorMaker2 instead. Property Description spec The specification of Kafka MirrorMaker. KafkaMirrorMakerSpec status The status of Kafka MirrorMaker. KafkaMirrorMakerStatus
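Before migrating to KafkaMirrorMaker2, you can list any remaining KafkaMirrorMaker resources and inspect the spec and status fields described above from the command line. This is a minimal sketch; it assumes the oc client is available and that the custom resource definitions were installed by the Cluster Operator.

# List deprecated KafkaMirrorMaker resources across all namespaces.
oc get kafkamirrormakers --all-namespaces
# Inspect the schema of the spec and status properties.
oc explain kafkamirrormaker.spec
oc explain kafkamirrormaker.status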
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-kafkamirrormaker-reference
Chapter 1. Support policy for Eclipse Temurin
Chapter 1. Support policy for Eclipse Temurin Red Hat will support select major versions of Eclipse Temurin in its products. For consistency, these are the same versions that Oracle designates as long-term support (LTS) for the Oracle JDK. A major version of Eclipse Temurin is supported for a minimum of six years from the time that version is first introduced. For more information, see the Eclipse Temurin Life Cycle and Support Policy . Note RHEL 6 reached end of life in November 2020. Because of this, RHEL 6 is not a supported configuration for Eclipse Temurin.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/eclipse_temurin_8.0.362_release_notes/openjdk8-temurin-support-policy
Chapter 10. Live migration
Chapter 10. Live migration 10.1. Virtual machine live migration 10.1.1. About live migration Live migration is the process of moving a running virtual machine instance (VMI) to another node in the cluster without interrupting the virtual workload or access. If a VMI uses the LiveMigrate eviction strategy, it automatically migrates when the node that the VMI runs on is placed into maintenance mode. You can also manually start live migration by selecting a VMI to migrate. You can use live migration if the following conditions are met: Shared storage with ReadWriteMany (RWX) access mode. Sufficient RAM and network bandwidth. If the virtual machine uses a host model CPU, the nodes must support the virtual machine's host model CPU. By default, live migration traffic is encrypted using Transport Layer Security (TLS). 10.1.2. Updating access mode for live migration For live migration to function properly, you must use the ReadWriteMany (RWX) access mode. Use this procedure to update the access mode, if needed. Procedure To set the RWX access mode, run the following oc patch command: USD oc patch -n openshift-cnv \ cm kubevirt-storage-class-defaults \ -p '{"data":{"'USD<STORAGE_CLASS>'.accessMode":"ReadWriteMany"}}' Additional resources: Migrating a virtual machine instance to another node Live migration limiting Customizing the storage profile 10.2. Live migration limits and timeouts Apply live migration limits and timeouts so that migration processes do not overwhelm the cluster. Configure these settings by editing the HyperConverged custom resource (CR). 10.2.1. Configuring live migration limits and timeouts Configure live migration limits and timeouts for the cluster by updating the HyperConverged custom resource (CR), which is located in the openshift-cnv namespace. Procedure Edit the HyperConverged CR and add the necessary live migration parameters. USD oc edit hco -n openshift-cnv kubevirt-hyperconverged Example configuration file apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: liveMigrationConfig: 1 bandwidthPerMigration: 64Mi completionTimeoutPerGiB: 800 parallelMigrationsPerCluster: 5 parallelOutboundMigrationsPerNode: 2 progressTimeout: 150 1 In this example, the spec.liveMigrationConfig array contains the default values for each field. Note You can restore the default value for any spec.liveMigrationConfig field by deleting that key/value pair and saving the file. For example, delete progressTimeout: <value> to restore the default progressTimeout: 150 . 10.2.2. Cluster-wide live migration limits and timeouts Table 10.1. Migration parameters Parameter Description Default parallelMigrationsPerCluster Number of migrations running in parallel in the cluster. 5 parallelOutboundMigrationsPerNode Maximum number of outbound migrations per node. 2 bandwidthPerMigration Bandwidth limit of each migration, in MiB/s. 0 [1] completionTimeoutPerGiB The migration is canceled if it has not completed in this time, in seconds per GiB of memory. For example, a virtual machine instance with 6GiB memory times out if it has not completed migration in 4800 seconds. If the Migration Method is BlockMigration , the size of the migrating disks is included in the calculation. 800 progressTimeout The migration is canceled if memory copy fails to make progress in this time, in seconds. 150 The default value of 0 is unlimited. 10.3. 
Migrating a virtual machine instance to another node Manually initiate a live migration of a virtual machine instance to another node using either the web console or the CLI. 10.3.1. Initiating live migration of a virtual machine instance in the web console Migrate a running virtual machine instance to a different node in the cluster. Note The Migrate Virtual Machine action is visible to all users but only admin users can initiate a virtual machine migration. Procedure In the OpenShift Virtualization console, click Workloads Virtualization from the side menu. Click the Virtual Machines tab. You can initiate the migration from this screen, which makes it easier to perform actions on multiple virtual machines in the one screen, or from the Virtual Machine Overview screen where you can view comprehensive details of the selected virtual machine: Click the Options menu at the end of virtual machine and select Migrate Virtual Machine . Click the virtual machine name to open the Virtual Machine Overview screen and click Actions Migrate Virtual Machine . Click Migrate to migrate the virtual machine to another node. 10.3.2. Initiating live migration of a virtual machine instance in the CLI Initiate a live migration of a running virtual machine instance by creating a VirtualMachineInstanceMigration object in the cluster and referencing the name of the virtual machine instance. Procedure Create a VirtualMachineInstanceMigration configuration file for the virtual machine instance to migrate. For example, vmi-migrate.yaml : apiVersion: kubevirt.io/v1 kind: VirtualMachineInstanceMigration metadata: name: migration-job spec: vmiName: vmi-fedora Create the object in the cluster by running the following command: USD oc create -f vmi-migrate.yaml The VirtualMachineInstanceMigration object triggers a live migration of the virtual machine instance. This object exists in the cluster for as long as the virtual machine instance is running, unless manually deleted. Additional resources: Monitoring live migration of a virtual machine instance Cancelling the live migration of a virtual machine instance 10.4. Monitoring live migration of a virtual machine instance You can monitor the progress of a live migration of a virtual machine instance from either the web console or the CLI. 10.4.1. Monitoring live migration of a virtual machine instance in the web console For the duration of the migration, the virtual machine has a status of Migrating . This status is displayed in the Virtual Machines tab or in the Virtual Machine Overview screen for the migrating virtual machine. Procedure In the OpenShift Virtualization console, click Workloads Virtualization from the side menu. Click the Virtual Machines tab. Select a virtual machine to open the Virtual Machine Overview screen. 10.4.2. Monitoring live migration of a virtual machine instance in the CLI The status of the virtual machine migration is stored in the Status component of the VirtualMachineInstance configuration. Procedure Use the oc describe command on the migrating virtual machine instance: USD oc describe vmi vmi-fedora Example output ... 
Status: Conditions: Last Probe Time: <nil> Last Transition Time: <nil> Status: True Type: LiveMigratable Migration Method: LiveMigration Migration State: Completed: true End Timestamp: 2018-12-24T06:19:42Z Migration UID: d78c8962-0743-11e9-a540-fa163e0c69f1 Source Node: node2.example.com Start Timestamp: 2018-12-24T06:19:35Z Target Node: node1.example.com Target Node Address: 10.9.0.18:43891 Target Node Domain Detected: true 10.5. Cancelling the live migration of a virtual machine instance Cancel the live migration so that the virtual machine instance remains on the original node. You can cancel a live migration from either the web console or the CLI. 10.5.1. Cancelling live migration of a virtual machine instance in the web console You can cancel a live migration of the virtual machine instance using the Options menu found on each virtual machine in the Virtualization Virtual Machines tab, or from the Actions menu available on all tabs in the Virtual Machine Overview screen. Procedure In the OpenShift Virtualization console, click Workloads Virtualization from the side menu. Click the Virtual Machines tab. You can cancel the migration from this screen, which makes it easier to perform actions on multiple virtual machines, or from the Virtual Machine Overview screen where you can view comprehensive details of the selected virtual machine: Click the Options menu at the end of virtual machine and select Cancel Virtual Machine Migration . Select a virtual machine name to open the Virtual Machine Overview screen and click Actions Cancel Virtual Machine Migration . Click Cancel Migration to cancel the virtual machine live migration. 10.5.2. Cancelling live migration of a virtual machine instance in the CLI Cancel the live migration of a virtual machine instance by deleting the VirtualMachineInstanceMigration object associated with the migration. Procedure Delete the VirtualMachineInstanceMigration object that triggered the live migration, migration-job in this example: USD oc delete vmim migration-job 10.6. Configuring virtual machine eviction strategy The LiveMigrate eviction strategy ensures that a virtual machine instance is not interrupted if the node is placed into maintenance or drained. Virtual machines instances with this eviction strategy will be live migrated to another node. 10.6.1. Configuring custom virtual machines with the LiveMigration eviction strategy You only need to configure the LiveMigration eviction strategy on custom virtual machines. Common templates have this eviction strategy configured by default. Procedure Add the evictionStrategy: LiveMigrate option to the spec.template.spec section in the virtual machine configuration file. This example uses oc edit to update the relevant snippet of the VirtualMachine configuration file: USD oc edit vm <custom-vm> -n <my-namespace> apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: custom-vm spec: template: spec: evictionStrategy: LiveMigrate ... Restart the virtual machine for the update to take effect: USD virtctl restart <custom-vm> -n <my-namespace>
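In addition to oc describe, the migration objects and the migration state can be queried directly. The following is a minimal sketch; the namespace and VMI name are placeholders, and the jsonpath expression assumes the status layout shown in the example output above.

# List VirtualMachineInstanceMigration objects in the namespace.
oc get vmim -n <my-namespace>
# Print only the completion flag of a migrating virtual machine instance.
oc get vmi vmi-fedora -n <my-namespace> -o jsonpath='{.status.migrationState.completed}'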
[ "oc patch -n openshift-cnv cm kubevirt-storage-class-defaults -p '{\"data\":{\"'USD<STORAGE_CLASS>'.accessMode\":\"ReadWriteMany\"}}'", "oc edit hco -n openshift-cnv kubevirt-hyperconverged", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: liveMigrationConfig: 1 bandwidthPerMigration: 64Mi completionTimeoutPerGiB: 800 parallelMigrationsPerCluster: 5 parallelOutboundMigrationsPerNode: 2 progressTimeout: 150", "apiVersion: kubevirt.io/v1 kind: VirtualMachineInstanceMigration metadata: name: migration-job spec: vmiName: vmi-fedora", "oc create -f vmi-migrate.yaml", "oc describe vmi vmi-fedora", "Status: Conditions: Last Probe Time: <nil> Last Transition Time: <nil> Status: True Type: LiveMigratable Migration Method: LiveMigration Migration State: Completed: true End Timestamp: 2018-12-24T06:19:42Z Migration UID: d78c8962-0743-11e9-a540-fa163e0c69f1 Source Node: node2.example.com Start Timestamp: 2018-12-24T06:19:35Z Target Node: node1.example.com Target Node Address: 10.9.0.18:43891 Target Node Domain Detected: true", "oc delete vmim migration-job", "oc edit vm <custom-vm> -n <my-namespace>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: custom-vm spec: template: spec: evictionStrategy: LiveMigrate", "virtctl restart <custom-vm> -n <my-namespace>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/virtualization/live-migration
Chapter 102. Collecting IdM Healthcheck information
Chapter 102. Collecting IdM Healthcheck information Healthcheck is a manual command line tool that helps you identify possible problems in Identity Management (IdM). You can create a collection of logs based on the Healthcheck output with 30-day rotation. Prerequisites The Healthcheck tool is only available on RHEL 8.1 or newer. 102.1. Healthcheck in IdM The Healthcheck tool in Identity Management (IdM) helps find issues that may impact the health of your IdM environment. Note The Healthcheck tool is a command line tool that can be used without Kerberos authentication. Modules are Independent Healthcheck consists of independent modules which test for: Replication issues Certificate validity Certificate Authority infrastructure issues IdM and Active Directory trust issues Correct file permissions and ownership settings Two output formats Healthcheck generates the following outputs, which you can set using the output-type option: json : Machine-readable output in JSON format (default) human : Human-readable output You can specify a different file destination with the --output-file option. Results Each Healthcheck module returns one of the following results: SUCCESS configured as expected WARNING not an error, but worth keeping an eye on or evaluating ERROR not configured as expected CRITICAL not configured as expected, with a high possibility for impact 102.2. Log rotation Log rotation creates a new log file every day, and the files are organized by date. Since log files are saved in the same directory, you can select a particular log file according to the date. Rotation means that a maximum number of log files is configured, and when that number is exceeded the newest log file replaces the oldest one. For example, if the rotation number is 30, the thirty-first log file replaces the first (oldest) one. Log rotation reduces voluminous log files and organizes them, which can help with analysis of the logs. 102.3. Configuring log rotation using the IdM Healthcheck Follow this procedure to configure log rotation with: The systemd timer The crond service The systemd timer runs the Healthcheck tool periodically and generates the logs. The default value is set to 4 a.m. every day. The crond service is used for log rotation. The default log name is healthcheck.log and the rotated logs use the healthcheck.log-YYYYMMDD format. Prerequisites You must execute commands as root. Procedure Enable a systemd timer: Start the systemd timer: Open the /etc/logrotate.d/ipahealthcheck file to configure the number of logs which should be saved. By default, log rotation is set up for 30 days. In the /etc/logrotate.d/ipahealthcheck file, configure the path to the logs. By default, logs are saved in the /var/log/ipa/healthcheck/ directory. In the /etc/logrotate.d/ipahealthcheck file, configure the time for log generation. By default, a log is created daily at 4 a.m. To use log rotation, ensure that the crond service is enabled and running: To start generating logs, start the IPA healthcheck service: To verify the result, go to /var/log/ipa/healthcheck/ and check if logs are created correctly. 102.4. Changing IdM Healthcheck configuration You can change Healthcheck settings by adding the desired command line options to the /etc/ipahealthcheck/ipahealthcheck.conf file. This can be useful when, for example, you configured log rotation and want to ensure the logs are in a format suitable for automatic analysis, but do not want to set up a new timer. 
Note This Healthcheck feature is only available on RHEL 8.7 and newer. After the modification, all logs that Healthcheck creates follow the new settings. These settings also apply to any manual execution of Healthcheck. Note When running Healthcheck manually, settings in the configuration file take precedence over options specified in the command line. For example, if output_type is set to human in the configuration file, specifying json on the command line has no effect. Any command line options you use that are not specified in the configuration file are applied normally. Additional resources Configuring log rotation using the IdM Healthcheck 102.5. Configuring Healthcheck to change the output logs format Follow this procedure to configure Healthcheck with a timer already set up. In this example, you configure Healthcheck to produce logs in a human-readable format and to also include successful results instead of only errors. Prerequisites Your system is running RHEL 8.7 or later. You have root privileges. You have previously configured log rotation on a timer. Procedure Open the /etc/ipahealthcheck/ipahealthcheck.conf file in a text editor. Add options output_type=human and all=True to the [default] section. Save and close the file. Verification Run Healthcheck manually: Go to /var/log/ipa/healthcheck/ and check that the logs are in the correct format. Additional resources Configuring log rotation using the IdM Healthcheck
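The output options described above can also be supplied on the command line for a one-off run. The following is a minimal sketch run as root; the destination path reuses the log directory mentioned in this chapter and the exact flag spellings should be checked against ipa-healthcheck --help on your release.

# Run Healthcheck manually, produce human-readable output, and write it to a file.
ipa-healthcheck --output-type human --output-file /var/log/ipa/healthcheck/manual-check.log
# Run it again with the defaults to get machine-readable JSON on standard output.
ipa-healthcheck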
[ "systemctl enable ipa-healthcheck.timer Created symlink /etc/systemd/system/multi-user.target.wants/ipa-healthcheck.timer -> /usr/lib/systemd/system/ipa-healthcheck.timer.", "systemctl start ipa-healthcheck.timer", "systemctl enable crond systemctl start crond", "systemctl start ipa-healthcheck", "ipa-healthcheck" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/collecting-idm-healthcheck-information_configuring-and-managing-idm
3.4. Configuration examples
3.4. Configuration examples The following examples provide real-world demonstrations of how SELinux complements the Samba server and how full function of the Samba server can be maintained. 3.4.1. Sharing directories you create The following example creates a new directory, and shares that directory through Samba: Run the rpm -q samba samba-common samba-client command to confirm the samba , samba-common , and samba-client packages are installed. If any of these packages are not installed, install them by running the yum install package-name command as the root user. Run the mkdir /myshare command as the root user to create a new top-level directory to share files through Samba. Run the touch /myshare/file1 command as the root user to create an empty file. This file is used later to verify the Samba share mounted correctly. SELinux allows Samba to read and write to files labeled with the samba_share_t type, as long as /etc/samba/smb.conf and Linux permissions are set accordingly. Run the following command as the root user to add the label change to file-context configuration: Run the restorecon -R -v /myshare command as the root user to apply the label changes: Edit /etc/samba/smb.conf as the root user. Add the following to the bottom of this file to share the /myshare/ directory through Samba: A Samba account is required to mount a Samba file system. Run the smbpasswd -a username command as the root user to create a Samba account, where username is an existing Linux user. For example, smbpasswd -a testuser creates a Samba account for the Linux testuser user: Running smbpasswd -a username , where username is the user name of a Linux account that does not exist on the system, causes a Cannot locate Unix account for ' username '! error. Run the service smb start command as the root user to start the Samba service: Run the smbclient -U username -L localhost command to list the available shares, where username is the Samba account added in step 7. When prompted for a password, enter the password assigned to the Samba account in step 7 (version numbers may differ): Run the mkdir /test/ command as the root user to create a new directory. This directory will be used to mount the myshare Samba share. Run the following command as the root user to mount the myshare Samba share to /test/ , replacing username with the user name from step 7: Enter the password for username , which was configured in step 7. Run the ls /test/ command to view the file1 file created in step 3:
[ "~]# semanage fcontext -a -t samba_share_t \"/myshare(/.*)?\"", "~]# restorecon -R -v /myshare restorecon reset /myshare context unconfined_u:object_r:default_t:s0->system_u:object_r:samba_share_t:s0 restorecon reset /myshare/file1 context unconfined_u:object_r:default_t:s0->system_u:object_r:samba_share_t:s0", "[myshare] comment = My share path = /myshare public = yes writable = no", "~]# smbpasswd -a testuser New SMB password: Enter a password Retype new SMB password: Enter the same password again Added user testuser.", "~]# service smb start Starting SMB services: [ OK ]", "~]USD smbclient -U username -L localhost Enter username 's password: Domain=[ HOSTNAME ] OS=[Unix] Server=[Samba 3.4.0-0.41.el6] Sharename Type Comment --------- ---- ------- myshare Disk My share IPCUSD IPC IPC Service (Samba Server Version 3.4.0-0.41.el6) username Disk Home Directories Domain=[ HOSTNAME ] OS=[Unix] Server=[Samba 3.4.0-0.41.el6] Server Comment --------- ------- Workgroup Master --------- -------", "~]# mount //localhost/myshare /test/ -o user= username", "~]USD ls /test/ file1" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/managing_confined_services/sect-managing_confined_services-samba-configuration_examples
Chapter 11. Uninstalling a cluster on IBM Cloud
Chapter 11. Uninstalling a cluster on IBM Cloud You can remove a cluster that you deployed to IBM Cloud(R). 11.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. Prerequisites You have a copy of the installation program that you used to deploy the cluster. You have the files that the installation program generated when you created your cluster. You have configured the ccoctl binary. You have installed the IBM Cloud(R) CLI and installed or updated the VPC infrastructure service plugin. For more information see "Prerequisites" in the IBM Cloud(R) CLI documentation . Procedure If the following conditions are met, this step is required: The installer created a resource group as part of the installation process. You or one of your applications created persistent volume claims (PVCs) after the cluster was deployed. In which case, the PVCs are not removed when uninstalling the cluster, which might prevent the resource group from being successfully removed. To prevent a failure: Log in to the IBM Cloud(R) using the CLI. To list the PVCs, run the following command: USD ibmcloud is volumes --resource-group-name <infrastructure_id> For more information about listing volumes, see the IBM Cloud(R) CLI documentation . To delete the PVCs, run the following command: USD ibmcloud is volume-delete --force <volume_id> For more information about deleting volumes, see the IBM Cloud(R) CLI documentation . Export the API key that was created as part of the installation process. USD export IC_API_KEY=<api_key> Note You must set the variable name exactly as specified. The installation program expects the variable name to be present to remove the service IDs that were created when the cluster was installed. From the directory that contains the installation program on the computer that you used to install the cluster, run the following command: USD ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn , debug , or error instead of info . Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. Remove the manual CCO credentials that were created for the cluster: USD ccoctl ibmcloud delete-service-id \ --credentials-requests-dir <path_to_credential_requests_directory> \ --name <cluster_name> Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program.
[ "ibmcloud is volumes --resource-group-name <infrastructure_id>", "ibmcloud is volume-delete --force <volume_id>", "export IC_API_KEY=<api_key>", "./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2", "ccoctl ibmcloud delete-service-id --credentials-requests-dir <path_to_credential_requests_directory> --name <cluster_name>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_ibm_cloud/uninstalling-cluster-ibm-cloud
Red Hat Enterprise Linux System Roles for SAP
Red Hat Enterprise Linux System Roles for SAP Red Hat Enterprise Linux for SAP Solutions 9 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html/red_hat_enterprise_linux_system_roles_for_sap/index
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation If you have a suggestion to improve this documentation, or find an error, you can contact technical support at https://access.redhat.com to open a request.
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/using_the_automation_calculator/providing-feedback
Chapter 3. Clair security scanner
Chapter 3. Clair security scanner 3.1. Clair vulnerability databases Clair uses the following vulnerability databases to report for issues in your images: Ubuntu Oval database Debian Security Tracker Red Hat Enterprise Linux (RHEL) Oval database SUSE Oval database Oracle Oval database Alpine SecDB database VMware Photon OS database Amazon Web Services (AWS) UpdateInfo Open Source Vulnerability (OSV) Database For information about how Clair does security mapping with the different databases, see Claircore Severity Mapping . 3.1.1. Information about Open Source Vulnerability (OSV) database for Clair Open Source Vulnerability (OSV) is a vulnerability database and monitoring service that focuses on tracking and managing security vulnerabilities in open source software. OSV provides a comprehensive and up-to-date database of known security vulnerabilities in open source projects. It covers a wide range of open source software, including libraries, frameworks, and other components that are used in software development. For a full list of included ecosystems, see defined ecosystems . Clair also reports vulnerability and security information for golang , java , and ruby ecosystems through the Open Source Vulnerability (OSV) database. By leveraging OSV, developers and organizations can proactively monitor and address security vulnerabilities in open source components that they use, which helps to reduce the risk of security breaches and data compromises in projects. For more information about OSV, see the OSV website . 3.2. Clair on OpenShift Container Platform To set up Clair v4 (Clair) on a Red Hat Quay deployment on OpenShift Container Platform, it is recommended to use the Red Hat Quay Operator. By default, the Red Hat Quay Operator installs or upgrades a Clair deployment along with your Red Hat Quay deployment and configure Clair automatically. 3.3. Testing Clair Use the following procedure to test Clair on either a standalone Red Hat Quay deployment, or on an OpenShift Container Platform Operator-based deployment. Prerequisites You have deployed the Clair container image. Procedure Pull a sample image by entering the following command: USD podman pull ubuntu:20.04 Tag the image to your registry by entering the following command: USD sudo podman tag docker.io/library/ubuntu:20.04 <quay-server.example.com>/<user-name>/ubuntu:20.04 Push the image to your Red Hat Quay registry by entering the following command: USD sudo podman push --tls-verify=false quay-server.example.com/quayadmin/ubuntu:20.04 Log in to your Red Hat Quay deployment through the UI. Click the repository name, for example, quayadmin/ubuntu . In the navigation pane, click Tags . Report summary Click the image report, for example, 45 medium , to show a more detailed report: Report details Note In some cases, Clair shows duplicate reports on images, for example, ubi8/nodejs-12 or ubi8/nodejs-16 . This occurs because vulnerabilities with same name are for different packages. This behavior is expected with Clair vulnerability reporting and will not be addressed as a bug. 3.4. Advanced Clair configuration Use the procedures in the following sections to configure advanced Clair settings. 3.4.1. Unmanaged Clair configuration Red Hat Quay users can run an unmanaged Clair configuration with the Red Hat Quay OpenShift Container Platform Operator. This feature allows users to create an unmanaged Clair database, or run their custom Clair configuration without an unmanaged database. 
An unmanaged Clair database allows the Red Hat Quay Operator to work in a geo-replicated environment, where multiple instances of the Operator must communicate with the same database. An unmanaged Clair database can also be used when a user requires a highly-available (HA) Clair database that exists outside of a cluster. 3.4.1.1. Running a custom Clair configuration with an unmanaged Clair database Use the following procedure to set your Clair database to unmanaged. Important You must not use the same externally managed PostgreSQL database for both Red Hat Quay and Clair deployments. Your PostgreSQL database must also not be shared with other workloads, as it might exhaust the natural connection limit on the PostgreSQL side when connection-intensive workloads, like Red Hat Quay or Clair, contend for resources. Additionally, pgBouncer is not supported with Red Hat Quay or Clair, so it is not an option to resolve this issue. Procedure In the Quay Operator, set the clairpostgres component of the QuayRegistry custom resource to managed: false : apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: quay370 spec: configBundleSecret: config-bundle-secret components: - kind: objectstorage managed: false - kind: route managed: true - kind: tls managed: false - kind: clairpostgres managed: false 3.4.1.2. Configuring a custom Clair database with an unmanaged Clair database Red Hat Quay on OpenShift Container Platform allows users to provide their own Clair database. Use the following procedure to create a custom Clair database. Note The following procedure sets up Clair with SSL/TLS certifications. To view a similar procedure that does not set up Clair with SSL/TLS certifications, see "Configuring a custom Clair database with a managed Clair configuration". Procedure Create a Quay configuration bundle secret that includes the clair-config.yaml by entering the following command: USD oc create secret generic --from-file config.yaml=./config.yaml --from-file extra_ca_cert_rds-ca-2019-root.pem=./rds-ca-2019-root.pem --from-file clair-config.yaml=./clair-config.yaml --from-file ssl.cert=./ssl.cert --from-file ssl.key=./ssl.key config-bundle-secret Example Clair config.yaml file indexer: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslrootcert=/run/certs/rds-ca-2019-root.pem sslmode=verify-ca layer_scan_concurrency: 6 migrations: true scanlock_retry: 11 log_level: debug matcher: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslrootcert=/run/certs/rds-ca-2019-root.pem sslmode=verify-ca migrations: true metrics: name: prometheus notifier: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslrootcert=/run/certs/rds-ca-2019-root.pem sslmode=verify-ca migrations: true Note The database certificate is mounted under /run/certs/rds-ca-2019-root.pem on the Clair application pod in the clair-config.yaml . It must be specified when configuring your clair-config.yaml . An example clair-config.yaml can be found at Clair on OpenShift config . 
Add the clair-config.yaml file to your bundle secret, for example: apiVersion: v1 kind: Secret metadata: name: config-bundle-secret namespace: quay-enterprise data: config.yaml: <base64 encoded Quay config> clair-config.yaml: <base64 encoded Clair config> extra_ca_cert_<name>: <base64 encoded ca cert> ssl.crt: <base64 encoded SSL certificate> ssl.key: <base64 encoded SSL private key> Note When updated, the provided clair-config.yaml file is mounted into the Clair pod. Any fields not provided are automatically populated with defaults using the Clair configuration module. You can check the status of your Clair pod by clicking the commit in the Build History page, or by running oc get pods -n <namespace> . For example: Example output 3.4.2. Running a custom Clair configuration with a managed Clair database In some cases, users might want to run a custom Clair configuration with a managed Clair database. This is useful in the following scenarios: When a user wants to disable specific updater resources. When a user is running Red Hat Quay in an disconnected environment. For more information about running Clair in a disconnected environment, see Clair in disconnected environments . Note If you are running Red Hat Quay in an disconnected environment, the airgap parameter of your clair-config.yaml must be set to true . If you are running Red Hat Quay in an disconnected environment, you should disable all updater components. 3.4.2.1. Setting a Clair database to managed Use the following procedure to set your Clair database to managed. Procedure In the Quay Operator, set the clairpostgres component of the QuayRegistry custom resource to managed: true : apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: quay370 spec: configBundleSecret: config-bundle-secret components: - kind: objectstorage managed: false - kind: route managed: true - kind: tls managed: false - kind: clairpostgres managed: true 3.4.2.2. Configuring a custom Clair database with a managed Clair configuration Red Hat Quay on OpenShift Container Platform allows users to provide their own Clair database. Use the following procedure to create a custom Clair database. Procedure Create a Quay configuration bundle secret that includes the clair-config.yaml by entering the following command: USD oc create secret generic --from-file config.yaml=./config.yaml --from-file extra_ca_cert_rds-ca-2019-root.pem=./rds-ca-2019-root.pem --from-file clair-config.yaml=./clair-config.yaml config-bundle-secret Example Clair config.yaml file indexer: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslmode=disable layer_scan_concurrency: 6 migrations: true scanlock_retry: 11 log_level: debug matcher: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslmode=disable migrations: true metrics: name: prometheus notifier: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslmode=disable migrations: true Note The database certificate is mounted under /run/certs/rds-ca-2019-root.pem on the Clair application pod in the clair-config.yaml . It must be specified when configuring your clair-config.yaml . An example clair-config.yaml can be found at Clair on OpenShift config . 
Add the clair-config.yaml file to your bundle secret, for example: apiVersion: v1 kind: Secret metadata: name: config-bundle-secret namespace: quay-enterprise data: config.yaml: <base64 encoded Quay config> clair-config.yaml: <base64 encoded Clair config> Note When updated, the provided clair-config.yaml file is mounted into the Clair pod. Any fields not provided are automatically populated with defaults using the Clair configuration module. You can check the status of your Clair pod by clicking the commit in the Build History page, or by running oc get pods -n <namespace> . For example: Example output 3.4.3. Clair in disconnected environments Note Currently, deploying Clair in disconnected environments is not supported on IBM Power and IBM Z. Clair uses a set of components called updaters to handle the fetching and parsing of data from various vulnerability databases. Updaters are set up by default to pull vulnerability data directly from the internet and work for immediate use. However, some users might require Red Hat Quay to run in a disconnected environment, or an environment without direct access to the internet. Clair supports disconnected environments by working with different types of update workflows that take network isolation into consideration. This works by using the clairctl command line interface tool, which obtains updater data from the internet by using an open host, securely transferring the data to an isolated host, and then important the updater data on the isolated host into Clair. Use this guide to deploy Clair in a disconnected environment. Note Currently, Clair enrichment data is CVSS data. Enrichment data is currently unsupported in disconnected environments. For more information about Clair updaters, see "Clair updaters". 3.4.3.1. Setting up Clair in a disconnected OpenShift Container Platform cluster Use the following procedures to set up an OpenShift Container Platform provisioned Clair pod in a disconnected OpenShift Container Platform cluster. 3.4.3.1.1. Installing the clairctl command line utility tool for OpenShift Container Platform deployments Use the following procedure to install the clairctl CLI tool for OpenShift Container Platform deployments. Procedure Install the clairctl program for a Clair deployment in an OpenShift Container Platform cluster by entering the following command: USD oc -n quay-enterprise exec example-registry-clair-app-64dd48f866-6ptgw -- cat /usr/bin/clairctl > clairctl Note Unofficially, the clairctl tool can be downloaded Set the permissions of the clairctl file so that it can be executed and run by the user, for example: USD chmod u+x ./clairctl 3.4.3.1.2. Retrieving and decoding the Clair configuration secret for Clair deployments on OpenShift Container Platform Use the following procedure to retrieve and decode the configuration secret for an OpenShift Container Platform provisioned Clair instance on OpenShift Container Platform. Prerequisites You have installed the clairctl command line utility tool. Procedure Enter the following command to retrieve and decode the configuration secret, and then save it to a Clair configuration YAML: USD oc get secret -n quay-enterprise example-registry-clair-config-secret -o "jsonpath={USD.data['config\.yaml']}" | base64 -d > clair-config.yaml Update the clair-config.yaml file so that the disable_updaters and airgap parameters are set to true , for example: --- indexer: airgap: true --- matcher: disable_updaters: true --- 3.4.3.1.3. 
Exporting the updaters bundle from a connected Clair instance Use the following procedure to export the updaters bundle from a Clair instance that has access to the internet. Prerequisites You have installed the clairctl command line utility tool. You have retrieved and decoded the Clair configuration secret, and saved it to a Clair config.yaml file. The disable_updaters and airgap parameters are set to true in your Clair config.yaml file. Procedure From a Clair instance that has access to the internet, use the clairctl CLI tool with your configuration file to export the updaters bundle. For example: USD ./clairctl --config ./config.yaml export-updaters updates.gz 3.4.3.1.4. Configuring access to the Clair database in the disconnected OpenShift Container Platform cluster Use the following procedure to configure access to the Clair database in your disconnected OpenShift Container Platform cluster. Prerequisites You have installed the clairctl command line utility tool. You have retrieved and decoded the Clair configuration secret, and saved it to a Clair config.yaml file. The disable_updaters and airgap parameters are set to true in your Clair config.yaml file. You have exported the updaters bundle from a Clair instance that has access to the internet. Procedure Determine your Clair database service by using the oc CLI tool, for example: USD oc get svc -n quay-enterprise Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE example-registry-clair-app ClusterIP 172.30.224.93 <none> 80/TCP,8089/TCP 4d21h example-registry-clair-postgres ClusterIP 172.30.246.88 <none> 5432/TCP 4d21h ... Forward the Clair database port so that it is accessible from the local machine. For example: USD oc port-forward -n quay-enterprise service/example-registry-clair-postgres 5432:5432 Update your Clair config.yaml file, for example: indexer: connstring: host=localhost port=5432 dbname=postgres user=postgres password=postgres sslmode=disable 1 scanlock_retry: 10 layer_scan_concurrency: 5 migrations: true scanner: repo: rhel-repository-scanner: 2 repo2cpe_mapping_file: /data/cpe-map.json package: rhel_containerscanner: 3 name2repos_mapping_file: /data/repo-map.json 1 Replace the value of the host in the multiple connstring fields with localhost . 2 For more information about the rhel-repository-scanner parameter, see "Mapping repositories to Common Product Enumeration information". 3 For more information about the rhel_containerscanner parameter, see "Mapping repositories to Common Product Enumeration information". 3.4.3.1.5. Importing the updaters bundle into the disconnected OpenShift Container Platform cluster Use the following procedure to import the updaters bundle into your disconnected OpenShift Container Platform cluster. Prerequisites You have installed the clairctl command line utility tool. You have retrieved and decoded the Clair configuration secret, and saved it to a Clair config.yaml file. The disable_updaters and airgap parameters are set to true in your Clair config.yaml file. You have exported the updaters bundle from a Clair instance that has access to the internet. You have transferred the updaters bundle into your disconnected environment. Procedure Use the clairctl CLI tool to import the updaters bundle into the Clair database that is deployed by OpenShift Container Platform. For example: USD ./clairctl --config ./clair-config.yaml import-updaters updates.gz 3.4.3.2. 
Setting up a self-managed deployment of Clair for a disconnected OpenShift Container Platform cluster Use the following procedures to set up a self-managed deployment of Clair for a disconnected OpenShift Container Platform cluster. 3.4.3.2.1. Installing the clairctl command line utility tool for a self-managed Clair deployment on OpenShift Container Platform Use the following procedure to install the clairctl CLI tool for self-managed Clair deployments on OpenShift Container Platform. Procedure Install the clairctl program for a self-managed Clair deployment by using the podman cp command, for example: USD sudo podman cp clairv4:/usr/bin/clairctl ./clairctl Set the permissions of the clairctl file so that it can be executed and run by the user, for example: USD chmod u+x ./clairctl 3.4.3.2.2. Deploying a self-managed Clair container for disconnected OpenShift Container Platform clusters Use the following procedure to deploy a self-managed Clair container for disconnected OpenShift Container Platform clusters. Prerequisites You have installed the clairctl command line utility tool. Procedure Create a folder for your Clair configuration file, for example: USD mkdir /etc/clairv4/config/ Create a Clair configuration file with the disable_updaters parameter set to true , for example: --- indexer: airgap: true --- matcher: disable_updaters: true --- Start Clair by using the container image, mounting in the configuration from the file you created: 3.4.3.2.3. Exporting the updaters bundle from a connected Clair instance Use the following procedure to export the updaters bundle from a Clair instance that has access to the internet. Prerequisites You have installed the clairctl command line utility tool. You have deployed Clair. The disable_updaters and airgap parameters are set to true in your Clair config.yaml file. Procedure From a Clair instance that has access to the internet, use the clairctl CLI tool with your configuration file to export the updaters bundle. For example: USD ./clairctl --config ./config.yaml export-updaters updates.gz 3.4.3.2.4. Configuring access to the Clair database in the disconnected OpenShift Container Platform cluster Use the following procedure to configure access to the Clair database in your disconnected OpenShift Container Platform cluster. Prerequisites You have installed the clairctl command line utility tool. You have deployed Clair. The disable_updaters and airgap parameters are set to true in your Clair config.yaml file. You have exported the updaters bundle from a Clair instance that has access to the internet. Procedure Determine your Clair database service by using the oc CLI tool, for example: USD oc get svc -n quay-enterprise Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE example-registry-clair-app ClusterIP 172.30.224.93 <none> 80/TCP,8089/TCP 4d21h example-registry-clair-postgres ClusterIP 172.30.246.88 <none> 5432/TCP 4d21h ... Forward the Clair database port so that it is accessible from the local machine. 
For example: USD oc port-forward -n quay-enterprise service/example-registry-clair-postgres 5432:5432 Update your Clair config.yaml file, for example: indexer: connstring: host=localhost port=5432 dbname=postgres user=postgres password=postgres sslmode=disable 1 scanlock_retry: 10 layer_scan_concurrency: 5 migrations: true scanner: repo: rhel-repository-scanner: 2 repo2cpe_mapping_file: /data/cpe-map.json package: rhel_containerscanner: 3 name2repos_mapping_file: /data/repo-map.json 1 Replace the value of the host in the multiple connstring fields with localhost . 2 For more information about the rhel-repository-scanner parameter, see "Mapping repositories to Common Product Enumeration information". 3 For more information about the rhel_containerscanner parameter, see "Mapping repositories to Common Product Enumeration information". 3.4.3.2.5. Importing the updaters bundle into the disconnected OpenShift Container Platform cluster Use the following procedure to import the updaters bundle into your disconnected OpenShift Container Platform cluster. Prerequisites You have installed the clairctl command line utility tool. You have deployed Clair. The disable_updaters and airgap parameters are set to true in your Clair config.yaml file. You have exported the updaters bundle from a Clair instance that has access to the internet. You have transferred the updaters bundle into your disconnected environment. Procedure Use the clairctl CLI tool to import the updaters bundle into the Clair database that is deployed by OpenShift Container Platform: USD ./clairctl --config ./clair-config.yaml import-updaters updates.gz 3.4.4. Mapping repositories to Common Product Enumeration information Note Currently, mapping repositories to Common Product Enumeration information is not supported on IBM Power and IBM Z. Clair's Red Hat Enterprise Linux (RHEL) scanner relies on a Common Product Enumeration (CPE) file to map RPM packages to the corresponding security data to produce matching results. These files are owned by product security and updated daily. The CPE file must be present, or access to the file must be allowed, for the scanner to properly process RPM packages. If the file is not present, RPM packages installed in the container image will not be scanned. Table 3.1. Clair CPE mapping files CPE Link to JSON mapping file repos2cpe Red Hat Repository-to-CPE JSON names2repos Red Hat Name-to-Repos JSON . In addition to uploading CVE information to the database for disconnected Clair installations, you must also make the mapping file available locally: For standalone Red Hat Quay and Clair deployments, the mapping file must be loaded into the Clair pod. For Red Hat Quay on OpenShift Container Platform deployments, you must set the Clair component to unmanaged . Then, Clair must be deployed manually, setting the configuration to load a local copy of the mapping file. 3.4.4.1. Mapping repositories to Common Product Enumeration example configuration Use the repo2cpe_mapping_file and name2repos_mapping_file fields in your Clair configuration to include the CPE JSON mapping files. For example: indexer: scanner: repo: rhel-repository-scanner: repo2cpe_mapping_file: /data/cpe-map.json package: rhel_containerscanner: name2repos_mapping_file: /data/repo-map.json For more information, see How to accurately match OVAL security data to installed RPMs .
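For a self-managed deployment, one way to satisfy this requirement is to place the two JSON files in a local directory and mount that directory into the Clair container at /data, so that the repo2cpe_mapping_file and name2repos_mapping_file paths shown above resolve. The following sketch is illustrative only: the /etc/clairv4/mapping-data directory name is an assumption, while the remaining flags and image reference are taken from the container start command shown earlier in this section.

# Copy the mapping files into a local directory on the disconnected host,
# using the file names referenced in the Clair configuration above.
mkdir -p /etc/clairv4/mapping-data
cp cpe-map.json repo-map.json /etc/clairv4/mapping-data/

# Start Clair with the mapping data mounted at /data, matching the paths
# set in repo2cpe_mapping_file and name2repos_mapping_file.
sudo podman run -it --rm --name clairv4 \
  -p 8081:8081 -p 8088:8088 \
  -e CLAIR_CONF=/clair/config.yaml \
  -e CLAIR_MODE=combo \
  -v /etc/clairv4/config:/clair:Z \
  -v /etc/clairv4/mapping-data:/data:Z \
  registry.redhat.io/quay/clair-rhel8:v3.12.8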
[ "podman pull ubuntu:20.04", "sudo podman tag docker.io/library/ubuntu:20.04 <quay-server.example.com>/<user-name>/ubuntu:20.04", "sudo podman push --tls-verify=false quay-server.example.com/quayadmin/ubuntu:20.04", "apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: quay370 spec: configBundleSecret: config-bundle-secret components: - kind: objectstorage managed: false - kind: route managed: true - kind: tls managed: false - kind: clairpostgres managed: false", "oc create secret generic --from-file config.yaml=./config.yaml --from-file extra_ca_cert_rds-ca-2019-root.pem=./rds-ca-2019-root.pem --from-file clair-config.yaml=./clair-config.yaml --from-file ssl.cert=./ssl.cert --from-file ssl.key=./ssl.key config-bundle-secret", "indexer: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslrootcert=/run/certs/rds-ca-2019-root.pem sslmode=verify-ca layer_scan_concurrency: 6 migrations: true scanlock_retry: 11 log_level: debug matcher: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslrootcert=/run/certs/rds-ca-2019-root.pem sslmode=verify-ca migrations: true metrics: name: prometheus notifier: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslrootcert=/run/certs/rds-ca-2019-root.pem sslmode=verify-ca migrations: true", "apiVersion: v1 kind: Secret metadata: name: config-bundle-secret namespace: quay-enterprise data: config.yaml: <base64 encoded Quay config> clair-config.yaml: <base64 encoded Clair config> extra_ca_cert_<name>: <base64 encoded ca cert> ssl.crt: <base64 encoded SSL certificate> ssl.key: <base64 encoded SSL private key>", "oc get pods -n <namespace>", "NAME READY STATUS RESTARTS AGE f192fe4a-c802-4275-bcce-d2031e635126-9l2b5-25lg2 1/1 Running 0 7s", "apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: quay370 spec: configBundleSecret: config-bundle-secret components: - kind: objectstorage managed: false - kind: route managed: true - kind: tls managed: false - kind: clairpostgres managed: true", "oc create secret generic --from-file config.yaml=./config.yaml --from-file extra_ca_cert_rds-ca-2019-root.pem=./rds-ca-2019-root.pem --from-file clair-config.yaml=./clair-config.yaml config-bundle-secret", "indexer: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslmode=disable layer_scan_concurrency: 6 migrations: true scanlock_retry: 11 log_level: debug matcher: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslmode=disable migrations: true metrics: name: prometheus notifier: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslmode=disable migrations: true", "apiVersion: v1 kind: Secret metadata: name: config-bundle-secret namespace: quay-enterprise data: config.yaml: <base64 encoded Quay config> clair-config.yaml: <base64 encoded Clair config>", "oc get pods -n <namespace>", "NAME READY STATUS RESTARTS AGE f192fe4a-c802-4275-bcce-d2031e635126-9l2b5-25lg2 1/1 Running 0 7s", "oc -n quay-enterprise exec example-registry-clair-app-64dd48f866-6ptgw -- cat /usr/bin/clairctl > clairctl", "chmod u+x ./clairctl", "oc get secret -n quay-enterprise example-registry-clair-config-secret -o \"jsonpath={USD.data['config\\.yaml']}\" | base64 -d > clair-config.yaml", "--- indexer: airgap: true --- matcher: disable_updaters: true ---", "./clairctl --config ./config.yaml 
export-updaters updates.gz", "oc get svc -n quay-enterprise", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE example-registry-clair-app ClusterIP 172.30.224.93 <none> 80/TCP,8089/TCP 4d21h example-registry-clair-postgres ClusterIP 172.30.246.88 <none> 5432/TCP 4d21h", "oc port-forward -n quay-enterprise service/example-registry-clair-postgres 5432:5432", "indexer: connstring: host=localhost port=5432 dbname=postgres user=postgres password=postgres sslmode=disable 1 scanlock_retry: 10 layer_scan_concurrency: 5 migrations: true scanner: repo: rhel-repository-scanner: 2 repo2cpe_mapping_file: /data/cpe-map.json package: rhel_containerscanner: 3 name2repos_mapping_file: /data/repo-map.json", "./clairctl --config ./clair-config.yaml import-updaters updates.gz", "sudo podman cp clairv4:/usr/bin/clairctl ./clairctl", "chmod u+x ./clairctl", "mkdir /etc/clairv4/config/", "--- indexer: airgap: true --- matcher: disable_updaters: true ---", "sudo podman run -it --rm --name clairv4 -p 8081:8081 -p 8088:8088 -e CLAIR_CONF=/clair/config.yaml -e CLAIR_MODE=combo -v /etc/clairv4/config:/clair:Z registry.redhat.io/quay/clair-rhel8:v3.12.8", "./clairctl --config ./config.yaml export-updaters updates.gz", "oc get svc -n quay-enterprise", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE example-registry-clair-app ClusterIP 172.30.224.93 <none> 80/TCP,8089/TCP 4d21h example-registry-clair-postgres ClusterIP 172.30.246.88 <none> 5432/TCP 4d21h", "oc port-forward -n quay-enterprise service/example-registry-clair-postgres 5432:5432", "indexer: connstring: host=localhost port=5432 dbname=postgres user=postgres password=postgres sslmode=disable 1 scanlock_retry: 10 layer_scan_concurrency: 5 migrations: true scanner: repo: rhel-repository-scanner: 2 repo2cpe_mapping_file: /data/cpe-map.json package: rhel_containerscanner: 3 name2repos_mapping_file: /data/repo-map.json", "./clairctl --config ./clair-config.yaml import-updaters updates.gz", "indexer: scanner: repo: rhel-repository-scanner: repo2cpe_mapping_file: /data/cpe-map.json package: rhel_containerscanner: name2repos_mapping_file: /data/repo-map.json" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/red_hat_quay_operator_features/clair-vulnerability-scanner
Chapter 6. Examples
Chapter 6. Examples This chapter demonstrates the use of AMQ JMS through example programs. For more examples, see the AMQ JMS example suite and the Qpid JMS examples . 6.1. Configuring the JNDI context Applications using JMS typically use JNDI to obtain the ConnectionFactory and Destination objects used by the application. This keeps the configuration separate from the program and insulates it from the particular client implementation. For the purpose of using these examples, a file named jndi.properties should be placed on the classpath to configure the JNDI context, as detailed previously . The contents of the jndi.properties file should match what is shown below, which establishes that the client's InitialContextFactory implementation should be used, configures a ConnectionFactory to connect to a local server, and defines a destination queue named queue . 6.2. Sending messages This example first creates a JNDI Context , uses it to look up a ConnectionFactory and Destination , creates and starts a Connection using the factory, and then creates a Session . Then a MessageProducer is created to the Destination , and a message is sent using it. The Connection is then closed, and the program exits. A runnable variant of this Sender example is in the <source-dir>/qpid-jms-examples directory, along with the Hello World example covered previously in Chapter 3, Getting started . Example: Sending messages package org.jboss.amq.example; import javax.jms.Connection; import javax.jms.ConnectionFactory; import javax.jms.DeliveryMode; import javax.jms.Destination; import javax.jms.ExceptionListener; import javax.jms.JMSException; import javax.jms.Message; import javax.jms.MessageProducer; import javax.jms.Session; import javax.jms.TextMessage; import javax.naming.Context; import javax.naming.InitialContext; public class Sender { public static void main(String[] args) throws Exception { try { Context context = new InitialContext(); 1 ConnectionFactory factory = (ConnectionFactory) context.lookup("myFactoryLookup"); Destination destination = (Destination) context.lookup("myDestinationLookup"); 2 Connection connection = factory.createConnection("<username>", "<password>"); connection.setExceptionListener(new MyExceptionListener()); connection.start(); 3 Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE); 4 MessageProducer messageProducer = session.createProducer(destination); 5 TextMessage message = session.createTextMessage("Message Text!"); 6 messageProducer.send(message, DeliveryMode.NON_PERSISTENT, Message.DEFAULT_PRIORITY, Message.DEFAULT_TIME_TO_LIVE); 7 connection.close(); 8 } catch (Exception exp) { System.out.println("Caught exception, exiting."); exp.printStackTrace(System.out); System.exit(1); } } private static class MyExceptionListener implements ExceptionListener { @Override public void onException(JMSException exception) { System.out.println("Connection ExceptionListener fired, exiting."); exception.printStackTrace(System.out); System.exit(1); } } } 1 Creates the JNDI Context to look up ConnectionFactory and Destination objects. The configuration is picked up from the jndi.properties file as detailed earlier . 2 The ConnectionFactory and Destination objects are retrieved from the JNDI Context using their lookup names. 3 The factory is used to create the Connection , which then has an ExceptionListener registered and is then started. 
The credentials given when creating the connection will typically be taken from an appropriate external configuration source, ensuring they remain separate from the application itself and can be updated independently. 4 A non-transacted, auto-acknowledge Session is created on the Connection . 5 The MessageProducer is created to send messages to the Destination . 6 A TextMessage is created with the given content. 7 The TextMessage is sent. It is sent non-persistent, with default priority and no expiration. 8 The Connection is closed. The Session and MessageProducer are closed implicitly. Note that this is only an example. A real-world application would typically use a long-lived MessageProducer and send many messages using it over time. Opening and then closing a Connection , Session , and MessageProducer per message is generally not efficient. 6.3. Receiving messages This example starts by creating a JNDI Context, using it to look up a ConnectionFactory and Destination , creating and starting a Connection using the factory, and then creates a Session . Then a MessageConsumer is created for the Destination , a message is received using it, and its contents are printed to the console. The Connection is then closed and the program exits. The same JNDI configuration is used as in the sending example . An executable variant of this Receiver example is contained within the examples directory of the client distribution, along with the Hello World example covered previously in Chapter 3, Getting started . Example: Receiving messages package org.jboss.amq.example; import javax.jms.Connection; import javax.jms.ConnectionFactory; import javax.jms.Destination; import javax.jms.ExceptionListener; import javax.jms.JMSException; import javax.jms.Message; import javax.jms.MessageConsumer; import javax.jms.Session; import javax.jms.TextMessage; import javax.naming.Context; import javax.naming.InitialContext; public class Receiver { public static void main(String[] args) throws Exception { try { Context context = new InitialContext(); 1 ConnectionFactory factory = (ConnectionFactory) context.lookup("myFactoryLookup"); Destination destination = (Destination) context.lookup("myDestinationLookup"); 2 Connection connection = factory.createConnection("<username>", "<password>"); connection.setExceptionListener(new MyExceptionListener()); connection.start(); 3 Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE); 4 MessageConsumer messageConsumer = session.createConsumer(destination); 5 Message message = messageConsumer.receive(5000); 6 if (message == null) { 7 System.out.println("A message was not received within given time."); } else { System.out.println("Received message: " + ((TextMessage) message).getText()); } connection.close(); 8 } catch (Exception exp) { System.out.println("Caught exception, exiting."); exp.printStackTrace(System.out); System.exit(1); } } private static class MyExceptionListener implements ExceptionListener { @Override public void onException(JMSException exception) { System.out.println("Connection ExceptionListener fired, exiting."); exception.printStackTrace(System.out); System.exit(1); } } } 1 Creates the JNDI Context to look up ConnectionFactory and Destination objects. The configuration is picked up from the jndi.properties file as detailed earlier . 2 The ConnectionFactory and Destination objects are retrieved from the JNDI Context using their lookup names. 
3 The factory is used to create the Connection , which then has an ExceptionListener registered and is then started. The credentials given when creating the connection will typically be taken from an appropriate external configuration source, ensuring they remain separate from the application itself and can be updated independently. 4 A non-transacted, auto-acknowledge Session is created on the Connection . 5 The MessageConsumer is created to receive messages from the Destination . 6 A call to receive a message is made with a five second timeout. 7 The result is checked, and if a message was received, its contents are printed, or notice that no message was received. The result is cast explicitly to TextMessage as this is what we know the Sender sent. 8 The Connection is closed. The Session and MessageConsumer are closed implicitly. Note that this is only an example. A real-world application would typically use a long-lived MessageConsumer and receive many messages using it over time. Opening and then closing a Connection , Session , and MessageConsumer for each message is generally not efficient.
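Where a long-lived consumer is required, the JMS API also allows messages to be delivered asynchronously by registering a MessageListener on the consumer instead of polling with receive . The following sketch is not part of the shipped example suite; it reuses the same JNDI lookup names and credential placeholders as the examples above and assumes, as the Sender does, that the messages are TextMessage instances.

package org.jboss.amq.example;

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Destination;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageListener;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.Context;
import javax.naming.InitialContext;

public class AsyncReceiver {
    public static void main(String[] args) throws Exception {
        Context context = new InitialContext();
        ConnectionFactory factory = (ConnectionFactory) context.lookup("myFactoryLookup");
        Destination destination = (Destination) context.lookup("myDestinationLookup");

        Connection connection = factory.createConnection("<username>", "<password>");
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer messageConsumer = session.createConsumer(destination);

        // Deliver messages asynchronously: the provider invokes onMessage()
        // for each message instead of the application polling with receive().
        messageConsumer.setMessageListener(new MessageListener() {
            @Override
            public void onMessage(Message message) {
                try {
                    System.out.println("Received message: " + ((TextMessage) message).getText());
                } catch (JMSException exception) {
                    exception.printStackTrace(System.out);
                }
            }
        });

        // Start delivery only after the listener is registered.
        connection.start();

        // Keep the consumer alive for a while; a real application would tie
        // this to its own shutdown logic.
        Thread.sleep(60000);
        connection.close();
    }
}

As with the earlier examples, closing the Connection implicitly closes the Session and MessageConsumer .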
[ "Configure the InitialContextFactory class to use java.naming.factory.initial = org.apache.qpid.jms.jndi.JmsInitialContextFactory Configure the ConnectionFactory connectionfactory.myFactoryLookup = amqp://localhost:5672 Configure the destination queue.myDestinationLookup = queue", "package org.jboss.amq.example; import javax.jms.Connection; import javax.jms.ConnectionFactory; import javax.jms.DeliveryMode; import javax.jms.Destination; import javax.jms.ExceptionListener; import javax.jms.JMSException; import javax.jms.Message; import javax.jms.MessageProducer; import javax.jms.Session; import javax.jms.TextMessage; import javax.naming.Context; import javax.naming.InitialContext; public class Sender { public static void main(String[] args) throws Exception { try { Context context = new InitialContext(); 1 ConnectionFactory factory = (ConnectionFactory) context.lookup(\"myFactoryLookup\"); Destination destination = (Destination) context.lookup(\"myDestinationLookup\"); 2 Connection connection = factory.createConnection(\"<username>\", \"<password>\"); connection.setExceptionListener(new MyExceptionListener()); connection.start(); 3 Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE); 4 MessageProducer messageProducer = session.createProducer(destination); 5 TextMessage message = session.createTextMessage(\"Message Text!\"); 6 messageProducer.send(message, DeliveryMode.NON_PERSISTENT, Message.DEFAULT_PRIORITY, Message.DEFAULT_TIME_TO_LIVE); 7 connection.close(); 8 } catch (Exception exp) { System.out.println(\"Caught exception, exiting.\"); exp.printStackTrace(System.out); System.exit(1); } } private static class MyExceptionListener implements ExceptionListener { @Override public void onException(JMSException exception) { System.out.println(\"Connection ExceptionListener fired, exiting.\"); exception.printStackTrace(System.out); System.exit(1); } } }", "package org.jboss.amq.example; import javax.jms.Connection; import javax.jms.ConnectionFactory; import javax.jms.Destination; import javax.jms.ExceptionListener; import javax.jms.JMSException; import javax.jms.Message; import javax.jms.MessageConsumer; import javax.jms.Session; import javax.jms.TextMessage; import javax.naming.Context; import javax.naming.InitialContext; public class Receiver { public static void main(String[] args) throws Exception { try { Context context = new InitialContext(); 1 ConnectionFactory factory = (ConnectionFactory) context.lookup(\"myFactoryLookup\"); Destination destination = (Destination) context.lookup(\"myDestinationLookup\"); 2 Connection connection = factory.createConnection(\"<username>\", \"<password>\"); connection.setExceptionListener(new MyExceptionListener()); connection.start(); 3 Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE); 4 MessageConsumer messageConsumer = session.createConsumer(destination); 5 Message message = messageConsumer.receive(5000); 6 if (message == null) { 7 System.out.println(\"A message was not received within given time.\"); } else { System.out.println(\"Received message: \" + ((TextMessage) message).getText()); } connection.close(); 8 } catch (Exception exp) { System.out.println(\"Caught exception, exiting.\"); exp.printStackTrace(System.out); System.exit(1); } } private static class MyExceptionListener implements ExceptionListener { @Override public void onException(JMSException exception) { System.out.println(\"Connection ExceptionListener fired, exiting.\"); exception.printStackTrace(System.out); System.exit(1); } } }" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_jms_client/examples
Chapter 4. Configuring persistent storage
Chapter 4. Configuring persistent storage 4.1. Persistent storage using AWS Elastic Block Store OpenShift Container Platform supports AWS Elastic Block Store volumes (EBS). You can provision your OpenShift Container Platform cluster with persistent storage by using Amazon EC2 . Some familiarity with Kubernetes and AWS is assumed. The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. AWS Elastic Block Store volumes can be provisioned dynamically. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users. You can define a KMS key to encrypt container-persistent volumes on AWS. Important OpenShift Container Platform defaults to using an in-tree (non-CSI) plugin to provision AWS EBS storage. In future OpenShift Container Platform versions, volumes provisioned using existing in-tree plugins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration . After full migration, in-tree plugins will eventually be removed in future versions of OpenShift Container Platform. Important High-availability of storage in the infrastructure is left to the underlying storage provider. For OpenShift Container Platform, automatic migration from AWS EBS in-tree to the Container Storage Interface (CSI) driver is available as a Technology Preview (TP) feature. With migration enabled, volumes provisioned using the existing in-tree driver are automatically migrated to use the AWS EBS CSI driver. For more information, see CSI automatic migration feature . 4.1.1. Creating the EBS storage class Storage classes are used to differentiate and delineate storage levels and usages. By defining a storage class, users can obtain dynamically provisioned persistent volumes. 4.1.2. Creating the persistent volume claim Prerequisites Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. Procedure In the OpenShift Container Platform console, click Storage Persistent Volume Claims . In the persistent volume claims overview, click Create Persistent Volume Claim . Define the desired options on the page that appears. Select the storage class created previously from the drop-down menu. Enter a unique name for the storage claim. Select the access mode. This determines the read and write access for the created storage claim. Define the size of the storage claim. Click Create to create the persistent volume claim and generate a persistent volume. 4.1.3. Volume format Before OpenShift Container Platform mounts the volume and passes it to a container, it checks that it contains a file system as specified by the fsType parameter in the persistent volume definition. If the device is not formatted with the file system, all data from the device is erased and the device is automatically formatted with the given file system. This allows using unformatted AWS volumes as persistent volumes, because OpenShift Container Platform formats them before the first use. 4.1.4. 
Maximum number of EBS volumes on a node By default, OpenShift Container Platform supports a maximum of 39 EBS volumes attached to one node. This limit is consistent with the AWS volume limits . The volume limit depends on the instance type. Important As a cluster administrator, you must use either in-tree or Container Storage Interface (CSI) volumes and their respective storage classes, but never both volume types at the same time. The maximum attached EBS volume number is counted separately for in-tree and CSI volumes. 4.1.5. Encrypting container persistent volumes on AWS with a KMS key Defining a KMS key to encrypt container-persistent volumes on AWS is useful when you have explicit compliance and security guidelines when deploying to AWS. Prerequisites Underlying infrastructure must contain storage. You must create a customer KMS key on AWS. Procedure Create a storage class: USD cat << EOF | oc create -f - apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 parameters: fsType: ext4 2 encrypted: "true" kmsKeyId: keyvalue 3 provisioner: ebs.csi.aws.com reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer EOF 1 Specifies the name of the storage class. 2 File system that is created on provisioned volumes. 3 Specifies the full Amazon Resource Name (ARN) of the key to use when encrypting the container-persistent volume. If you do not provide any key, but the encrypted field is set to true , then the default KMS key is used. See Finding the key ID and key ARN on AWS in the AWS documentation. Create a persistent volume claim (PVC) with the storage class specifying the KMS key: USD cat << EOF | oc create -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: mypvc spec: accessModes: - ReadWriteOnce volumeMode: Filesystem storageClassName: <storage-class-name> resources: requests: storage: 1Gi EOF Create workload containers to consume the PVC: USD cat << EOF | oc create -f - kind: Pod metadata: name: mypod spec: containers: - name: httpd image: quay.io/centos7/httpd-24-centos7 ports: - containerPort: 80 volumeMounts: - mountPath: /mnt/storage name: data volumes: - name: data persistentVolumeClaim: claimName: mypvc EOF 4.1.6. Additional resources See AWS Elastic Block Store CSI Driver Operator for information about accessing additional storage options, such as volume snapshots, that are not possible with in-tree volume plugins. 4.2. Persistent storage using Azure OpenShift Container Platform supports Microsoft Azure Disk volumes. You can provision your OpenShift Container Platform cluster with persistent storage using Azure. Some familiarity with Kubernetes and Azure is assumed. The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. Azure Disk volumes can be provisioned dynamically. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users. Important OpenShift Container Platform defaults to using an in-tree (non-CSI) plugin to provision Azure Disk storage. In future OpenShift Container Platform versions, volumes provisioned using existing in-tree plugins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. 
Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration . After full migration, in-tree plugins will eventually be removed in future versions of OpenShift Container Platform. Important High availability of storage in the infrastructure is left to the underlying storage provider. Additional resources Microsoft Azure Disk 4.2.1. Creating the Azure storage class Storage classes are used to differentiate and delineate storage levels and usages. By defining a storage class, users can obtain dynamically provisioned persistent volumes. Procedure In the OpenShift Container Platform console, click Storage Storage Classes . In the storage class overview, click Create Storage Class . Define the desired options on the page that appears. Enter a name to reference the storage class. Enter an optional description. Select the reclaim policy. Select kubernetes.io/azure-disk from the drop down list. Enter the storage account type. This corresponds to your Azure storage account SKU tier. Valid options are Premium_LRS , Standard_LRS , StandardSSD_LRS , and UltraSSD_LRS . Enter the kind of account. Valid options are shared , dedicated, and managed . Important Red Hat only supports the use of kind: Managed in the storage class. With Shared and Dedicated , Azure creates unmanaged disks, while OpenShift Container Platform creates a managed disk for machine OS (root) disks. But because Azure Disk does not allow the use of both managed and unmanaged disks on a node, unmanaged disks created with Shared or Dedicated cannot be attached to OpenShift Container Platform nodes. Enter additional parameters for the storage class as desired. Click Create to create the storage class. Additional resources Azure Disk Storage Class 4.2.2. Creating the persistent volume claim Prerequisites Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. Procedure In the OpenShift Container Platform console, click Storage Persistent Volume Claims . In the persistent volume claims overview, click Create Persistent Volume Claim . Define the desired options on the page that appears. Select the storage class created previously from the drop-down menu. Enter a unique name for the storage claim. Select the access mode. This determines the read and write access for the created storage claim. Define the size of the storage claim. Click Create to create the persistent volume claim and generate a persistent volume. 4.2.3. Volume format Before OpenShift Container Platform mounts the volume and passes it to a container, it checks that it contains a file system as specified by the fsType parameter in the persistent volume definition. If the device is not formatted with the file system, all data from the device is erased and the device is automatically formatted with the given file system. This allows using unformatted Azure volumes as persistent volumes, because OpenShift Container Platform formats them before the first use. 4.2.4. Machine sets that deploy machines with ultra disks using PVCs You can create a machine set running on Azure that deploys machines with ultra disks. Ultra disks are high-performance storage that are intended for use with the most demanding data workloads. Both the in-tree plugin and CSI driver support using PVCs to enable ultra disks. 
You can also deploy machines with ultra disks as data disks without creating a PVC. Additional resources Microsoft Azure ultra disks documentation Machine sets that deploy machines on ultra disks using CSI PVCs Machine sets that deploy machines on ultra disks as data disks 4.2.4.1. Creating machines with ultra disks by using machine sets You can deploy machines with ultra disks on Azure by editing your machine set YAML file. Prerequisites Have an existing Microsoft Azure cluster. Procedure Copy an existing Azure MachineSet custom resource (CR) and edit it by running the following command: USD oc edit machineset <machine-set-name> where <machine-set-name> is the machine set that you want to provision machines with ultra disks. Add the following lines in the positions indicated: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet ... spec: ... template: ... spec: metadata: ... labels: ... disk: ultrassd 1 ... providerSpec: value: ... ultraSSDCapability: Enabled 2 ... 1 Specify a label to use to select a node that is created by this machine set. This procedure uses disk.ultrassd for this value. 2 These lines enable the use of ultra disks. Create a machine set using the updated configuration by running the following command: USD oc create -f <machine-set-name>.yaml Create a storage class that contains the following YAML definition: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: ultra-disk-sc 1 parameters: cachingMode: None diskIopsReadWrite: "2000" 2 diskMbpsReadWrite: "320" 3 kind: managed skuname: UltraSSD_LRS provisioner: disk.csi.azure.com 4 reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer 5 1 Specify the name of the storage class. This procedure uses ultra-disk-sc for this value. 2 Specify the number of IOPS for the storage class. 3 Specify the throughput in MBps for the storage class. 4 For Azure Kubernetes Service (AKS) version 1.21 or later, use disk.csi.azure.com . For earlier versions of AKS, use kubernetes.io/azure-disk . 5 Optional: Specify this parameter to wait for the creation of the pod that will use the disk. Create a persistent volume claim (PVC) to reference the ultra-disk-sc storage class that contains the following YAML definition: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: ultra-disk 1 spec: accessModes: - ReadWriteOnce storageClassName: ultra-disk-sc 2 resources: requests: storage: 4Gi 3 1 Specify the name of the PVC. This procedure uses ultra-disk for this value. 2 This PVC references the ultra-disk-sc storage class. 3 Specify the size for the storage class. The minimum value is 4Gi . Create a pod that contains the following YAML definition: apiVersion: v1 kind: Pod metadata: name: nginx-ultra spec: nodeSelector: disk: ultrassd 1 containers: - name: nginx-ultra image: alpine:latest command: - "sleep" - "infinity" volumeMounts: - mountPath: "/mnt/azure" name: volume volumes: - name: volume persistentVolumeClaim: claimName: ultra-disk 2 1 Specify the label of the machine set that enables the use of ultra disks. This procedure uses disk.ultrassd for this value. 2 This pod references the ultra-disk PVC. Verification Validate that the machines are created by running the following command: USD oc get machines The machines should be in the Running state. 
For a machine that is running and has a node attached, validate the partition by running the following command: USD oc debug node/<node-name> -- chroot /host lsblk In this command, oc debug node/<node-name> starts a debugging shell on the node <node-name> and passes a command with -- . The passed command chroot /host provides access to the underlying host OS binaries, and lsblk shows the block devices that are attached to the host OS machine. Next steps To use an ultra disk from within a pod, create a workload that uses the mount point. Create a YAML file similar to the following example: apiVersion: v1 kind: Pod metadata: name: ssd-benchmark1 spec: containers: - name: ssd-benchmark1 image: nginx ports: - containerPort: 80 name: "http-server" volumeMounts: - name: lun0p1 mountPath: "/tmp" volumes: - name: lun0p1 hostPath: path: /var/lib/lun0p1 type: DirectoryOrCreate nodeSelector: disk: ultrassd 4.2.4.2. Troubleshooting resources for machine sets that enable ultra disks Use the information in this section to understand and recover from issues you might encounter. 4.2.4.2.1. Unable to mount a persistent volume claim backed by an ultra disk If there is an issue mounting a persistent volume claim backed by an ultra disk, the pod becomes stuck in the ContainerCreating state and an alert is triggered. For example, if the additionalCapabilities.ultraSSDEnabled parameter is not set on the machine that backs the node that hosts the pod, the following error message appears: StorageAccountType UltraSSD_LRS can be used only when additionalCapabilities.ultraSSDEnabled is set. To resolve this issue, describe the pod by running the following command: USD oc -n <stuck_pod_namespace> describe pod <stuck_pod_name> 4.3. Persistent storage using Azure File OpenShift Container Platform supports Microsoft Azure File volumes. You can provision your OpenShift Container Platform cluster with persistent storage using Azure. Some familiarity with Kubernetes and Azure is assumed. The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. You can provision Azure File volumes dynamically. Persistent volumes are not bound to a single project or namespace, and you can share them across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace, and can be requested by users for use in applications. Important High availability of storage in the infrastructure is left to the underlying storage provider. Important Azure File volumes use Server Message Block. Important In future OpenShift Container Platform versions, volumes provisioned using existing in-tree plugins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration . After full migration, in-tree plugins will eventually be removed in future versions of OpenShift Container Platform. Additional resources Azure Files 4.3.1. Create the Azure File share persistent volume claim To create the persistent volume claim, you must first define a Secret object that contains the Azure account and key.
This secret is used in the PersistentVolume definition, and will be referenced by the persistent volume claim for use in applications. Prerequisites An Azure File share exists. The credentials to access this share, specifically the storage account and key, are available. Procedure Create a Secret object that contains the Azure File credentials: USD oc create secret generic <secret-name> --from-literal=azurestorageaccountname=<storage-account> \ 1 --from-literal=azurestorageaccountkey=<storage-account-key> 2 1 The Azure File storage account name. 2 The Azure File storage account key. Create a PersistentVolume object that references the Secret object you created: apiVersion: "v1" kind: "PersistentVolume" metadata: name: "pv0001" 1 spec: capacity: storage: "5Gi" 2 accessModes: - "ReadWriteOnce" storageClassName: azure-file-sc azureFile: secretName: <secret-name> 3 shareName: share-1 4 readOnly: false 1 The name of the persistent volume. 2 The size of this persistent volume. 3 The name of the secret that contains the Azure File share credentials. 4 The name of the Azure File share. Create a PersistentVolumeClaim object that maps to the persistent volume you created: apiVersion: "v1" kind: "PersistentVolumeClaim" metadata: name: "claim1" 1 spec: accessModes: - "ReadWriteOnce" resources: requests: storage: "5Gi" 2 storageClassName: azure-file-sc 3 volumeName: "pv0001" 4 1 The name of the persistent volume claim. 2 The size of this persistent volume claim. 3 The name of the storage class that is used to provision the persistent volume. Specify the storage class used in the PersistentVolume definition. 4 The name of the existing PersistentVolume object that references the Azure File share. 4.3.2. Mount the Azure File share in a pod After the persistent volume claim has been created, it can be used inside by an application. The following example demonstrates mounting this share inside of a pod. Prerequisites A persistent volume claim exists that is mapped to the underlying Azure File share. Procedure Create a pod that mounts the existing persistent volume claim: apiVersion: v1 kind: Pod metadata: name: pod-name 1 spec: containers: ... volumeMounts: - mountPath: "/data" 2 name: azure-file-share volumes: - name: azure-file-share persistentVolumeClaim: claimName: claim1 3 1 The name of the pod. 2 The path to mount the Azure File share inside the pod. Do not mount to the container root, / , or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. It is safe to mount the host by using /host . 3 The name of the PersistentVolumeClaim object that has been previously created. 4.4. Persistent storage using Cinder OpenShift Container Platform supports OpenStack Cinder. Some familiarity with Kubernetes and OpenStack is assumed. Cinder volumes can be provisioned dynamically. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users. Important OpenShift Container Platform defaults to using an in-tree (non-CSI) plugin to provision Cinder storage. In future OpenShift Container Platform versions, volumes provisioned using existing in-tree plugins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. 
Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration . After full migration, in-tree plugins will eventually be removed in future versions of OpenShift Container Platform. Additional resources For more information about how OpenStack Block Storage provides persistent block storage management for virtual hard drives, see OpenStack Cinder . 4.4.1. Manual provisioning with Cinder Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. Prerequisites OpenShift Container Platform configured for Red Hat OpenStack Platform (RHOSP) Cinder volume ID 4.4.1.1. Creating the persistent volume You must define your persistent volume (PV) in an object definition before creating it in OpenShift Container Platform: Procedure Save your object definition to a file. cinder-persistentvolume.yaml apiVersion: "v1" kind: "PersistentVolume" metadata: name: "pv0001" 1 spec: capacity: storage: "5Gi" 2 accessModes: - "ReadWriteOnce" cinder: 3 fsType: "ext3" 4 volumeID: "f37a03aa-6212-4c62-a805-9ce139fab180" 5 1 The name of the volume that is used by persistent volume claims or pods. 2 The amount of storage allocated to this volume. 3 Indicates cinder for Red Hat OpenStack Platform (RHOSP) Cinder volumes. 4 The file system that is created when the volume is mounted for the first time. 5 The Cinder volume to use. Important Do not change the fstype parameter value after the volume is formatted and provisioned. Changing this value can result in data loss and pod failure. Create the object definition file you saved in the step. USD oc create -f cinder-persistentvolume.yaml 4.4.1.2. Persistent volume formatting You can use unformatted Cinder volumes as PVs because OpenShift Container Platform formats them before the first use. Before OpenShift Container Platform mounts the volume and passes it to a container, the system checks that it contains a file system as specified by the fsType parameter in the PV definition. If the device is not formatted with the file system, all data from the device is erased and the device is automatically formatted with the given file system. 4.4.1.3. Cinder volume security If you use Cinder PVs in your application, configure security for their deployment configurations. Prerequisites An SCC must be created that uses the appropriate fsGroup strategy. Procedure Create a service account and add it to the SCC: USD oc create serviceaccount <service_account> USD oc adm policy add-scc-to-user <new_scc> -z <service_account> -n <project> In your application's deployment configuration, provide the service account name and securityContext : apiVersion: v1 kind: ReplicationController metadata: name: frontend-1 spec: replicas: 1 1 selector: 2 name: frontend template: 3 metadata: labels: 4 name: frontend 5 spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always serviceAccountName: <service_account> 6 securityContext: fsGroup: 7777 7 1 The number of copies of the pod to run. 2 The label selector of the pod to run. 3 A template for the pod that the controller creates. 4 The labels on the pod. They must include labels from the label selector. 5 The maximum name length after expanding any parameters is 63 characters. 6 Specifies the service account you created. 7 Specifies an fsGroup for the pods. 4.5. 
Persistent storage using Fibre Channel OpenShift Container Platform supports Fibre Channel, allowing you to provision your OpenShift Container Platform cluster with persistent storage using Fibre channel volumes. Some familiarity with Kubernetes and Fibre Channel is assumed. Important Persistent storage using Fibre Channel is not supported on ARM architecture based infrastructures. The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users. Important High availability of storage in the infrastructure is left to the underlying storage provider. Additional resources Using Fibre Channel devices 4.5.1. Provisioning To provision Fibre Channel volumes using the PersistentVolume API the following must be available: The targetWWNs (array of Fibre Channel target's World Wide Names). A valid LUN number. The filesystem type. A persistent volume and a LUN have a one-to-one mapping between them. Prerequisites Fibre Channel LUNs must exist in the underlying infrastructure. PersistentVolume object definition apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce fc: wwids: [scsi-3600508b400105e210000900000490000] 1 targetWWNs: ['500a0981891b8dc5', '500a0981991b8dc5'] 2 lun: 2 3 fsType: ext4 1 World wide identifiers (WWIDs). Either FC wwids or a combination of FC targetWWNs and lun must be set, but not both simultaneously. The FC WWID identifier is recommended over the WWNs target because it is guaranteed to be unique for every storage device, and independent of the path that is used to access the device. The WWID identifier can be obtained by issuing a SCSI Inquiry to retrieve the Device Identification Vital Product Data ( page 0x83 ) or Unit Serial Number ( page 0x80 ). FC WWIDs are identified as /dev/disk/by-id/ to reference the data on the disk, even if the path to the device changes and even when accessing the device from different systems. 2 3 Fibre Channel WWNs are identified as /dev/disk/by-path/pci-<IDENTIFIER>-fc-0x<WWN>-lun-<LUN#> , but you do not need to provide any part of the path leading up to the WWN , including the 0x , and anything after, including the - (hyphen). Important Changing the value of the fstype parameter after the volume has been formatted and provisioned can result in data loss and pod failure. 4.5.1.1. Enforcing disk quotas Use LUN partitions to enforce disk quotas and size constraints. Each LUN is mapped to a single persistent volume, and unique names must be used for persistent volumes. Enforcing quotas in this way allows the end user to request persistent storage by a specific amount, such as 10Gi, and be matched with a corresponding volume of equal or greater capacity. 4.5.1.2. Fibre Channel volume security Users request storage with a persistent volume claim. This claim only lives in the user's namespace, and can only be referenced by a pod within that same namespace. Any attempt to access a persistent volume across a namespace causes the pod to fail. Each Fibre Channel LUN must be accessible by all nodes in the cluster. 4.6. Persistent storage using FlexVolume Important FlexVolume is a deprecated feature. 
Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. Out-of-tree Container Storage Interface (CSI) driver is the recommended way to write volume drivers in OpenShift Container Platform. Maintainers of FlexVolume drivers should implement a CSI driver and move users of FlexVolume to CSI. Users of FlexVolume should move their workloads to CSI driver. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. OpenShift Container Platform supports FlexVolume, an out-of-tree plugin that uses an executable model to interface with drivers. To use storage from a back-end that does not have a built-in plugin, you can extend OpenShift Container Platform through FlexVolume drivers and provide persistent storage to applications. Pods interact with FlexVolume drivers through the flexvolume in-tree plugin. Additional resources Expanding persistent volumes 4.6.1. About FlexVolume drivers A FlexVolume driver is an executable file that resides in a well-defined directory on all nodes in the cluster. OpenShift Container Platform calls the FlexVolume driver whenever it needs to mount or unmount a volume represented by a PersistentVolume object with flexVolume as the source. Important Attach and detach operations are not supported in OpenShift Container Platform for FlexVolume. 4.6.2. FlexVolume driver example The first command-line argument of the FlexVolume driver is always an operation name. Other parameters are specific to each operation. Most of the operations take a JavaScript Object Notation (JSON) string as a parameter. This parameter is a complete JSON string, and not the name of a file with the JSON data. The FlexVolume driver contains: All flexVolume.options . Some options from flexVolume prefixed by kubernetes.io/ , such as fsType and readwrite . The content of the referenced secret, if specified, prefixed by kubernetes.io/secret/ . FlexVolume driver JSON input example { "fooServer": "192.168.0.1:1234", 1 "fooVolumeName": "bar", "kubernetes.io/fsType": "ext4", 2 "kubernetes.io/readwrite": "ro", 3 "kubernetes.io/secret/<key name>": "<key value>", 4 "kubernetes.io/secret/<another key name>": "<another key value>", } 1 All options from flexVolume.options . 2 The value of flexVolume.fsType . 3 ro / rw based on flexVolume.readOnly . 4 All keys and their values from the secret referenced by flexVolume.secretRef . OpenShift Container Platform expects JSON data on standard output of the driver. When not specified, the output describes the result of the operation. FlexVolume driver default output example { "status": "<Success/Failure/Not supported>", "message": "<Reason for success/failure>" } Exit code of the driver should be 0 for success and 1 for error. Operations should be idempotent, which means that the mounting of an already mounted volume should result in a successful operation. 4.6.3. Installing FlexVolume drivers FlexVolume drivers that are used to extend OpenShift Container Platform are executed only on the node. To implement FlexVolumes, a list of operations to call and the installation path are all that is required. Prerequisites FlexVolume drivers must implement these operations: init Initializes the driver. It is called during initialization of all nodes. 
Arguments: none Executed on: node Expected output: default JSON mount Mounts a volume to directory. This can include anything that is necessary to mount the volume, including finding the device and then mounting the device. Arguments: <mount-dir> <json> Executed on: node Expected output: default JSON unmount Unmounts a volume from a directory. This can include anything that is necessary to clean up the volume after unmounting. Arguments: <mount-dir> Executed on: node Expected output: default JSON mountdevice Mounts a volume's device to a directory where individual pods can then bind mount. This call-out does not pass "secrets" specified in the FlexVolume spec. If your driver requires secrets, do not implement this call-out. Arguments: <mount-dir> <json> Executed on: node Expected output: default JSON unmountdevice Unmounts a volume's device from a directory. Arguments: <mount-dir> Executed on: node Expected output: default JSON All other operations should return JSON with {"status": "Not supported"} and exit code 1 . Procedure To install the FlexVolume driver: Ensure that the executable file exists on all nodes in the cluster. Place the executable file at the volume plugin path: /etc/kubernetes/kubelet-plugins/volume/exec/<vendor>~<driver>/<driver> . For example, to install the FlexVolume driver for the storage foo , place the executable file at: /etc/kubernetes/kubelet-plugins/volume/exec/openshift.com~foo/foo . 4.6.4. Consuming storage using FlexVolume drivers Each PersistentVolume object in OpenShift Container Platform represents one storage asset in the storage back-end, such as a volume. Procedure Use the PersistentVolume object to reference the installed storage. Persistent volume object definition using FlexVolume drivers example apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 1 spec: capacity: storage: 1Gi 2 accessModes: - ReadWriteOnce flexVolume: driver: openshift.com/foo 3 fsType: "ext4" 4 secretRef: foo-secret 5 readOnly: true 6 options: 7 fooServer: 192.168.0.1:1234 fooVolumeName: bar 1 The name of the volume. This is how it is identified through persistent volume claims or from pods. This name can be different from the name of the volume on back-end storage. 2 The amount of storage allocated to this volume. 3 The name of the driver. This field is mandatory. 4 The file system that is present on the volume. This field is optional. 5 The reference to a secret. Keys and values from this secret are provided to the FlexVolume driver on invocation. This field is optional. 6 The read-only flag. This field is optional. 7 The additional options for the FlexVolume driver. In addition to the flags specified by the user in the options field, the following flags are also passed to the executable: Note Secrets are passed only to mount or unmount call-outs. 4.7. Persistent storage using GCE Persistent Disk OpenShift Container Platform supports GCE Persistent Disk volumes (gcePD). You can provision your OpenShift Container Platform cluster with persistent storage using GCE. Some familiarity with Kubernetes and GCE is assumed. The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. GCE Persistent Disk volumes can be provisioned dynamically. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. 
Persistent volume claims are specific to a project or namespace and can be requested by users. Important OpenShift Container Platform defaults to using an in-tree (non-CSI) plugin to provision gcePD storage. In future OpenShift Container Platform versions, volumes provisioned using existing in-tree plugins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration . After full migration, in-tree plugins will eventually be removed in future versions of OpenShift Container Platform. Important High availability of storage in the infrastructure is left to the underlying storage provider. Additional resources GCE Persistent Disk 4.7.1. Creating the GCE storage class Storage classes are used to differentiate and delineate storage levels and usages. By defining a storage class, users can obtain dynamically provisioned persistent volumes. 4.7.2. Creating the persistent volume claim Prerequisites Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. Procedure In the OpenShift Container Platform console, click Storage Persistent Volume Claims . In the persistent volume claims overview, click Create Persistent Volume Claim . Define the desired options on the page that appears. Select the storage class created previously from the drop-down menu. Enter a unique name for the storage claim. Select the access mode. This determines the read and write access for the created storage claim. Define the size of the storage claim. Click Create to create the persistent volume claim and generate a persistent volume. 4.7.3. Volume format Before OpenShift Container Platform mounts the volume and passes it to a container, it checks that it contains a file system as specified by the fsType parameter in the persistent volume definition. If the device is not formatted with the file system, all data from the device is erased and the device is automatically formatted with the given file system. This allows using unformatted GCE volumes as persistent volumes, because OpenShift Container Platform formats them before the first use. 4.8. Persistent storage using hostPath A hostPath volume in an OpenShift Container Platform cluster mounts a file or directory from the host node's filesystem into your pod. Most pods will not need a hostPath volume, but it does offer a quick option for testing should an application require it. Important The cluster administrator must configure pods to run as privileged. This grants access to pods in the same node. 4.8.1. Overview OpenShift Container Platform supports hostPath mounting for development and testing on a single-node cluster. In a production cluster, you would not use hostPath. Instead, a cluster administrator would provision a network resource, such as a GCE Persistent Disk volume, an NFS share, or an Amazon EBS volume. Network resources support the use of storage classes to set up dynamic provisioning. A hostPath volume must be provisioned statically. Important Do not mount to the container root, / , or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged. It is safe to mount the host by using /host . 
The following example shows the / directory from the host being mounted into the container at /host . apiVersion: v1 kind: Pod metadata: name: test-host-mount spec: containers: - image: registry.access.redhat.com/ubi8/ubi name: test-container command: ['sh', '-c', 'sleep 3600'] volumeMounts: - mountPath: /host name: host-slash volumes: - name: host-slash hostPath: path: / type: '' 4.8.2. Statically provisioning hostPath volumes A pod that uses a hostPath volume must be referenced by manual (static) provisioning. Procedure Define the persistent volume (PV). Create a file, pv.yaml , with the PersistentVolume object definition: apiVersion: v1 kind: PersistentVolume metadata: name: task-pv-volume 1 labels: type: local spec: storageClassName: manual 2 capacity: storage: 5Gi accessModes: - ReadWriteOnce 3 persistentVolumeReclaimPolicy: Retain hostPath: path: "/mnt/data" 4 1 The name of the volume. This name is how it is identified by persistent volume claims or pods. 2 Used to bind persistent volume claim requests to this persistent volume. 3 The volume can be mounted as read-write by a single node. 4 The configuration file specifies that the volume is at /mnt/data on the cluster's node. Do not mount to the container root, / , or any path that is the same in the host and the container. This can corrupt your host system. It is safe to mount the host by using /host . Create the PV from the file: USD oc create -f pv.yaml Define the persistent volume claim (PVC). Create a file, pvc.yaml , with the PersistentVolumeClaim object definition: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: task-pvc-volume spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: manual Create the PVC from the file: USD oc create -f pvc.yaml 4.8.3. Mounting the hostPath share in a privileged pod After the persistent volume claim has been created, it can be used inside by an application. The following example demonstrates mounting this share inside of a pod. Prerequisites A persistent volume claim exists that is mapped to the underlying hostPath share. Procedure Create a privileged pod that mounts the existing persistent volume claim: apiVersion: v1 kind: Pod metadata: name: pod-name 1 spec: containers: ... securityContext: privileged: true 2 volumeMounts: - mountPath: /data 3 name: hostpath-privileged ... securityContext: {} volumes: - name: hostpath-privileged persistentVolumeClaim: claimName: task-pvc-volume 4 1 The name of the pod. 2 The pod must run as privileged to access the node's storage. 3 The path to mount the host path share inside the privileged pod. Do not mount to the container root, / , or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. It is safe to mount the host by using /host . 4 The name of the PersistentVolumeClaim object that has been previously created. 4.9. Persistent storage using iSCSI You can provision your OpenShift Container Platform cluster with persistent storage using iSCSI . Some familiarity with Kubernetes and iSCSI is assumed. The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. Important High-availability of storage in the infrastructure is left to the underlying storage provider. 
Important When you use iSCSI on Amazon Web Services, you must update the default security policy to include TCP traffic between nodes on the iSCSI ports. By default, they are ports 860 and 3260 . Important Users must ensure that the iSCSI initiator is already configured on all OpenShift Container Platform nodes by installing the iscsi-initiator-utils package and configuring their initiator name in /etc/iscsi/initiatorname.iscsi . The iscsi-initiator-utils package is already installed on deployments that use Red Hat Enterprise Linux CoreOS (RHCOS). For more information, see Managing Storage Devices . 4.9.1. Provisioning Verify that the storage exists in the underlying infrastructure before mounting it as a volume in OpenShift Container Platform. All that is required for the iSCSI is the iSCSI target portal, a valid iSCSI Qualified Name (IQN), a valid LUN number, the filesystem type, and the PersistentVolume API. PersistentVolume object definition apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.16.154.81:3260 iqn: iqn.2014-12.example.server:storage.target00 lun: 0 fsType: 'ext4' 4.9.2. Enforcing disk quotas Use LUN partitions to enforce disk quotas and size constraints. Each LUN is one persistent volume. Kubernetes enforces unique names for persistent volumes. Enforcing quotas in this way allows the end user to request persistent storage by a specific amount (for example, 10Gi ) and be matched with a corresponding volume of equal or greater capacity. 4.9.3. iSCSI volume security Users request storage with a PersistentVolumeClaim object. This claim only lives in the user's namespace and can only be referenced by a pod within that same namespace. Any attempt to access a persistent volume claim across a namespace causes the pod to fail. Each iSCSI LUN must be accessible by all nodes in the cluster. 4.9.3.1. Challenge Handshake Authentication Protocol (CHAP) configuration Optionally, OpenShift Container Platform can use CHAP to authenticate itself to iSCSI targets: apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.0.0.1:3260 iqn: iqn.2016-04.test.com:storage.target00 lun: 0 fsType: ext4 chapAuthDiscovery: true 1 chapAuthSession: true 2 secretRef: name: chap-secret 3 1 Enable CHAP authentication of iSCSI discovery. 2 Enable CHAP authentication of iSCSI session. 3 Specify name of Secrets object with user name + password. This Secret object must be available in all namespaces that can use the referenced volume. 4.9.4. iSCSI multipathing For iSCSI-based storage, you can configure multiple paths by using the same IQN for more than one target portal IP address. Multipathing ensures access to the persistent volume when one or more of the components in a path fail. To specify multi-paths in the pod specification, use the portals field. For example: apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.0.0.1:3260 portals: ['10.0.2.16:3260', '10.0.2.17:3260', '10.0.2.18:3260'] 1 iqn: iqn.2016-04.test.com:storage.target00 lun: 0 fsType: ext4 readOnly: false 1 Add additional target portals using the portals field. 4.9.5. 
iSCSI custom initiator IQN Configure the custom initiator iSCSI Qualified Name (IQN) if the iSCSI targets are restricted to certain IQNs, but the nodes that the iSCSI PVs are attached to are not guaranteed to have these IQNs. To specify a custom initiator IQN, use initiatorName field. apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.0.0.1:3260 portals: ['10.0.2.16:3260', '10.0.2.17:3260', '10.0.2.18:3260'] iqn: iqn.2016-04.test.com:storage.target00 lun: 0 initiatorName: iqn.2016-04.test.com:custom.iqn 1 fsType: ext4 readOnly: false 1 Specify the name of the initiator. 4.10. Persistent storage using local volumes OpenShift Container Platform can be provisioned with persistent storage by using local volumes. Local persistent volumes allow you to access local storage devices, such as a disk or partition, by using the standard persistent volume claim interface. Local volumes can be used without manually scheduling pods to nodes because the system is aware of the volume node constraints. However, local volumes are still subject to the availability of the underlying node and are not suitable for all applications. Note Local volumes can only be used as a statically created persistent volume. 4.10.1. Installing the Local Storage Operator The Local Storage Operator is not installed in OpenShift Container Platform by default. Use the following procedure to install and configure this Operator to enable local volumes in your cluster. Prerequisites Access to the OpenShift Container Platform web console or command-line interface (CLI). Procedure Create the openshift-local-storage project: USD oc adm new-project openshift-local-storage Optional: Allow local storage creation on infrastructure nodes. You might want to use the Local Storage Operator to create volumes on infrastructure nodes in support of components such as logging and monitoring. You must adjust the default node selector so that the Local Storage Operator includes the infrastructure nodes, and not just worker nodes. To block the Local Storage Operator from inheriting the cluster-wide default selector, enter the following command: USD oc annotate namespace openshift-local-storage openshift.io/node-selector='' Optional: Allow local storage to run on the management pool of CPUs in single-node deployment. Use the Local Storage Operator in single-node deployments and allow the use of CPUs that belong to the management pool. Perform this step on single-node installations that use management workload partitioning. To allow Local Storage Operator to run on the management CPU pool, run following commands: USD oc annotate namespace openshift-local-storage workload.openshift.io/allowed='management' From the UI To install the Local Storage Operator from the web console, follow these steps: Log in to the OpenShift Container Platform web console. Navigate to Operators OperatorHub . Type Local Storage into the filter box to locate the Local Storage Operator. Click Install . On the Install Operator page, select A specific namespace on the cluster . Select openshift-local-storage from the drop-down menu. Adjust the values for Update Channel and Approval Strategy to the values that you want. Click Install . Once finished, the Local Storage Operator will be listed in the Installed Operators section of the web console. From the CLI Install the Local Storage Operator from the CLI. 
Create an object YAML file to define an Operator group and subscription for the Local Storage Operator, such as openshift-local-storage.yaml : Example openshift-local-storage.yaml apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: local-operator-group namespace: openshift-local-storage spec: targetNamespaces: - openshift-local-storage --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: local-storage-operator namespace: openshift-local-storage spec: channel: stable installPlanApproval: Automatic 1 name: local-storage-operator source: redhat-operators sourceNamespace: openshift-marketplace 1 The user approval policy for an install plan. Create the Local Storage Operator object by entering the following command: USD oc apply -f openshift-local-storage.yaml At this point, the Operator Lifecycle Manager (OLM) is now aware of the Local Storage Operator. A ClusterServiceVersion (CSV) for the Operator should appear in the target namespace, and APIs provided by the Operator should be available for creation. Verify local storage installation by checking that all pods and the Local Storage Operator have been created: Check that all the required pods have been created: USD oc -n openshift-local-storage get pods Example output NAME READY STATUS RESTARTS AGE local-storage-operator-746bf599c9-vlt5t 1/1 Running 0 19m Check the ClusterServiceVersion (CSV) YAML manifest to see that the Local Storage Operator is available in the openshift-local-storage project: USD oc get csvs -n openshift-local-storage Example output NAME DISPLAY VERSION REPLACES PHASE local-storage-operator.4.2.26-202003230335 Local Storage 4.2.26-202003230335 Succeeded After all checks have passed, the Local Storage Operator is installed successfully. 4.10.2. Provisioning local volumes by using the Local Storage Operator Local volumes cannot be created by dynamic provisioning. Instead, persistent volumes can be created by the Local Storage Operator. The local volume provisioner looks for any file system or block volume devices at the paths specified in the defined resource. Prerequisites The Local Storage Operator is installed. You have a local disk that meets the following conditions: It is attached to a node. It is not mounted. It does not contain partitions. Procedure Create the local volume resource. This resource must define the nodes and paths to the local volumes. Note Do not use different storage class names for the same device. Doing so will create multiple persistent volumes (PVs). Example: Filesystem apiVersion: "local.storage.openshift.io/v1" kind: "LocalVolume" metadata: name: "local-disks" namespace: "openshift-local-storage" 1 spec: nodeSelector: 2 nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-140-183 - ip-10-0-158-139 - ip-10-0-164-33 storageClassDevices: - storageClassName: "local-sc" 3 volumeMode: Filesystem 4 fsType: xfs 5 devicePaths: 6 - /path/to/device 7 1 The namespace where the Local Storage Operator is installed. 2 Optional: A node selector containing a list of nodes where the local storage volumes are attached. This example uses the node hostnames, obtained from oc get node . If a value is not defined, then the Local Storage Operator will attempt to find matching disks on all available nodes. 3 The name of the storage class to use when creating persistent volume objects. The Local Storage Operator automatically creates the storage class if it does not exist. 
Be sure to use a storage class that uniquely identifies this set of local volumes. 4 The volume mode, either Filesystem or Block , that defines the type of local volumes. Note A raw block volume ( volumeMode: Block ) is not formatted with a file system. Use this mode only if any application running on the pod can use raw block devices. 5 The file system that is created when the local volume is mounted for the first time. 6 The path containing a list of local storage devices to choose from. 7 Replace this value with your actual local disks filepath to the LocalVolume resource by-id , such as /dev/disk/by-id/wwn . PVs are created for these local disks when the provisioner is deployed successfully. Note If you are running OpenShift Container Platform on IBM Z with RHEL KVM, you must assign a serial number to your VM disk. Otherwise, the VM disk can not be identified after reboot. You can use the virsh edit <VM> command to add the <serial>mydisk</serial> definition. Example: Block apiVersion: "local.storage.openshift.io/v1" kind: "LocalVolume" metadata: name: "local-disks" namespace: "openshift-local-storage" 1 spec: nodeSelector: 2 nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-136-143 - ip-10-0-140-255 - ip-10-0-144-180 storageClassDevices: - storageClassName: "localblock-sc" 3 volumeMode: Block 4 devicePaths: 5 - /path/to/device 6 1 The namespace where the Local Storage Operator is installed. 2 Optional: A node selector containing a list of nodes where the local storage volumes are attached. This example uses the node hostnames, obtained from oc get node . If a value is not defined, then the Local Storage Operator will attempt to find matching disks on all available nodes. 3 The name of the storage class to use when creating persistent volume objects. 4 The volume mode, either Filesystem or Block , that defines the type of local volumes. 5 The path containing a list of local storage devices to choose from. 6 Replace this value with your actual local disks filepath to the LocalVolume resource by-id , such as dev/disk/by-id/wwn . PVs are created for these local disks when the provisioner is deployed successfully. Note If you are running OpenShift Container Platform on IBM Z with RHEL KVM, you must assign a serial number to your VM disk. Otherwise, the VM disk can not be identified after reboot. You can use the virsh edit <VM> command to add the <serial>mydisk</serial> definition. Create the local volume resource in your OpenShift Container Platform cluster. Specify the file you just created: USD oc create -f <local-volume>.yaml Verify that the provisioner was created and that the corresponding daemon sets were created: USD oc get all -n openshift-local-storage Example output NAME READY STATUS RESTARTS AGE pod/diskmaker-manager-9wzms 1/1 Running 0 5m43s pod/diskmaker-manager-jgvjp 1/1 Running 0 5m43s pod/diskmaker-manager-tbdsj 1/1 Running 0 5m43s pod/local-storage-operator-7db4bd9f79-t6k87 1/1 Running 0 14m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/local-storage-operator-metrics ClusterIP 172.30.135.36 <none> 8383/TCP,8686/TCP 14m NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/diskmaker-manager 3 3 3 3 3 <none> 5m43s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/local-storage-operator 1/1 1 1 14m NAME DESIRED CURRENT READY AGE replicaset.apps/local-storage-operator-7db4bd9f79 1 1 1 14m Note the desired and current number of daemon set processes. 
A desired count of 0 indicates that the label selectors were invalid. Verify that the persistent volumes were created: USD oc get pv Example output NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE local-pv-1cec77cf 100Gi RWO Delete Available local-sc 88m local-pv-2ef7cd2a 100Gi RWO Delete Available local-sc 82m local-pv-3fa1c73 100Gi RWO Delete Available local-sc 48m Important Editing the LocalVolume object does not change the fsType or volumeMode of existing persistent volumes because doing so might result in a destructive operation. 4.10.3. Provisioning local volumes without the Local Storage Operator Local volumes cannot be created by dynamic provisioning. Instead, persistent volumes can be created by defining the persistent volume (PV) in an object definition. The local volume provisioner looks for any file system or block volume devices at the paths specified in the defined resource. Important Manual provisioning of PVs includes the risk of potential data leaks across PV reuse when PVCs are deleted. The Local Storage Operator is recommended for automating the life cycle of devices when provisioning local PVs. Prerequisites Local disks are attached to the OpenShift Container Platform nodes. Procedure Define the PV. Create a file, such as example-pv-filesystem.yaml or example-pv-block.yaml , with the PersistentVolume object definition. This resource must define the nodes and paths to the local volumes. Note Do not use different storage class names for the same device. Doing so will create multiple PVs. example-pv-filesystem.yaml apiVersion: v1 kind: PersistentVolume metadata: name: example-pv-filesystem spec: capacity: storage: 100Gi volumeMode: Filesystem 1 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete storageClassName: local-storage 2 local: path: /dev/xvdf 3 nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - example-node 1 The volume mode, either Filesystem or Block , that defines the type of PVs. 2 The name of the storage class to use when creating PV resources. Use a storage class that uniquely identifies this set of PVs. 3 The path containing a list of local storage devices to choose from, or a directory. You can only specify a directory with Filesystem volumeMode . Note A raw block volume ( volumeMode: block ) is not formatted with a file system. Use this mode only if any application running on the pod can use raw block devices. example-pv-block.yaml apiVersion: v1 kind: PersistentVolume metadata: name: example-pv-block spec: capacity: storage: 100Gi volumeMode: Block 1 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete storageClassName: local-storage 2 local: path: /dev/xvdf 3 nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - example-node 1 The volume mode, either Filesystem or Block , that defines the type of PVs. 2 The name of the storage class to use when creating PV resources. Be sure to use a storage class that uniquely identifies this set of PVs. 3 The path containing a list of local storage devices to choose from. Create the PV resource in your OpenShift Container Platform cluster. 
Specify the file you just created: USD oc create -f <example-pv>.yaml Verify that the local PV was created: USD oc get pv Example output NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE example-pv-filesystem 100Gi RWO Delete Available local-storage 3m47s example-pv1 1Gi RWO Delete Bound local-storage/pvc1 local-storage 12h example-pv2 1Gi RWO Delete Bound local-storage/pvc2 local-storage 12h example-pv3 1Gi RWO Delete Bound local-storage/pvc3 local-storage 12h 4.10.4. Creating the local volume persistent volume claim Local volumes must be statically created as a persistent volume claim (PVC) to be accessed by the pod. Prerequisites Persistent volumes have been created using the local volume provisioner. Procedure Create the PVC using the corresponding storage class: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: local-pvc-name 1 spec: accessModes: - ReadWriteOnce volumeMode: Filesystem 2 resources: requests: storage: 100Gi 3 storageClassName: local-sc 4 1 Name of the PVC. 2 The type of the PVC. Defaults to Filesystem . 3 The amount of storage available to the PVC. 4 Name of the storage class required by the claim. Create the PVC in the OpenShift Container Platform cluster, specifying the file you just created: USD oc create -f <local-pvc>.yaml 4.10.5. Attach the local claim After a local volume has been mapped to a persistent volume claim it can be specified inside of a resource. Prerequisites A persistent volume claim exists in the same namespace. Procedure Include the defined claim in the resource spec. The following example declares the persistent volume claim inside a pod: apiVersion: v1 kind: Pod spec: # ... containers: volumeMounts: - name: local-disks 1 mountPath: /data 2 volumes: - name: local-disks persistentVolumeClaim: claimName: local-pvc-name 3 # ... 1 The name of the volume to mount. 2 The path inside the pod where the volume is mounted. Do not mount to the container root, / , or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. It is safe to mount the host by using /host . 3 The name of the existing persistent volume claim to use. Create the resource in the OpenShift Container Platform cluster, specifying the file you just created: USD oc create -f <local-pod>.yaml 4.10.6. Automating discovery and provisioning for local storage devices The Local Storage Operator automates local storage discovery and provisioning. With this feature, you can simplify installation when dynamic provisioning is not available during deployment, such as with bare metal, VMware, or AWS store instances with attached devices. Important Automatic discovery and provisioning is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Important Automatic discovery and provisioning is fully supported when used to deploy Red Hat OpenShift Data Foundation on-premise or with platform-agnostic deployment. 
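The web console steps that follow create a LocalVolumeDiscovery resource named auto-discover-devices behind the scenes. For reference when working from the CLI, the following is a minimal sketch of such an object; the worker node names are placeholders, and the layout assumes that the discovery API accepts the same nodeSelector block used by the LocalVolume and LocalVolumeSet examples in this section:
apiVersion: local.storage.openshift.io/v1alpha1
kind: LocalVolumeDiscovery
metadata:
  name: auto-discover-devices        # the instance name that the web console also creates
  namespace: openshift-local-storage # the namespace where the Local Storage Operator is installed
spec:
  nodeSelector:                      # optional; omitting it corresponds to the All nodes option
    nodeSelectorTerms:
    - matchExpressions:
      - key: kubernetes.io/hostname
        operator: In
        values:
        - worker-0                   # placeholder node names
        - worker-1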
Use the following procedure to automatically discover local devices, and to automatically provision local volumes for selected devices. Warning Use the LocalVolumeSet object with caution. When you automatically provision persistent volumes (PVs) from local disks, the local PVs might claim all devices that match. If you are using a LocalVolumeSet object, make sure the Local Storage Operator is the only entity managing local devices on the node. Creating multiple instances of a LocalVolumeSet that target a node more than once is not supported. Prerequisites You have cluster administrator permissions. You have installed the Local Storage Operator. You have attached local disks to OpenShift Container Platform nodes. You have access to the OpenShift Container Platform web console and the oc command-line interface (CLI). Procedure To enable automatic discovery of local devices from the web console: In the Administrator perspective, navigate to Operators Installed Operators and click on the Local Volume Discovery tab. Click Create Local Volume Discovery . Select either All nodes or Select nodes , depending on whether you want to discover available disks on all or specific nodes. Note Only worker nodes are available, regardless of whether you filter using All nodes or Select nodes . Click Create . A local volume discovery instance named auto-discover-devices is displayed. To display a continuous list of available devices on a node: Log in to the OpenShift Container Platform web console. Navigate to Compute Nodes . Click the node name that you want to open. The "Node Details" page is displayed. Select the Disks tab to display the list of the selected devices. The device list updates continuously as local disks are added or removed. You can filter the devices by name, status, type, model, capacity, and mode. To automatically provision local volumes for the discovered devices from the web console: Navigate to Operators Installed Operators and select Local Storage from the list of Operators. Select Local Volume Set Create Local Volume Set . Enter a volume set name and a storage class name. Choose All nodes or Select nodes to apply filters accordingly. Note Only worker nodes are available, regardless of whether you filter using All nodes or Select nodes . Select the disk type, mode, size, and limit you want to apply to the local volume set, and click Create . A message displays after several minutes, indicating that the "Operator reconciled successfully." Alternatively, to provision local volumes for the discovered devices from the CLI: Create an object YAML file to define the local volume set, such as local-volume-set.yaml , as shown in the following example: apiVersion: local.storage.openshift.io/v1alpha1 kind: LocalVolumeSet metadata: name: example-autodetect spec: nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - worker-0 - worker-1 storageClassName: example-storageclass 1 volumeMode: Filesystem fsType: ext4 maxDeviceCount: 10 deviceInclusionSpec: deviceTypes: 2 - disk - part deviceMechanicalProperties: - NonRotational minSize: 10G maxSize: 100G models: - SAMSUNG - Crucial_CT525MX3 vendors: - ATA - ST2000LM 1 Determines the storage class that is created for persistent volumes that are provisioned from discovered devices. The Local Storage Operator automatically creates the storage class if it does not exist. Be sure to use a storage class that uniquely identifies this set of local volumes. 
2 When using the local volume set feature, the Local Storage Operator does not support the use of logical volume management (LVM) devices. Create the local volume set object: USD oc apply -f local-volume-set.yaml Verify that the local persistent volumes were dynamically provisioned based on the storage class: USD oc get pv Example output NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE local-pv-1cec77cf 100Gi RWO Delete Available example-storageclass 88m local-pv-2ef7cd2a 100Gi RWO Delete Available example-storageclass 82m local-pv-3fa1c73 100Gi RWO Delete Available example-storageclass 48m Note Results are deleted after they are removed from the node. Symlinks must be manually removed. 4.10.7. Using tolerations with Local Storage Operator pods Taints can be applied to nodes to prevent them from running general workloads. To allow the Local Storage Operator to use tainted nodes, you must add tolerations to the Pod or DaemonSet definition. This allows the created resources to run on these tainted nodes. You apply tolerations to the Local Storage Operator pod through the LocalVolume resource and apply taints to a node through the node specification. A taint on a node instructs the node to repel all pods that do not tolerate the taint. Using a specific taint that is not on other pods ensures that the Local Storage Operator pod can also run on that node. Important Taints and tolerations consist of a key, value, and effect. As an argument, it is expressed as key=value:effect . An operator allows you to leave one of these parameters empty. Prerequisites The Local Storage Operator is installed. Local disks are attached to OpenShift Container Platform nodes with a taint. Tainted nodes are expected to provision local storage. Procedure To configure local volumes for scheduling on tainted nodes: Modify the YAML file that defines the Pod and add the LocalVolume spec, as shown in the following example: apiVersion: "local.storage.openshift.io/v1" kind: "LocalVolume" metadata: name: "local-disks" namespace: "openshift-local-storage" spec: tolerations: - key: localstorage 1 operator: Equal 2 value: "localstorage" 3 storageClassDevices: - storageClassName: "localblock-sc" volumeMode: Block 4 devicePaths: 5 - /dev/xvdg 1 Specify the key that you added to the node. 2 Specify the Equal operator to require the key / value parameters to match. If operator is Exists , the system checks that the key exists and ignores the value. If operator is Equal , then the key and value must match. 3 Specify the value local of the tainted node. 4 The volume mode, either Filesystem or Block , defining the type of the local volumes. 5 The path containing a list of local storage devices to choose from. Optional: To create local persistent volumes on only tainted nodes, modify the YAML file and add the LocalVolume spec, as shown in the following example: spec: tolerations: - key: node-role.kubernetes.io/master operator: Exists The defined tolerations will be passed to the resulting daemon sets, allowing the diskmaker and provisioner pods to be created for nodes that contain the specified taints. 4.10.8. 
Local Storage Operator Metrics OpenShift Container Platform provides the following metrics for the Local Storage Operator: lso_discovery_disk_count : total number of discovered devices on each node lso_lvset_provisioned_PV_count : total number of PVs created by LocalVolumeSet objects lso_lvset_unmatched_disk_count : total number of disks that Local Storage Operator did not select for provisioning because of mismatching criteria lso_lvset_orphaned_symlink_count : number of devices with PVs that no longer match LocalVolumeSet object criteria lso_lv_orphaned_symlink_count : number of devices with PVs that no longer match LocalVolume object criteria lso_lv_provisioned_PV_count : total number of provisioned PVs for LocalVolume To use these metrics, be sure to: Enable support for monitoring when installing the Local Storage Operator. When upgrading to OpenShift Container Platform 4.9 or later, enable metric support manually by adding the operator-metering=true label to the namespace. For more information about metrics, see Managing metrics . 4.10.9. Deleting the Local Storage Operator resources 4.10.9.1. Removing a local volume or local volume set Occasionally, local volumes and local volume sets must be deleted. While removing the entry in the resource and deleting the persistent volume is typically enough, if you want to reuse the same device path or have it managed by a different storage class, then additional steps are needed. Note The following procedure outlines an example for removing a local volume. The same procedure can also be used to remove symlinks for a local volume set custom resource. Prerequisites The persistent volume must be in a Released or Available state. Warning Deleting a persistent volume that is still in use can result in data loss or corruption. Procedure Edit the previously created local volume to remove any unwanted disks. Edit the cluster resource: USD oc edit localvolume <name> -n openshift-local-storage Navigate to the lines under devicePaths , and delete any representing unwanted disks. Delete any persistent volumes created. USD oc delete pv <pv-name> Delete directory and included symlinks on the node. Warning The following step involves accessing a node as the root user. Modifying the state of the node beyond the steps in this procedure could result in cluster instability. USD oc debug node/<node-name> -- chroot /host rm -rf /mnt/local-storage/<sc-name> 1 1 The name of the storage class used to create the local volumes. 4.10.9.2. Uninstalling the Local Storage Operator To uninstall the Local Storage Operator, you must remove the Operator and all created resources in the openshift-local-storage project. Warning Uninstalling the Local Storage Operator while local storage PVs are still in use is not recommended. While the PVs will remain after the Operator's removal, there might be indeterminate behavior if the Operator is uninstalled and reinstalled without removing the PVs and local storage resources. Prerequisites Access to the OpenShift Container Platform web console. Procedure Delete any local volume resources installed in the project, such as localvolume , localvolumeset , and localvolumediscovery : USD oc delete localvolume --all --all-namespaces USD oc delete localvolumeset --all --all-namespaces USD oc delete localvolumediscovery --all --all-namespaces Uninstall the Local Storage Operator from the web console. Log in to the OpenShift Container Platform web console. Navigate to Operators Installed Operators . 
Type Local Storage into the filter box to locate the Local Storage Operator. Click the Options menu at the end of the Local Storage Operator. Click Uninstall Operator . Click Remove in the window that appears. The PVs created by the Local Storage Operator will remain in the cluster until deleted. After these volumes are no longer in use, delete them by running the following command: USD oc delete pv <pv-name> Delete the openshift-local-storage project: USD oc delete project openshift-local-storage 4.11. Persistent storage using NFS OpenShift Container Platform clusters can be provisioned with persistent storage using NFS. Persistent volumes (PVs) and persistent volume claims (PVCs) provide a convenient method for sharing a volume across a project. While the NFS-specific information contained in a PV definition could also be defined directly in a Pod definition, doing so does not create the volume as a distinct cluster resource, making the volume more susceptible to conflicts. Additional resources Mounting NFS shares 4.11.1. Provisioning Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. To provision NFS volumes, a list of NFS servers and export paths are all that is required. Procedure Create an object definition for the PV: apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 1 spec: capacity: storage: 5Gi 2 accessModes: - ReadWriteOnce 3 nfs: 4 path: /tmp 5 server: 172.17.0.2 6 persistentVolumeReclaimPolicy: Retain 7 1 The name of the volume. This is the PV identity in various oc <command> pod commands. 2 The amount of storage allocated to this volume. 3 Though this appears to be related to controlling access to the volume, it is actually used similarly to labels and used to match a PVC to a PV. Currently, no access rules are enforced based on the accessModes . 4 The volume type being used, in this case the nfs plugin. 5 The path that is exported by the NFS server. 6 The hostname or IP address of the NFS server. 7 The reclaim policy for the PV. This defines what happens to a volume when released. Note Each NFS volume must be mountable by all schedulable nodes in the cluster. Verify that the PV was created: USD oc get pv Example output NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE pv0001 <none> 5Gi RWO Available 31s Create a persistent volume claim that binds to the new PV: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: nfs-claim1 spec: accessModes: - ReadWriteOnce 1 resources: requests: storage: 5Gi 2 volumeName: pv0001 storageClassName: "" 1 The access modes do not enforce security, but rather act as labels to match a PV to a PVC. 2 This claim looks for PVs offering 5Gi or greater capacity. Verify that the persistent volume claim was created: USD oc get pvc Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE nfs-claim1 Bound pv0001 5Gi RWO 2m 4.11.2. Enforcing disk quotas You can use disk partitions to enforce disk quotas and size constraints. Each partition can be its own export. Each export is one PV. OpenShift Container Platform enforces unique names for PVs, but the uniqueness of the NFS volume's server and path is up to the administrator. Enforcing quotas in this way allows the developer to request persistent storage by a specific amount, such as 10Gi, and be matched with a corresponding volume of equal or greater capacity. 4.11.3. NFS volume security This section covers NFS volume security, including matching permissions and SELinux considerations. 
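The subsections that follow explain the permission model in detail. As an orientation, the following is a minimal sketch of the shape such a pod typically takes: it consumes the nfs-claim1 claim from the provisioning example and requests access through a supplemental group. The pod name, image, mount path, and group ID are illustrative values only:
apiVersion: v1
kind: Pod
metadata:
  name: nfs-app                           # illustrative name
spec:
  securityContext:
    supplementalGroups: [5555]            # matches the group ID on the exported NFS directory
  containers:
  - name: app
    image: registry.access.redhat.com/ubi8/ubi
    command: ['sh', '-c', 'sleep 3600']
    volumeMounts:
    - name: nfs-vol
      mountPath: /opt/app-data            # do not mount at / or a path shared with the host
  volumes:
  - name: nfs-vol
    persistentVolumeClaim:
      claimName: nfs-claim1               # the claim created in the provisioning section
If the security context constraint (SCC) in use enforces group ID ranges, the supplemental group must also fall within the allowed range, as described in the Group IDs subsection.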
The user is expected to understand the basics of POSIX permissions, process UIDs, supplemental groups, and SELinux. Developers request NFS storage by referencing either a PVC by name or the NFS volume plugin directly in the volumes section of their Pod definition. The /etc/exports file on the NFS server contains the accessible NFS directories. The target NFS directory has POSIX owner and group IDs. The OpenShift Container Platform NFS plugin mounts the container's NFS directory with the same POSIX ownership and permissions found on the exported NFS directory. However, the container is not run with its effective UID equal to the owner of the NFS mount, which is the desired behavior. As an example, if the target NFS directory appears on the NFS server as: USD ls -lZ /opt/nfs -d Example output drwxrws---. nfsnobody 5555 unconfined_u:object_r:usr_t:s0 /opt/nfs USD id nfsnobody Example output uid=65534(nfsnobody) gid=65534(nfsnobody) groups=65534(nfsnobody) Then the container must match SELinux labels, and either run with a UID of 65534 , the nfsnobody owner, or with 5555 in its supplemental groups to access the directory. Note The owner ID of 65534 is used as an example. Even though NFS's root_squash maps root , uid 0 , to nfsnobody , uid 65534 , NFS exports can have arbitrary owner IDs. Owner 65534 is not required for NFS exports. 4.11.3.1. Group IDs The recommended way to handle NFS access, assuming it is not an option to change permissions on the NFS export, is to use supplemental groups. Supplemental groups in OpenShift Container Platform are used for shared storage, of which NFS is an example. In contrast, block storage such as iSCSI uses the fsGroup SCC strategy and the fsGroup value in the securityContext of the pod. Note To gain access to persistent storage, it is generally preferable to use supplemental group IDs versus user IDs. Because the group ID on the example target NFS directory is 5555 , the pod can define that group ID using supplementalGroups under the securityContext definition of the pod. For example: spec: containers: - name: ... securityContext: 1 supplementalGroups: [5555] 2 1 securityContext must be defined at the pod level, not under a specific container. 2 An array of GIDs defined for the pod. In this case, there is one element in the array. Additional GIDs would be comma-separated. Assuming there are no custom SCCs that might satisfy the pod requirements, the pod likely matches the restricted SCC. This SCC has the supplementalGroups strategy set to RunAsAny , meaning that any supplied group ID is accepted without range checking. As a result, the above pod passes admissions and is launched. However, if group ID range checking is desired, a custom SCC is the preferred solution. A custom SCC can be created such that minimum and maximum group IDs are defined, group ID range checking is enforced, and a group ID of 5555 is allowed. Note To use a custom SCC, you must first add it to the appropriate service account. For example, use the default service account in the given project unless another has been specified on the Pod specification. 4.11.3.2. User IDs User IDs can be defined in the container image or in the Pod definition. Note It is generally preferable to use supplemental group IDs to gain access to persistent storage versus using user IDs. In the example target NFS directory shown above, the container needs its UID set to 65534 , ignoring group IDs for the moment, so the following can be added to the Pod definition: spec: containers: 1 - name: ... 
securityContext: runAsUser: 65534 2 1 Pods contain a securityContext definition specific to each container and a pod's securityContext which applies to all containers defined in the pod. 2 65534 is the nfsnobody user. Assuming that the project is default and the SCC is restricted , the user ID of 65534 as requested by the pod is not allowed. Therefore, the pod fails for the following reasons: It requests 65534 as its user ID. All SCCs available to the pod are examined to see which SCC allows a user ID of 65534 . While all policies of the SCCs are checked, the focus here is on user ID. Because all available SCCs use MustRunAsRange for their runAsUser strategy, UID range checking is required. 65534 is not included in the SCC or project's user ID range. It is generally considered a good practice not to modify the predefined SCCs. The preferred way to fix this situation is to create a custom SCC. A custom SCC can be created such that minimum and maximum user IDs are defined, UID range checking is still enforced, and the UID of 65534 is allowed. Note To use a custom SCC, you must first add it to the appropriate service account. For example, use the default service account in the given project unless another has been specified on the Pod specification. 4.11.3.3. SELinux Red Hat Enterprise Linux (RHEL) and Red Hat Enterprise Linux CoreOS (RHCOS) systems are configured to use SELinux on remote NFS servers by default. For non-RHEL and non-RHCOS systems, SELinux does not allow writing from a pod to a remote NFS server. The NFS volume mounts correctly but it is read-only. You will need to enable the correct SELinux permissions by using the following procedure. Prerequisites The container-selinux package must be installed. This package provides the virt_use_nfs SELinux boolean. Procedure Enable the virt_use_nfs boolean using the following command. The -P option makes this boolean persistent across reboots. # setsebool -P virt_use_nfs 1 4.11.3.4. Export settings To enable arbitrary container users to read and write the volume, each exported volume on the NFS server should conform to the following conditions: Every export must be exported using the following format: /<example_fs> *(rw,root_squash) The firewall must be configured to allow traffic to the mount point. For NFSv4, configure the default port 2049 ( nfs ). NFSv4 # iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT For NFSv3, there are three ports to configure: 2049 ( nfs ), 20048 ( mountd ), and 111 ( portmapper ). NFSv3 # iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT # iptables -I INPUT 1 -p tcp --dport 20048 -j ACCEPT # iptables -I INPUT 1 -p tcp --dport 111 -j ACCEPT The NFS export and directory must be set up so that they are accessible by the target pods. Either set the export to be owned by the container's primary UID, or supply the pod group access using supplementalGroups , as shown in the group IDs above. 4.11.4. Reclaiming resources NFS implements the OpenShift Container Platform Recyclable plugin interface. Automatic processes handle reclamation tasks based on policies set on each persistent volume. By default, PVs are set to Retain . Once the claim to a PV is deleted and the PV is released, the PV object should not be reused. Instead, a new PV should be created with the same basic volume details as the original.
For example, the administrator creates a PV named nfs1 : apiVersion: v1 kind: PersistentVolume metadata: name: nfs1 spec: capacity: storage: 1Mi accessModes: - ReadWriteMany nfs: server: 192.168.1.1 path: "/" The user creates PVC1 , which binds to nfs1 . The user then deletes PVC1 , releasing claim to nfs1 . This results in nfs1 being Released . If the administrator wants to make the same NFS share available, they should create a new PV with the same NFS server details, but a different PV name: apiVersion: v1 kind: PersistentVolume metadata: name: nfs2 spec: capacity: storage: 1Mi accessModes: - ReadWriteMany nfs: server: 192.168.1.1 path: "/" Deleting the original PV and re-creating it with the same name is discouraged. Attempting to manually change the status of a PV from Released to Available causes errors and potential data loss. 4.11.5. Additional configuration and troubleshooting Depending on what version of NFS is being used and how it is configured, there may be additional configuration steps needed for proper export and security mapping. The following are some that may apply: NFSv4 mount incorrectly shows all files with ownership of nobody:nobody Could be attributed to the ID mapping settings, found in /etc/idmapd.conf on your NFS. See this Red Hat Solution . Disabling ID mapping on NFSv4 On both the NFS client and server, run: # echo 'Y' > /sys/module/nfsd/parameters/nfs4_disable_idmapping 4.12. Red Hat OpenShift Data Foundation Red Hat OpenShift Data Foundation is a provider of agnostic persistent storage for OpenShift Container Platform supporting file, block, and object storage, either in-house or in hybrid clouds. As a Red Hat storage solution, Red Hat OpenShift Data Foundation is completely integrated with OpenShift Container Platform for deployment, management, and monitoring. Red Hat OpenShift Data Foundation provides its own documentation library. The complete set of Red Hat OpenShift Data Foundation documentation is available at https://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation . Important OpenShift Data Foundation on top of Red Hat Hyperconverged Infrastructure (RHHI) for Virtualization, which uses hyperconverged nodes that host virtual machines installed with OpenShift Container Platform, is not a supported configuration. For more information about supported platforms, see the Red Hat OpenShift Data Foundation Supportability and Interoperability Guide . 4.13. Persistent storage using VMware vSphere volumes OpenShift Container Platform allows use of VMware vSphere's Virtual Machine Disk (VMDK) volumes. You can provision your OpenShift Container Platform cluster with persistent storage using VMware vSphere. Some familiarity with Kubernetes and VMware vSphere is assumed. VMware vSphere volumes can be provisioned dynamically. OpenShift Container Platform creates the disk in vSphere and attaches this disk to the correct image. Note OpenShift Container Platform provisions new volumes as independent persistent disks that can freely attach and detach the volume on any node in the cluster. Consequently, you cannot back up volumes that use snapshots, or restore volumes from snapshots. See Snapshot Limitations for more information. The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. 
Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users. Important OpenShift Container Platform defaults to using an in-tree (non-CSI) plugin to provision vSphere storage. In future OpenShift Container Platform versions, volumes provisioned using existing in-tree plugins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration . After full migration, in-tree plugins will eventually be removed in future versions of OpenShift Container Platform. Additional resources VMware vSphere 4.13.1. Dynamically provisioning VMware vSphere volumes Dynamically provisioning VMware vSphere volumes is the recommended method. 4.13.2. Prerequisites An OpenShift Container Platform cluster installed on a VMware vSphere version that meets the requirements for the components that you use. See Installing a cluster on vSphere for information about vSphere version support. You can use either of the following procedures to dynamically provision these volumes using the default storage class. 4.13.2.1. Dynamically provisioning VMware vSphere volumes using the UI OpenShift Container Platform installs a default storage class, named thin , that uses the thin disk format for provisioning volumes. Prerequisites Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. Procedure In the OpenShift Container Platform console, click Storage Persistent Volume Claims . In the persistent volume claims overview, click Create Persistent Volume Claim . Define the required options on the resulting page. Select the thin storage class. Enter a unique name for the storage claim. Select the access mode to determine the read and write access for the created storage claim. Define the size of the storage claim. Click Create to create the persistent volume claim and generate a persistent volume. 4.13.2.2. Dynamically provisioning VMware vSphere volumes using the CLI OpenShift Container Platform installs a default StorageClass, named thin , that uses the thin disk format for provisioning volumes. Prerequisites Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. Procedure (CLI) You can define a VMware vSphere PersistentVolumeClaim by creating a file, pvc.yaml , with the following contents: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: pvc 1 spec: accessModes: - ReadWriteOnce 2 resources: requests: storage: 1Gi 3 1 A unique name that represents the persistent volume claim. 2 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 3 The size of the persistent volume claim. Enter the following command to create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml 4.13.3. Statically provisioning VMware vSphere volumes To statically provision VMware vSphere volumes you must create the virtual machine disks for reference by the persistent volume framework. 
Prerequisites Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. Procedure Create the virtual machine disks. Virtual machine disks (VMDKs) must be created manually before statically provisioning VMware vSphere volumes. Use either of the following methods: Create using vmkfstools . Access ESX through Secure Shell (SSH) and then use following command to create a VMDK volume: USD vmkfstools -c <size> /vmfs/volumes/<datastore-name>/volumes/<disk-name>.vmdk Create using vmware-diskmanager : USD shell vmware-vdiskmanager -c -t 0 -s <size> -a lsilogic <disk-name>.vmdk Create a persistent volume that references the VMDKs. Create a file, pv1.yaml , with the PersistentVolume object definition: apiVersion: v1 kind: PersistentVolume metadata: name: pv1 1 spec: capacity: storage: 1Gi 2 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain vsphereVolume: 3 volumePath: "[datastore1] volumes/myDisk" 4 fsType: ext4 5 1 The name of the volume. This name is how it is identified by persistent volume claims or pods. 2 The amount of storage allocated to this volume. 3 The volume type used, with vsphereVolume for vSphere volumes. The label is used to mount a vSphere VMDK volume into pods. The contents of a volume are preserved when it is unmounted. The volume type supports VMFS and VSAN datastore. 4 The existing VMDK volume to use. If you used vmkfstools , you must enclose the datastore name in square brackets, [] , in the volume definition, as shown previously. 5 The file system type to mount. For example, ext4, xfs, or other file systems. Important Changing the value of the fsType parameter after the volume is formatted and provisioned can result in data loss and pod failure. Create the PersistentVolume object from the file: USD oc create -f pv1.yaml Create a persistent volume claim that maps to the persistent volume you created in the step. Create a file, pvc1.yaml , with the PersistentVolumeClaim object definition: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc1 1 spec: accessModes: - ReadWriteOnce 2 resources: requests: storage: "1Gi" 3 volumeName: pv1 4 1 A unique name that represents the persistent volume claim. 2 The access mode of the persistent volume claim. With ReadWriteOnce, the volume can be mounted with read and write permissions by a single node. 3 The size of the persistent volume claim. 4 The name of the existing persistent volume. Create the PersistentVolumeClaim object from the file: USD oc create -f pvc1.yaml 4.13.3.1. Formatting VMware vSphere volumes Before OpenShift Container Platform mounts the volume and passes it to a container, it checks that the volume contains a file system that is specified by the fsType parameter value in the PersistentVolume (PV) definition. If the device is not formatted with the file system, all data from the device is erased, and the device is automatically formatted with the specified file system. Because OpenShift Container Platform formats them before the first use, you can use unformatted vSphere volumes as PVs.
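After the claim is bound, a pod consumes the vSphere-backed storage in the same way as any other persistent volume claim. The following is a minimal sketch that mounts the statically provisioned pvc1 claim from the procedure above; the pod name, image, and mount path are illustrative:
apiVersion: v1
kind: Pod
metadata:
  name: vsphere-app                      # illustrative name
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi8/ubi
    command: ['sh', '-c', 'sleep 3600']
    volumeMounts:
    - name: vsphere-vol
      mountPath: /mnt/storage            # where the VMDK-backed volume appears in the container
  volumes:
  - name: vsphere-vol
    persistentVolumeClaim:
      claimName: pvc1                    # the claim created in the previous step
Because the persistent volume specifies fsType: ext4, the volume is mounted with that file system; as noted above, changing the fsType value after the volume has been formatted can result in data loss.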
[ "cat << EOF | oc create -f - apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 parameters: fsType: ext4 2 encrypted: \"true\" kmsKeyId: keyvalue 3 provisioner: ebs.csi.aws.com reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer EOF", "cat << EOF | oc create -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: mypvc spec: accessModes: - ReadWriteOnce volumeMode: Filesystem storageClassName: <storage-class-name> resources: requests: storage: 1Gi EOF", "cat << EOF | oc create -f - kind: Pod metadata: name: mypod spec: containers: - name: httpd image: quay.io/centos7/httpd-24-centos7 ports: - containerPort: 80 volumeMounts: - mountPath: /mnt/storage name: data volumes: - name: data persistentVolumeClaim: claimName: mypvc EOF", "oc edit machineset <machine-set-name>", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: metadata: labels: disk: ultrassd 1 providerSpec: value: ultraSSDCapability: Enabled 2", "oc create -f <machine-set-name>.yaml", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: ultra-disk-sc 1 parameters: cachingMode: None diskIopsReadWrite: \"2000\" 2 diskMbpsReadWrite: \"320\" 3 kind: managed skuname: UltraSSD_LRS provisioner: disk.csi.azure.com 4 reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer 5", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: ultra-disk 1 spec: accessModes: - ReadWriteOnce storageClassName: ultra-disk-sc 2 resources: requests: storage: 4Gi 3", "apiVersion: v1 kind: Pod metadata: name: nginx-ultra spec: nodeSelector: disk: ultrassd 1 containers: - name: nginx-ultra image: alpine:latest command: - \"sleep\" - \"infinity\" volumeMounts: - mountPath: \"/mnt/azure\" name: volume volumes: - name: volume persistentVolumeClaim: claimName: ultra-disk 2", "oc get machines", "oc debug node/<node-name> -- chroot /host lsblk", "apiVersion: v1 kind: Pod metadata: name: ssd-benchmark1 spec: containers: - name: ssd-benchmark1 image: nginx ports: - containerPort: 80 name: \"http-server\" volumeMounts: - name: lun0p1 mountPath: \"/tmp\" volumes: - name: lun0p1 hostPath: path: /var/lib/lun0p1 type: DirectoryOrCreate nodeSelector: disktype: ultrassd", "StorageAccountType UltraSSD_LRS can be used only when additionalCapabilities.ultraSSDEnabled is set.", "oc -n <stuck_pod_namespace> describe pod <stuck_pod_name>", "oc create secret generic <secret-name> --from-literal=azurestorageaccountname=<storage-account> \\ 1 --from-literal=azurestorageaccountkey=<storage-account-key> 2", "apiVersion: \"v1\" kind: \"PersistentVolume\" metadata: name: \"pv0001\" 1 spec: capacity: storage: \"5Gi\" 2 accessModes: - \"ReadWriteOnce\" storageClassName: azure-file-sc azureFile: secretName: <secret-name> 3 shareName: share-1 4 readOnly: false", "apiVersion: \"v1\" kind: \"PersistentVolumeClaim\" metadata: name: \"claim1\" 1 spec: accessModes: - \"ReadWriteOnce\" resources: requests: storage: \"5Gi\" 2 storageClassName: azure-file-sc 3 volumeName: \"pv0001\" 4", "apiVersion: v1 kind: Pod metadata: name: pod-name 1 spec: containers: volumeMounts: - mountPath: \"/data\" 2 name: azure-file-share volumes: - name: azure-file-share persistentVolumeClaim: claimName: claim1 3", "apiVersion: \"v1\" kind: \"PersistentVolume\" metadata: name: \"pv0001\" 1 spec: capacity: storage: \"5Gi\" 2 accessModes: - \"ReadWriteOnce\" cinder: 3 fsType: \"ext3\" 4 volumeID: \"f37a03aa-6212-4c62-a805-9ce139fab180\" 5", "oc create -f cinder-persistentvolume.yaml", "oc create serviceaccount 
<service_account>", "oc adm policy add-scc-to-user <new_scc> -z <service_account> -n <project>", "apiVersion: v1 kind: ReplicationController metadata: name: frontend-1 spec: replicas: 1 1 selector: 2 name: frontend template: 3 metadata: labels: 4 name: frontend 5 spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always serviceAccountName: <service_account> 6 securityContext: fsGroup: 7777 7", "apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce fc: wwids: [scsi-3600508b400105e210000900000490000] 1 targetWWNs: ['500a0981891b8dc5', '500a0981991b8dc5'] 2 lun: 2 3 fsType: ext4", "{ \"fooServer\": \"192.168.0.1:1234\", 1 \"fooVolumeName\": \"bar\", \"kubernetes.io/fsType\": \"ext4\", 2 \"kubernetes.io/readwrite\": \"ro\", 3 \"kubernetes.io/secret/<key name>\": \"<key value>\", 4 \"kubernetes.io/secret/<another key name>\": \"<another key value>\", }", "{ \"status\": \"<Success/Failure/Not supported>\", \"message\": \"<Reason for success/failure>\" }", "apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 1 spec: capacity: storage: 1Gi 2 accessModes: - ReadWriteOnce flexVolume: driver: openshift.com/foo 3 fsType: \"ext4\" 4 secretRef: foo-secret 5 readOnly: true 6 options: 7 fooServer: 192.168.0.1:1234 fooVolumeName: bar", "\"fsType\":\"<FS type>\", \"readwrite\":\"<rw>\", \"secret/key1\":\"<secret1>\" \"secret/keyN\":\"<secretN>\"", "apiVersion: v1 kind: Pod metadata: name: test-host-mount spec: containers: - image: registry.access.redhat.com/ubi8/ubi name: test-container command: ['sh', '-c', 'sleep 3600'] volumeMounts: - mountPath: /host name: host-slash volumes: - name: host-slash hostPath: path: / type: ''", "apiVersion: v1 kind: PersistentVolume metadata: name: task-pv-volume 1 labels: type: local spec: storageClassName: manual 2 capacity: storage: 5Gi accessModes: - ReadWriteOnce 3 persistentVolumeReclaimPolicy: Retain hostPath: path: \"/mnt/data\" 4", "oc create -f pv.yaml", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: task-pvc-volume spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: manual", "oc create -f pvc.yaml", "apiVersion: v1 kind: Pod metadata: name: pod-name 1 spec: containers: securityContext: privileged: true 2 volumeMounts: - mountPath: /data 3 name: hostpath-privileged securityContext: {} volumes: - name: hostpath-privileged persistentVolumeClaim: claimName: task-pvc-volume 4", "apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.16.154.81:3260 iqn: iqn.2014-12.example.server:storage.target00 lun: 0 fsType: 'ext4'", "apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.0.0.1:3260 iqn: iqn.2016-04.test.com:storage.target00 lun: 0 fsType: ext4 chapAuthDiscovery: true 1 chapAuthSession: true 2 secretRef: name: chap-secret 3", "apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.0.0.1:3260 portals: ['10.0.2.16:3260', '10.0.2.17:3260', '10.0.2.18:3260'] 1 iqn: iqn.2016-04.test.com:storage.target00 lun: 0 fsType: ext4 readOnly: false", "apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.0.0.1:3260 portals: 
['10.0.2.16:3260', '10.0.2.17:3260', '10.0.2.18:3260'] iqn: iqn.2016-04.test.com:storage.target00 lun: 0 initiatorName: iqn.2016-04.test.com:custom.iqn 1 fsType: ext4 readOnly: false", "oc adm new-project openshift-local-storage", "oc annotate namespace openshift-local-storage openshift.io/node-selector=''", "oc annotate namespace openshift-local-storage workload.openshift.io/allowed='management'", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: local-operator-group namespace: openshift-local-storage spec: targetNamespaces: - openshift-local-storage --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: local-storage-operator namespace: openshift-local-storage spec: channel: stable installPlanApproval: Automatic 1 name: local-storage-operator source: redhat-operators sourceNamespace: openshift-marketplace", "oc apply -f openshift-local-storage.yaml", "oc -n openshift-local-storage get pods", "NAME READY STATUS RESTARTS AGE local-storage-operator-746bf599c9-vlt5t 1/1 Running 0 19m", "oc get csvs -n openshift-local-storage", "NAME DISPLAY VERSION REPLACES PHASE local-storage-operator.4.2.26-202003230335 Local Storage 4.2.26-202003230335 Succeeded", "apiVersion: \"local.storage.openshift.io/v1\" kind: \"LocalVolume\" metadata: name: \"local-disks\" namespace: \"openshift-local-storage\" 1 spec: nodeSelector: 2 nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-140-183 - ip-10-0-158-139 - ip-10-0-164-33 storageClassDevices: - storageClassName: \"local-sc\" 3 volumeMode: Filesystem 4 fsType: xfs 5 devicePaths: 6 - /path/to/device 7", "apiVersion: \"local.storage.openshift.io/v1\" kind: \"LocalVolume\" metadata: name: \"local-disks\" namespace: \"openshift-local-storage\" 1 spec: nodeSelector: 2 nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-136-143 - ip-10-0-140-255 - ip-10-0-144-180 storageClassDevices: - storageClassName: \"localblock-sc\" 3 volumeMode: Block 4 devicePaths: 5 - /path/to/device 6", "oc create -f <local-volume>.yaml", "oc get all -n openshift-local-storage", "NAME READY STATUS RESTARTS AGE pod/diskmaker-manager-9wzms 1/1 Running 0 5m43s pod/diskmaker-manager-jgvjp 1/1 Running 0 5m43s pod/diskmaker-manager-tbdsj 1/1 Running 0 5m43s pod/local-storage-operator-7db4bd9f79-t6k87 1/1 Running 0 14m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/local-storage-operator-metrics ClusterIP 172.30.135.36 <none> 8383/TCP,8686/TCP 14m NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/diskmaker-manager 3 3 3 3 3 <none> 5m43s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/local-storage-operator 1/1 1 1 14m NAME DESIRED CURRENT READY AGE replicaset.apps/local-storage-operator-7db4bd9f79 1 1 1 14m", "oc get pv", "NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE local-pv-1cec77cf 100Gi RWO Delete Available local-sc 88m local-pv-2ef7cd2a 100Gi RWO Delete Available local-sc 82m local-pv-3fa1c73 100Gi RWO Delete Available local-sc 48m", "apiVersion: v1 kind: PersistentVolume metadata: name: example-pv-filesystem spec: capacity: storage: 100Gi volumeMode: Filesystem 1 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete storageClassName: local-storage 2 local: path: /dev/xvdf 3 nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - example-node", "apiVersion: v1 kind: PersistentVolume metadata: name: 
example-pv-block spec: capacity: storage: 100Gi volumeMode: Block 1 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete storageClassName: local-storage 2 local: path: /dev/xvdf 3 nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - example-node", "oc create -f <example-pv>.yaml", "oc get pv", "NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE example-pv-filesystem 100Gi RWO Delete Available local-storage 3m47s example-pv1 1Gi RWO Delete Bound local-storage/pvc1 local-storage 12h example-pv2 1Gi RWO Delete Bound local-storage/pvc2 local-storage 12h example-pv3 1Gi RWO Delete Bound local-storage/pvc3 local-storage 12h", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: local-pvc-name 1 spec: accessModes: - ReadWriteOnce volumeMode: Filesystem 2 resources: requests: storage: 100Gi 3 storageClassName: local-sc 4", "oc create -f <local-pvc>.yaml", "apiVersion: v1 kind: Pod spec: containers: volumeMounts: - name: local-disks 1 mountPath: /data 2 volumes: - name: local-disks persistentVolumeClaim: claimName: local-pvc-name 3", "oc create -f <local-pod>.yaml", "apiVersion: local.storage.openshift.io/v1alpha1 kind: LocalVolumeSet metadata: name: example-autodetect spec: nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - worker-0 - worker-1 storageClassName: example-storageclass 1 volumeMode: Filesystem fsType: ext4 maxDeviceCount: 10 deviceInclusionSpec: deviceTypes: 2 - disk - part deviceMechanicalProperties: - NonRotational minSize: 10G maxSize: 100G models: - SAMSUNG - Crucial_CT525MX3 vendors: - ATA - ST2000LM", "oc apply -f local-volume-set.yaml", "oc get pv", "NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE local-pv-1cec77cf 100Gi RWO Delete Available example-storageclass 88m local-pv-2ef7cd2a 100Gi RWO Delete Available example-storageclass 82m local-pv-3fa1c73 100Gi RWO Delete Available example-storageclass 48m", "apiVersion: \"local.storage.openshift.io/v1\" kind: \"LocalVolume\" metadata: name: \"local-disks\" namespace: \"openshift-local-storage\" spec: tolerations: - key: localstorage 1 operator: Equal 2 value: \"localstorage\" 3 storageClassDevices: - storageClassName: \"localblock-sc\" volumeMode: Block 4 devicePaths: 5 - /dev/xvdg", "spec: tolerations: - key: node-role.kubernetes.io/master operator: Exists", "oc edit localvolume <name> -n openshift-local-storage", "oc delete pv <pv-name>", "oc debug node/<node-name> -- chroot /host rm -rf /mnt/local-storage/<sc-name> 1", "oc delete localvolume --all --all-namespaces oc delete localvolumeset --all --all-namespaces oc delete localvolumediscovery --all --all-namespaces", "oc delete pv <pv-name>", "oc delete project openshift-local-storage", "apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 1 spec: capacity: storage: 5Gi 2 accessModes: - ReadWriteOnce 3 nfs: 4 path: /tmp 5 server: 172.17.0.2 6 persistentVolumeReclaimPolicy: Retain 7", "oc get pv", "NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE pv0001 <none> 5Gi RWO Available 31s", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: nfs-claim1 spec: accessModes: - ReadWriteOnce 1 resources: requests: storage: 5Gi 2 volumeName: pv0001 storageClassName: \"\"", "oc get pvc", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE nfs-claim1 Bound pv0001 5Gi RWO 2m", "ls -lZ /opt/nfs -d", "drwxrws---. 
nfsnobody 5555 unconfined_u:object_r:usr_t:s0 /opt/nfs", "id nfsnobody", "uid=65534(nfsnobody) gid=65534(nfsnobody) groups=65534(nfsnobody)", "spec: containers: - name: securityContext: 1 supplementalGroups: [5555] 2", "spec: containers: 1 - name: securityContext: runAsUser: 65534 2", "setsebool -P virt_use_nfs 1", "/<example_fs> *(rw,root_squash)", "iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT", "iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT", "iptables -I INPUT 1 -p tcp --dport 20048 -j ACCEPT", "iptables -I INPUT 1 -p tcp --dport 111 -j ACCEPT", "apiVersion: v1 kind: PersistentVolume metadata: name: nfs1 spec: capacity: storage: 1Mi accessModes: - ReadWriteMany nfs: server: 192.168.1.1 path: \"/\"", "apiVersion: v1 kind: PersistentVolume metadata: name: nfs2 spec: capacity: storage: 1Mi accessModes: - ReadWriteMany nfs: server: 192.168.1.1 path: \"/\"", "echo 'Y' > /sys/module/nfsd/parameters/nfs4_disable_idmapping", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: pvc 1 spec: accessModes: - ReadWriteOnce 2 resources: requests: storage: 1Gi 3", "oc create -f pvc.yaml", "vmkfstools -c <size> /vmfs/volumes/<datastore-name>/volumes/<disk-name>.vmdk", "shell vmware-vdiskmanager -c -t 0 -s <size> -a lsilogic <disk-name>.vmdk", "apiVersion: v1 kind: PersistentVolume metadata: name: pv1 1 spec: capacity: storage: 1Gi 2 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain vsphereVolume: 3 volumePath: \"[datastore1] volumes/myDisk\" 4 fsType: ext4 5", "oc create -f pv1.yaml", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc1 1 spec: accessModes: - ReadWriteOnce 2 resources: requests: storage: \"1Gi\" 3 volumeName: pv1 4", "oc create -f pvc1.yaml" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/storage/configuring-persistent-storage
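For completeness, a pod can consume the statically provisioned claim described above by referencing it in its volumes section. The following is a minimal sketch; the pod name, image, and mount path are illustrative and are not taken from the original procedure:

apiVersion: v1
kind: Pod
metadata:
  name: vsphere-pod                                  # hypothetical name for illustration
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi9/ubi       # any image works; ubi9 is used elsewhere in this guide
    command: ["sleep", "infinity"]
    volumeMounts:
    - mountPath: /data                               # assumed mount point
      name: vsphere-storage
  volumes:
  - name: vsphere-storage
    persistentVolumeClaim:
      claimName: pvc1                                # the claim created from pvc1.yaml above

Create the pod with oc create -f pod.yaml; once it is running, the vSphere VMDK backing pv1 is mounted at /data inside the container.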
Appendix A. Health messages for the Ceph File System
Appendix A. Health messages for the Ceph File System Cluster health checks The Ceph Monitor daemons generate health messages in response to certain states of the Metadata Server (MDS). Below is the list of the health messages and their explanation: mds rank(s) <ranks> have failed One or more MDS ranks are not currently assigned to any MDS daemon. The storage cluster will not recover until a suitable replacement daemon starts. mds rank(s) <ranks> are damaged One or more MDS ranks has encountered severe damage to its stored metadata, and cannot start again until the metadata is repaired. mds cluster is degraded One or more MDS ranks are not currently up and running, clients might pause metadata I/O until this situation is resolved. This includes ranks being failed or damaged, and includes ranks which are running on an MDS but are not in the active state yet - for example, ranks in the replay state. mds <names> are laggy The MDS daemons are supposed to send beacon messages to the monitor in an interval specified by the mds_beacon_interval option, the default is 4 seconds. If an MDS daemon fails to send a message within the time specified by the mds_beacon_grace option, the default is 15 seconds. The Ceph Monitor marks the MDS daemon as laggy and automatically replaces it with a standby daemon if any is available. Daemon-reported health checks The MDS daemons can identify a variety of unwanted conditions, and return them in the output of the ceph status command. These conditions have human readable messages, and also have a unique code starting MDS_HEALTH , which appears in JSON output. Below is the list of the daemon messages, their codes, and explanation. "Behind on trimming... " Code: MDS_HEALTH_TRIM CephFS maintains a metadata journal that is divided into log segments. The length of journal (in number of segments) is controlled by the mds_log_max_segments setting. When the number of segments exceeds that setting, the MDS starts writing back metadata so that it can remove (trim) the oldest segments. If this process is too slow, or a software bug is preventing trimming, then this health message appears. The threshold for this message to appear is for the number of segments to be double mds_log_max_segments . Note Increasing mds_log_max_segments is recommended if the trim warning is encountered. However, ensure that this configuration is reset back to its default when the cluster health recovers and the trim warning is seen no more. It is recommended to set mds_log_max_segments to 256 to allow the MDS to catch up with trimming. "Client <name> failing to respond to capability release" Code: MDS_HEALTH_CLIENT_LATE_RELEASE, MDS_HEALTH_CLIENT_LATE_RELEASE_MANY CephFS clients are issued capabilities by the MDS. The capabilities work like locks. Sometimes, for example, when another client needs access, the MDS requests clients to release their capabilities. If the client is unresponsive, it might fail to do so promptly, or fail to do so at all. This message appears if a client has taken a longer time to comply than the time specified by the mds_revoke_cap_timeout option (default is 60 seconds). "Client <name> failing to respond to cache pressure" Code: MDS_HEALTH_CLIENT_RECALL, MDS_HEALTH_CLIENT_RECALL_MANY Clients maintain a metadata cache. Items, such as inodes, in the client cache, are also pinned in the MDS cache. When the MDS needs to shrink its cache to stay within its own cache size limits, the MDS sends messages to clients to shrink their caches too. 
If a client is unresponsive, it can prevent the MDS from properly staying within its cache size, and the MDS might eventually run out of memory and terminate unexpectedly. This message appears if a client has taken more time to comply than the time specified by the mds_recall_state_timeout option (default is 60 seconds). See Metadata Server cache size limits section for details. "Client name failing to advance its oldest client/flush tid" Code: MDS_HEALTH_CLIENT_OLDEST_TID, MDS_HEALTH_CLIENT_OLDEST_TID_MANY The CephFS protocol for communicating between clients and MDS servers uses a field called oldest tid to inform the MDS of which client requests are fully complete so that the MDS can forget about them. If an unresponsive client is failing to advance this field, the MDS might be prevented from properly cleaning up resources used by client requests. This message appears if a client has more requests than the number specified by the max_completed_requests option (default is 100000) that are complete on the MDS side but have not yet been accounted for in the client's oldest tid value. "Metadata damage detected" Code: MDS_HEALTH_DAMAGE Corrupt or missing metadata was encountered when reading from the metadata pool. This message indicates that the damage was sufficiently isolated for the MDS to continue operating, although client accesses to the damaged subtree return I/O errors. Use the damage ls administration socket command to view details on the damage. This message appears as soon as any damage is encountered. "MDS in read-only mode" Code: MDS_HEALTH_READ_ONLY The MDS has entered into read-only mode and will return the EROFS error codes to client operations that attempt to modify any metadata. The MDS enters into read-only mode: If it encounters a write error while writing to the metadata pool. If the administrator forces the MDS to enter into read-only mode by using the force_readonly administration socket command. "<N> slow requests are blocked" Code: MDS_HEALTH_SLOW_REQUEST One or more client requests have not been completed promptly, indicating that the MDS is either running very slowly, or encountering a bug. Use the ops administration socket command to list outstanding metadata operations. This message appears if any client requests have taken more time than the value specified by the mds_op_complaint_time option (default is 30 seconds). "Too many inodes in cache" Code: MDS_HEALTH_CACHE_OVERSIZED The MDS has failed to trim its cache to comply with the limit set by the administrator. If the MDS cache becomes too large, the daemon might exhaust available memory and terminate unexpectedly. By default, this message appears if the MDS cache size is 50% greater than its limit. Additional Resources See the Metadata Server cache size limits section in the Red Hat Ceph Storage File System Guide for details.
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/file_system_guide/health-messages-for-the-ceph-file-system_fs
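As a practical illustration of the trim guidance above, the mds_log_max_segments setting can be raised centrally while the MDS catches up and then reverted once the warning clears. This is a minimal sketch assuming the centralized Ceph configuration database is in use; adapt it to your deployment's conventions:

# raise the limit so the MDS can catch up with trimming
ceph config set mds mds_log_max_segments 256

# watch cluster health until the MDS_HEALTH_TRIM message clears
ceph health detail

# remove the override so the option returns to its default
ceph config rm mds mds_log_max_segments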
Overview of Containers in Red Hat Systems
Overview of Containers in Red Hat Systems Red Hat Enterprise Linux Atomic Host 7 Red Hat Atomic Host Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_atomic_host/7/html/overview_of_containers_in_red_hat_systems/index
Chapter 12. Deploying IPv6 on Red Hat Quay on OpenShift Container Platform
Chapter 12. Deploying IPv6 on Red Hat Quay on OpenShift Container Platform Note Currently, deploying IPv6 on the Red Hat Quay on OpenShift Container Platform is not supported on IBM Power and IBM Z. Your Red Hat Quay on OpenShift Container Platform deployment can now be served in locations that only support IPv6, such as Telco and Edge environments. For a list of known limitations, see IPv6 limitations 12.1. Enabling the IPv6 protocol family Use the following procedure to enable IPv6 support on your Red Hat Quay deployment. Prerequisites You have updated Red Hat Quay to at least version 3.8. Your host and container software platform (Docker, Podman) must be configured to support IPv6. Procedure In your deployment's config.yaml file, add the FEATURE_LISTEN_IP_VERSION parameter and set it to IPv6 , for example: # ... FEATURE_GOOGLE_LOGIN: false FEATURE_INVITE_ONLY_USER_CREATION: false FEATURE_LISTEN_IP_VERSION: IPv6 FEATURE_MAILING: false FEATURE_NONSUPERUSER_TEAM_SYNCING_SETUP: false # ... Start, or restart, your Red Hat Quay deployment. Check that your deployment is listening to IPv6 by entering the following command: USD curl <quay_endpoint>/health/instance {"data":{"services":{"auth":true,"database":true,"disk_space":true,"registry_gunicorn":true,"service_key":true,"web_gunicorn":true}},"status_code":200} After enabling IPv6 in your deployment's config.yaml , all Red Hat Quay features can be used as normal, so long as your environment is configured to use IPv6 and is not hindered by the IPv6 and dual-stack limitations . Warning If your environment is configured to IPv4, but the FEATURE_LISTEN_IP_VERSION configuration field is set to IPv6 , Red Hat Quay will fail to deploy. 12.2. IPv6 limitations Currently, attempting to configure your Red Hat Quay deployment with the common Microsoft Azure Blob Storage configuration will not work on IPv6 single stack environments. Because the endpoint of Microsoft Azure Blob Storage does not support IPv6, there is no workaround in place for this issue. For more information, see PROJQUAY-4433 . Currently, attempting to configure your Red Hat Quay deployment with Amazon S3 CloudFront will not work on IPv6 single stack environments. Because the endpoint of Amazon S3 CloudFront does not support IPv6, there is no workaround in place for this issue. For more information, see PROJQUAY-4470 .
[ "FEATURE_GOOGLE_LOGIN: false FEATURE_INVITE_ONLY_USER_CREATION: false FEATURE_LISTEN_IP_VERSION: IPv6 FEATURE_MAILING: false FEATURE_NONSUPERUSER_TEAM_SYNCING_SETUP: false", "curl <quay_endpoint>/health/instance {\"data\":{\"services\":{\"auth\":true,\"database\":true,\"disk_space\":true,\"registry_gunicorn\":true,\"service_key\":true,\"web_gunicorn\":true}},\"status_code\":200}" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/red_hat_quay_operator_features/operator-ipv6-dual-stack
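If you want to confirm that the endpoint is reachable over IPv6 specifically, curl can be pointed at an IPv6 literal instead of a hostname. This is a sketch only; the address and port below are placeholders for your own deployment:

curl -g 'http://[2001:db8::10]:80/health/instance'

The -g option disables curl's URL globbing so the square brackets around the IPv6 address are passed through literally; a healthy instance returns the same JSON status document shown above.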
3.2. Guest Security Recommended Practices
3.2. Guest Security Recommended Practices All of the recommended practices for securing a Red Hat Enterprise Linux system documented in the Red Hat Enterprise Linux Security Guide apply to conventional, non-virtualized systems as well as systems installed as a virtualized guest. However, there are a few security practices which are of critical importance when running guests in a virtualized environment: With all management of the guest likely taking place remotely, ensure that the management of the system takes place only over secured network channels. Tools such as SSH and network protocols such as TLS or SSL provide both authentication and data encryption to ensure that only approved administrators can manage the system remotely. Some virtualization technologies use special guest agents or drivers to enable some virtualization specific features. Ensure that these agents and applications are secured using the standard Red Hat Enterprise Linux security features, such as SELinux. In virtualized environments, a greater risk exists of sensitive data being accessed outside the protection boundaries of the guest system. Protect stored sensitive data using encryption tools such as dm-crypt and GnuPG ; although special care needs to be taken to ensure the confidentiality of the encryption keys. Note Using page deduplication technology such as Kernel Same-page Merging (KSM) may introduce side channels that could potentially be used to leak information across guests. In situations where this is a concern, KSM can be disabled either globally or on a per-guest basis. For more information about KSM, see the Red Hat Enterprise Linux 7 Virtualization Tuning and Optimization Guide .
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_security_guide/sect-virtualization_security_guide-guest_security-guest_security_recommended_practices
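As a concrete illustration of the KSM note above, the following sketch shows both the global and the per-guest approach on a Red Hat Enterprise Linux 7 host; the guest name is a placeholder:

# disable KSM globally on the host
systemctl stop ksm ksmtuned
systemctl disable ksm ksmtuned

# disable page sharing for a single guest by adding this to its
# libvirt XML (virsh edit <guest-name>):
#   <memoryBacking>
#     <nosharepages/>
#   </memoryBacking>

See the Virtualization Tuning and Optimization Guide referenced above for the authoritative procedure.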
Chapter 18. Running Skopeo, Buildah, and Podman in a container
Chapter 18. Running Skopeo, Buildah, and Podman in a container You can run Skopeo, Buildah, and Podman in a container. With Skopeo, you can inspect images on a remote registry without having to download the entire image with all its layers. You can also use Skopeo for copying images, signing images, syncing images, and converting images across different formats and layer compressions. Buildah facilitates building OCI container images. With Buildah, you can create a working container, either from scratch or using an image as a starting point. You can create an image either from a working container or using the instructions in a Containerfile . You can mount and unmount a working container's root filesystem. With Podman, you can manage containers and images, volumes mounted into those containers, and pods made from groups of containers. Podman is based on a libpod library for container lifecycle management. The libpod library provides APIs for managing containers, pods, container images, and volumes. Reasons to run Buildah, Skopeo, and Podman in a container: CI/CD system : Podman and Skopeo : You can run a CI/CD system inside of Kubernetes or use OpenShift to build your container images, and possibly distribute those images across different container registries. To integrate Skopeo into a Kubernetes workflow, you must run it in a container. Buildah : You want to build OCI/container images within a Kubernetes or OpenShift CI/CD systems that are constantly building images. Previously, a Docker socket was used for connecting to the container engine and performing a docker build command. This was the equivalent of giving root access to the system without requiring a password which is not secure. For this reason, use Buildah in a container instead. Different versions : All : You are running an older operating system on the host but you want to run the latest version of Skopeo, Buildah, or Podman. The solution is to run the container tools in a container. For example, this is useful for running the latest version of the container tools provided in Red Hat Enterprise Linux 8 on a Red Hat Enterprise Linux 7 container host which does not have access to the newest versions natively. HPC environment : All : A common restriction in HPC environments is that non-root users are not allowed to install packages on the host. When you run Skopeo, Buildah, or Podman in a container, you can perform these specific tasks as a non-root user. 18.1. Running Skopeo in a container You can inspect a remote container image using Skopeo. Running Skopeo in a container means that the container root filesystem is isolated from the host root filesystem. To share or copy files between the host and container, you have to mount files and directories. Prerequisites The container-tools meta-package is installed. Procedure Log in to the registry.redhat.io registry: Get the registry.redhat.io/rhel9/skopeo container image: Inspect a remote container image registry.access.redhat.com/ubi9/ubi using Skopeo: The --rm option removes the registry.redhat.io/rhel9/skopeo image after the container exits. Additional resources How to run skopeo in a container 18.2. Running Skopeo in a container using credentials Working with container registries requires an authentication to access and alter data. Skopeo supports various ways to specify credentials. With this approach you can specify credentials on the command line using the --cred USERNAME[:PASSWORD] option. Prerequisites The container-tools meta-package is installed. 
Procedure Inspect a remote container image using Skopeo against a locked registry: Additional resources How to run skopeo in a container 18.3. Running Skopeo in a container using authfiles You can use an authentication file (authfile) to specify credentials. The skopeo login command logs into the specific registry and stores the authentication token in the authfile. The advantage of using authfiles is preventing the need to repeatedly enter credentials. When running on the same host, all container tools such as Skopeo, Buildah, and Podman share the same authfile. When running Skopeo in a container, you have to either share the authfile on the host by volume-mounting the authfile in the container, or you have to reauthenticate within the container. Prerequisites The container-tools meta-package is installed. Procedure Inspect a remote container image using Skopeo against a locked registry: The -v USDAUTHFILE:/auth.json option volume-mounts an authfile at /auth.json within the container. Skopeo can now access the authentication tokens in the authfile on the host and get secure access to the registry. Other Skopeo commands work similarly, for example: Use the skopeo-copy command to specify credentials on the command line for the source and destination image using the --source-creds and --dest-creds options. It also reads the /auth.json authfile. If you want to specify separate authfiles for the source and destination image, use the --source-authfile and --dest-authfile options and volume-mount those authfiles from the host into the container. Additional resources How to run skopeo in a container 18.4. Copying container images to or from the host Skopeo, Buildah, and Podman share the same local container-image storage. If you want to copy containers to or from the host container storage, you need to mount it into the Skopeo container. Note The path to the host container storage differs between root ( /var/lib/containers/storage ) and non-root users ( USDHOME/.local/share/containers/storage ). Prerequisites The container-tools meta-package is installed. Procedure Copy the registry.access.redhat.com/ubi9/ubi image into your local container storage: The --privileged option disables all security mechanisms. Red Hat recommends only using this option in trusted environments. To avoid disabling security mechanisms, export the images to a tarball or any other path-based image transport and mount them in the Skopeo container: USD podman save --format oci-archive -o oci.tar USDIMAGE USD podman run --rm -v oci.tar:/oci.tar registry.redhat.io/rhel9/skopeo copy oci-archive:/oci.tar USDDESTINATION Optional: List images in local storage: Additional resources How to run skopeo in a container 18.5. Running Buildah in a container The procedure demonstrates how to run Buildah in a container and create a working container based on an image. Prerequisites The container-tools meta-package is installed. Procedure Log in to the registry.redhat.io registry: Pull and run the registry.redhat.io/rhel9/buildah image: The --rm option removes the registry.redhat.io/rhel9/buildah image after the container exits. The --device option adds a host device to the container. The sys_chroot - capability to change to a different root directory. It is not included in the default capabilities of a container. 
Create a new container using a registry.access.redhat.com/ubi9 image: Run the ls / command inside the ubi9-working-container container: Optional: List all images in a local storage: Optional: List the working containers and their base images: Optional: Push the registry.access.redhat.com/ubi9 image to the a local registry located on registry.example.com : Additional resources Best practices for running Buildah in a container 18.6. Privileged and unprivileged Podman containers By default, Podman containers are unprivileged and cannot, for example, modify parts of the operating system on the host. This is because by default a container is only allowed limited access to devices. The following list emphasizes important properties of privileged containers. You can run the privileged container using the podman run --privileged <image_name> command. A privileged container is given the same access to devices as the user launching the container. A privileged container disables the security features that isolate the container from the host. Dropped Capabilities, limited devices, read-only mount points, Apparmor/SELinux separation, and Seccomp filters are all disabled. A privileged container cannot have more privileges than the account that launched them. Additional resources How to use the --privileged flag with container engines podman-run man page on your system 18.7. Running Podman with extended privileges If you cannot run your workloads in a rootless environment, you need to run these workloads as a root user. Running a container with extended privileges should be done judiciously, because it disables all security features. Prerequisites The container-tools meta-package is installed. Procedure Run the Podman container in the Podman container: Run the outer container named privileged_podman based on the registry.access.redhat.com/ubi9/podman image. The --privileged option disables the security features that isolate the container from the host. Run podman run ubi9 echo hello command to create the inner container based on the ubi9 image. Notice that the ubi9 short image name was resolved as an alias. As a result, the registry.access.redhat.com/ubi9:latest image is pulled. Verification List all containers: Additional resources How to use Podman inside of a container podman-run man page on your system 18.8. Running Podman with less privileges You can run two nested Podman containers without the --privileged option. Running the container without the --privileged option is a more secure option. This can be useful when you want to try out different versions of Podman in the most secure way possible. Prerequisites The container-tools meta-package is installed. Procedure Run two nested containers: Run the outer container named unprivileged_podman based on the registry.access.redhat.com/ubi9/podman image. The --security-opt label=disable option disables SELinux separation on the host Podman. SELinux does not allow containerized processes to mount all of the file systems required to run inside a container. The --user podman option automatically causes the Podman inside the outer container to run within the user namespace. The --device /dev/fuse option uses the fuse-overlayfs package inside the container. This option adds /dev/fuse to the outer container, so that Podman inside the container can use it. Run podman run ubi9 echo hello command to create the inner container based on the ubi9 image. Notice that the ubi9 short image name was resolved as an alias. 
As a result, the registry.access.redhat.com/ubi9:latest image is pulled. Verification List all containers: 18.9. Building a container inside a Podman container You can run a container in a container using Podman. This example shows how to use Podman to build and run another container from within this container. The container will run "Moon-buggy", a simple text-based game. Prerequisites The container-tools meta-package is installed. You are logged in to the registry.redhat.io registry: Procedure Run the container based on registry.redhat.io/rhel9/podman image: Run the outer container named podman_container based on the registry.redhat.io/rhel9/podman image. The --it option specifies that you want to run an interactive bash shell within a container. The --privileged option disables the security features that isolate the container from the host. Create a Containerfile inside the podman_container container: The commands in the Containerfile cause the following build command to: Build a container from the registry.access.redhat.com/ubi9/ubi image. Install the epel-release-latest-8.noarch.rpm package. Install the moon-buggy package. Set the container command. Build a new container image named moon-buggy using the Containerfile : Optional: List all images: Run a new container based on a moon-buggy container: Optional: Tag the moon-buggy image: Optional: Push the moon-buggy image to the registry: Additional resources Technology preview: Running a container inside a container
[ "podman login registry.redhat.io Username: [email protected] Password: <password> Login Succeeded!", "podman pull registry.redhat.io/rhel9/skopeo", "podman run --rm registry.redhat.io/rhel9/skopeo skopeo inspect docker://registry.access.redhat.com/ubi9/ubi { \"Name\": \"registry.access.redhat.com/ubi9/ubi\", \"Labels\": { \"architecture\": \"x86_64\", \"name\": \"ubi9\", \"summary\": \"Provides the latest release of Red Hat Universal Base Image 9.\", \"url\": \"https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/8.2-347\", }, \"Architecture\": \"amd64\", \"Os\": \"linux\", \"Layers\": [ ], \"Env\": [ \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\", \"container=oci\" ] }", "podman run --rm registry.redhat.io/rhel9/skopeo inspect --creds USDUSER:USDPASSWORD docker://USDIMAGE", "podman run --rm -v USDAUTHFILE:/auth.json registry.redhat.io/rhel9/skopeo inspect docker://USDIMAGE", "podman run --privileged --rm -v USDHOME/.local/share/containers/storage:/var/lib/containers/storage registry.redhat.io/rhel9/skopeo skopeo copy docker://registry.access.redhat.com/ubi9/ubi containers-storage:registry.access.redhat.com/ubi9/ubi", "podman images REPOSITORY TAG IMAGE ID CREATED SIZE registry.access.redhat.com/ubi9/ubi latest ecbc6f53bba0 8 weeks ago 211 MB", "podman login registry.redhat.io Username: [email protected] Password: <password> Login Succeeded!", "podman run --rm --device /dev/fuse -it registry.redhat.io/rhel9/buildah /bin/bash", "buildah from registry.access.redhat.com/ubi9 ubi9-working-container", "buildah run --isolation=chroot ubi9-working-container ls / bin boot dev etc home lib lib64 lost+found media mnt opt proc root run sbin srv", "buildah images REPOSITORY TAG IMAGE ID CREATED SIZE registry.access.redhat.com/ubi9 latest ecbc6f53bba0 5 weeks ago 211 MB", "buildah containers CONTAINER ID BUILDER IMAGE ID IMAGE NAME CONTAINER NAME 0aaba7192762 * ecbc6f53bba0 registry.access.redhat.com/ub... ubi9-working-container", "buildah push ecbc6f53bba0 registry.example.com:5000/ubi9/ubi", "podman run --privileged --name=privileged_podman registry.access.redhat.com//podman podman run ubi9 echo hello Resolved \"ubi9\" as an alias (/etc/containers/registries.conf.d/001-rhel-shortnames.conf) Trying to pull registry.access.redhat.com/ubi9:latest Storing signatures hello", "podman ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 52537876caf4 registry.access.redhat.com/ubi9/podman podman run ubi9 e... 30 seconds ago Exited (0) 13 seconds ago privileged_podman", "podman run --name=unprivileged_podman --security-opt label=disable --user podman --device /dev/fuse registry.access.redhat.com/ubi9/podman podman run ubi9 echo hello", "podman ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES a47b26290f43 podman run ubi9 e... 
30 seconds ago Exited (0) 13 seconds ago unprivileged_podman", "podman login registry.redhat.io", "podman run --privileged --name podman_container -it registry.redhat.io/rhel9/podman /bin/bash", "vi Containerfile FROM registry.access.redhat.com/ubi9/ubi RUN dnf install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm RUN dnf -y install moon-buggy && dnf clean all CMD [\"/usr/bin/moon-buggy\"]", "podman build -t moon-buggy .", "podman images REPOSITORY TAG IMAGE ID CREATED SIZE localhost/moon-buggy latest c97c58abb564 13 seconds ago 1.67 GB registry.access.redhat.com/ubi9/ubi latest 4199acc83c6a 132seconds ago 213 MB", "podman run -it --name moon moon-buggy", "podman tag moon-buggy registry.example.com/moon-buggy", "podman push registry.example.com/moon-buggy" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/building_running_and_managing_containers/assembly_running-skopeo-buildah-and-podman-in-a-container
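Putting the authfile and copy sections together, a typical invocation copies an image between registries from inside the Skopeo container while reusing the host's credentials. This is a sketch; the destination registry is a placeholder and $AUTHFILE is assumed to point at the authfile created by skopeo login or podman login:

podman run --rm -v $AUTHFILE:/auth.json registry.redhat.io/rhel9/skopeo \
    skopeo copy --authfile /auth.json \
    docker://registry.access.redhat.com/ubi9/ubi \
    docker://registry.example.com/ubi9/ubi

Because the authfile is volume-mounted rather than baked into the image, no credentials are stored in the container itself.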
Chapter 36. Manually Upgrading the Kernel
Chapter 36. Manually Upgrading the Kernel The Red Hat Enterprise Linux kernel is custom built by the Red Hat kernel team to ensure its integrity and compatibility with supported hardware. Before Red Hat releases a kernel, it must first pass a rigorous set of quality assurance tests. Red Hat Enterprise Linux kernels are packaged in RPM format so that they are easy to upgrade and verify using the Red Hat User Agent , or the up2date command. The Red Hat User Agent automatically queries the Red Hat Network servers and determines which packages need to be updated on your machine, including the kernel. This chapter is only useful for those individuals that require manual updating of kernel packages, without using the up2date command. Warning Please note, that building a custom kernel is not supported by the Red Hat Global Services Support team, and therefore is not explored in this manual. Note Use of up2date is highly recommended by Red Hat for installing upgraded kernels. For more information on Red Hat Network, the Red Hat User Agent , and up2date , refer to Chapter 16, Red Hat Network . 36.1. Overview of Kernel Packages Red Hat Enterprise Linux contains the following kernel packages (some may not apply to your architecture): kernel - Contains the kernel and the following key features: Uniprocessor support for x86 and Athlon systems (can be run on a multi-processor system, but only one processor is utilized) Multi-processor support for all other architectures For x86 systems, only the first 4 GB of RAM is used; use the kernel-hugemem package for x86 systems with over 4 GB of RAM kernel-devel - Contains the kernel headers and makefiles sufficient to build modules against the kernel package. kernel-hugemem - (only for i686 systems) In addition to the options enabled for the kernel package, the key configuration options are as follows: Support for more than 4 GB of RAM (up to 64 GB for x86) Note kernel-hugemem is required for memory configurations higher than 16 GB. PAE (Physical Address Extension) or 3 level paging on x86 processors that support PAE Support for multiple processors 4GB/4GB split - 4GB of virtual address space for the kernel and almost 4GB for each user process on x86 systems kernel-hugemem-devel - Contains the kernel headers and makefiles sufficient to build modules against the kernel-hugemem package. kernel-smp - Contains the kernel for multi-processor systems. The following are the key features: Multi-processor support Support for more than 4 GB of RAM (up to 16 GB for x86) PAE (Physical Address Extension) or 3 level paging on x86 processors that support PAE kernel-smp-devel - Contains the kernel headers and makefiles sufficient to build modules against the kernel-smp package. kernel-utils - Contains utilities that can be used to control the kernel or system hardware. kernel-doc - Contains documentation files from the kernel source. Various portions of the Linux kernel and the device drivers shipped with it are documented in these files. Installation of this package provides a reference to the options that can be passed to Linux kernel modules at load time. By default, these files are placed in the /usr/share/doc/kernel-doc- <version> / directory. Note The kernel-source package has been removed and replaced with an RPM that can only be retrieved from Red Hat Network. This *.src.rpm must then be rebuilt locally using the rpmbuild command. 
Refer to the latest distribution Release Notes, including all updates, at https://www.redhat.com/docs/manuals/enterprise/ for more information on obtaining and installing the kernel source package.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Manually_Upgrading_the_Kernel
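For administrators who do perform the upgrade by hand, the essential step is to install the new kernel package alongside the old one rather than upgrading over it, so the previous kernel remains available at boot if the new one fails. A minimal sketch, with the package version left as a placeholder:

# install (-i) rather than upgrade (-U) so the running kernel is kept
rpm -ivh kernel-<version>.<arch>.rpm

# verify which kernel packages are now installed
rpm -qa | grep kernel

After installation, confirm that the boot loader configuration lists the new kernel before rebooting.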
2.3. Multipath Device Attributes
2.3. Multipath Device Attributes In addition to the user_friendly_names and alias options, a multipath device has numerous attributes. You can modify these attributes for a specific multipath device by creating an entry for that device in the multipaths section of the multipath configuration file. For information on the multipaths section of the multipath configuration file, see Section 4.4, "Multipaths Device Configuration Attributes" .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/dm_multipath/multipath_device_attributes
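As an illustration of the kind of entry described above, a multipaths section in /etc/multipath.conf can override attributes for one device identified by its WWID. The WWID and alias below are placeholders; see Section 4.4 for the full list of supported attributes:

multipaths {
    multipath {
        wwid                  3600508b4000156d70001200000b0000
        alias                 yellow
        path_grouping_policy  multibus
        failback              manual
        no_path_retry         5
    }
}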
9.5. Network Bonding
9.5. Network Bonding Network bonding combines multiple NICs into a bond device, with the following advantages: The transmission speed of bonded NICs is greater than that of a single NIC. Network bonding provides fault tolerance, because the bond device will not fail unless all its NICs fail. Using NICs of the same make and model ensures that they support the same bonding options and modes. Important Red Hat Virtualization's default bonding mode, (Mode 4) Dynamic Link Aggregation , requires a switch that supports 802.3ad. The logical networks of a bond must be compatible. A bond can support only 1 non-VLAN logical network. The rest of the logical networks must have unique VLAN IDs. Bonding must be enabled for the switch ports. Consult the manual provided by your switch vendor for specific instructions. You can create a network bond device using one of the following methods: Manually, in the Administration Portal , for a specific host Automatically, using LLDP Labeler , for unbonded NICs of all hosts in a cluster or data center If your environment uses iSCSI storage and you want to implement redundancy, follow the instructions for configuring iSCSI multipathing . 9.5.1. Creating a Bond Device in the Administration Portal You can create a bond device on a specific host in the Administration Portal. The bond device can carry both VLAN-tagged and untagged traffic. Procedure Click Compute Hosts . Click the host's name to open the details view. Click the Network Interfaces tab to list the physical network interfaces attached to the host. Click Setup Host Networks . Check the switch configuration. If the switch has been configured to provide Link Layer Discovery Protocol (LLDP) information, hover your cursor over a physical NIC to view the switch port's aggregation configuration. Drag and drop a NIC onto another NIC or onto a bond. Note Two NICs form a new bond. A NIC and a bond adds the NIC to the existing bond. If the logical networks are incompatible , the bonding operation is blocked. Select the Bond Name and Bonding Mode from the drop-down menus. See Section 9.5.3, "Bonding Modes" for details. If you select the Custom bonding mode, you can enter bonding options in the text field, as in the following examples: If your environment does not report link states with ethtool , you can set ARP monitoring by entering mode= 1 arp_interval= 1 arp_ip_target= 192.168.0.2 . You can designate a NIC with higher throughput as the primary interface by entering mode= 1 primary= eth0 . For a comprehensive list of bonding options and their descriptions, see the Linux Ethernet Bonding Driver HOWTO on Kernel.org. Click OK . Attach a logical network to the new bond and configure it. See Section 9.4.2, "Editing Host Network Interfaces and Assigning Logical Networks to Hosts" for instructions. Note You cannot attach a logical network directly to an individual NIC in the bond. Optionally, you can select Verify connectivity between Host and Engine if the host is in maintenance mode. Click OK . 9.5.2. Creating a Bond Device with the LLDP Labeler Service The LLDP Labeler service enables you to create a bond device automatically with all unbonded NICs, for all the hosts in one or more clusters or in the entire data center. The bonding mode is (Mode 4) Dynamic Link Aggregation(802.3ad) . NICs with incompatible logical networks cannot be bonded. By default, LLDP Labeler runs as an hourly service. This option is useful if you make hardware changes (for example, NICs, switches, or cables) or change switch configurations. 
Prerequisites The interfaces must be connected to a Juniper switch. The Juniper switch must be configured for Link Aggregation Control Protocol (LACP) using LLDP. Procedure Configure the username and password in /etc/ovirt-lldp-labeler/conf.d/ovirt-lldp-credentials.conf : username - the username of the Manager administrator. The default is admin@internal . password - the password of the Manager administrator. The default is 123456 . Configure the LLDP Labeler service by updating the following values in etc/ovirt-lldp-labeler/conf.d/ovirt-lldp-credentials.conf : clusters - a comma-separated list of clusters on which the service should run. Wildcards are supported. For example, Cluster* defines LLDP Labeler to run on all clusters starting with word Cluster . To run the service on all clusters in the data center, type * . The default is Def* . api_url - the full URL of the Manager's API. The default is https:// Manager_FQDN /ovirt-engine/api ca_file - the path to the custom CA certificate file. Leave this value empty if you do not use custom certificates. The default is empty. auto_bonding - enables LLDP Labeler's bonding capabilities. The default is true . auto_labeling - enables LLDP Labeler's labeling capabilities. The default is true . Optionally, you can configure the service to run at a different time interval by changing the value of OnUnitActiveSec in etc/ovirt-lldp-labeler/conf.d/ovirt-lldp-labeler.timer . The default is 1h . Configure the service to start now and at boot by entering the following command: To invoke the service manually, enter the following command: Attach a logical network to the new bond and configure it. See Section 9.4.2, "Editing Host Network Interfaces and Assigning Logical Networks to Hosts" for instructions. Note You cannot attach a logical network directly to an individual NIC in the bond. 9.5.3. Bonding Modes The packet dispersal algorithm is determined by the bonding mode. (See the Linux Ethernet Bonding Driver HOWTO for details). Red Hat Virtualization's default bonding mode is (Mode 4) Dynamic Link Aggregation(802.3ad) . Red Hat Virtualization supports the following bonding modes, because they can be used in virtual machine (bridged) networks: (Mode 1) Active-Backup One NIC is active. If the active NIC fails, one of the backup NICs replaces it as the only active NIC in the bond. The MAC address of this bond is visible only on the network adapter port. This prevents MAC address confusion that might occur if the MAC address of the bond were to change, reflecting the MAC address of the new active NIC. (Mode 2) Load Balance (balance-xor) The NIC that transmits packets is selected by performing an XOR operation on the source MAC address and the destination MAC address, multiplied by the modulo of the total number of NICs. This algorithm ensures that the same NIC is selected for each destination MAC address. (Mode 3) Broadcast Packets are transmitted to all NICs. (Mode 4) Dynamic Link Aggregation(802.3ad) (Default) The NICs are aggregated into groups that share the same speed and duplex settings . All the NICs in the active aggregation group are used. Note (Mode 4) Dynamic Link Aggregation(802.3ad) requires a switch that supports 802.3ad. The bonded NICs must have the same aggregator IDs. Otherwise, the Manager displays a warning exclamation mark icon on the bond in the Network Interfaces tab and the ad_partner_mac value of the bond is reported as 00:00:00:00:00:00 . 
You can check the aggregator IDs by entering the following command: See https://access.redhat.com/solutions/67546 . Red Hat Virtualization does not support the following bonding modes, because they cannot be used in bridged networks and are, therefore, incompatible with virtual machine logical networks: (Mode 0) Round-Robin The NICs transmit packets in sequential order. Packets are transmitted in a loop that begins with the first available NIC in the bond and ends with the last available NIC in the bond. Subsequent loops start with the first available NIC. (Mode 5) Balance-TLB , also called Transmit Load-Balance Outgoing traffic is distributed, based on the load, over all the NICs in the bond. Incoming traffic is received by the active NIC. If the NIC receiving incoming traffic fails, another NIC is assigned. (Mode 6) Balance-ALB , also called Adaptive Load-Balance (Mode 5) Balance-TLB is combined with receive load-balancing for IPv4 traffic. ARP negotiation is used for balancing the receive load.
[ "systemctl enable --now ovirt-lldp-labeler", "/usr/bin/python /usr/share/ovirt-lldp-labeler/ovirt_lldp_labeler_cli.py", "cat /proc/net/bonding/ bond0" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/sect-network_bonding
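To make the custom-mode example above concrete for the default (Mode 4) configuration, the options string entered in the Custom bonding mode field could tune the LACP rate and the transmit hash policy. Both are standard Linux bonding driver options; the values shown are only one reasonable choice:

mode=4 lacp_rate=fast xmit_hash_policy=layer2+3

After the bond is up, the aggregator IDs mentioned above can be checked quickly with:

grep -i aggregator /proc/net/bonding/bond0

All member NICs of a healthy 802.3ad bond report the same Aggregator ID.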
Chapter 1. barbican
Chapter 1. barbican The following chapter contains information about the configuration options in the barbican service. 1.1. barbican.conf This section contains options for the /etc/barbican/barbican.conf file. 1.1.1. DEFAULT The following table outlines the options available under the [DEFAULT] group in the /etc/barbican/barbican.conf file. . Configuration option = Default value Type Description admin_role = admin string value Role used to identify an authenticated user as administrator. allow_anonymous_access = False boolean value Allow unauthenticated users to access the API with read-only privileges. This only applies when using ContextMiddleware. api_paste_config = api-paste.ini string value File name for the paste.deploy config for api service backdoor_port = None string value Enable eventlet backdoor. Acceptable values are 0, <port>, and <start>:<end>, where 0 results in listening on a random tcp port number; <port> results in listening on the specified port number (and not enabling backdoor if that port is in use); and <start>:<end> results in listening on the smallest unused port number within the specified range of port numbers. The chosen port is displayed in the service's log file. backdoor_socket = None string value Enable eventlet backdoor, using the provided path as a unix socket that can receive connections. This option is mutually exclusive with backdoor_port in that only one should be provided. If both are provided then the existence of this option overrides the usage of that option. Inside the path {pid} will be replaced with the PID of the current process. client_socket_timeout = 900 integer value Timeout for client connections' socket operations. If an incoming connection is idle for this number of seconds it will be closed. A value of 0 means wait forever. conn_pool_min_size = 2 integer value The pool size limit for connections expiration policy conn_pool_ttl = 1200 integer value The time-to-live in sec of idle connections in the pool control_exchange = openstack string value The default exchange under which topics are scoped. May be overridden by an exchange name specified in the transport_url option. db_auto_create = False boolean value Create the Barbican database on service startup. debug = False boolean value If set to true, the logging level will be set to DEBUG instead of the default INFO level. default_limit_paging = 10 integer value Default page size for the limit paging URL parameter. default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO'] list value List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. executor_thread_pool_size = 64 integer value Size of executor thread pool when executor is threading or eventlet. fatal_deprecations = False boolean value Enables or disables fatal status of deprecations. host_href = http://localhost:9311 string value Host name, for use in HATEOAS-style references Note: Typically this would be the load balanced endpoint that clients would use to communicate back with this service. 
If a deployment wants to derive host from wsgi request instead then make this blank. Blank is needed to override default config value which is http://localhost:9311 `instance_format = [instance: %(uuid)s] ` string value The format for an instance that is passed with the log message. `instance_uuid_format = [instance: %(uuid)s] ` string value The format for an instance UUID that is passed with the log message. log-config-append = None string value The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format). log-date-format = %Y-%m-%d %H:%M:%S string value Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set. log-dir = None string value (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. log-file = None string value (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set. log_rotate_interval = 1 integer value The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is setto "interval". log_rotate_interval_type = days string value Rotation interval type. The time of the last file change (or the time when the service was started) is used when scheduling the rotation. log_rotation_type = none string value Log rotation type. logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s string value Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d string value Additional data to append to log message when logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s string value Format string to use for log messages when context is undefined. Used by oslo_log.formatters.ContextFormatter logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s string value Prefix each line of exception output with this format. Used by oslo_log.formatters.ContextFormatter logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s string value Defines the format string for %(user_identity)s that is used in logging_context_format_string. Used by oslo_log.formatters.ContextFormatter max_allowed_request_size_in_bytes = 25000 integer value Maximum allowed http request size against the barbican-api. max_allowed_secret_in_bytes = 20000 integer value Maximum allowed secret size in bytes. max_header_line = 16384 integer value Maximum line size of message headers to be accepted. max_header_line may need to be increased when using large tokens (typically those generated when keystone is configured to use PKI tokens with big service catalogs). max_limit_paging = 100 integer value Maximum page size for the limit paging URL parameter. 
max_logfile_count = 30 integer value Maximum number of rotated log files. max_logfile_size_mb = 200 integer value Log file maximum size in MB. This option is ignored if "log_rotation_type" is not set to "size". publish_errors = False boolean value Enables or disables publication of error events. rate_limit_burst = 0 integer value Maximum number of logged messages per rate_limit_interval. rate_limit_except_level = CRITICAL string value Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered. rate_limit_interval = 0 integer value Interval, number of seconds, of log rate limiting. rpc_conn_pool_size = 30 integer value Size of RPC connection pool. rpc_ping_enabled = False boolean value Add an endpoint to answer to ping calls. Endpoint is named oslo_rpc_server_ping rpc_response_timeout = 60 integer value Seconds to wait for a response from a call. run_external_periodic_tasks = True boolean value Some periodic tasks can be run in a separate process. Should we run them here? sql_connection = sqlite:///barbican.sqlite string value SQLAlchemy connection string for the reference implementation registry server. Any valid SQLAlchemy connection string is fine. See: http://www.sqlalchemy.org/docs/05/reference/sqlalchemy/connections.html#sqlalchemy.create_engine . Note: For absolute addresses, use //// slashes after sqlite: . sql_idle_timeout = 3600 integer value Period in seconds after which SQLAlchemy should reestablish its connection to the database. MySQL uses a default wait_timeout of 8 hours, after which it will drop idle connections. This can result in MySQL Gone Away exceptions. If you notice this, you can lower this value to ensure that SQLAlchemy reconnects before MySQL can drop the connection. sql_max_retries = 60 integer value Maximum number of database connection retries during startup. Set to -1 to specify an infinite retry count. sql_pool_class = QueuePool string value Accepts a class imported from the sqlalchemy.pool module, and handles the details of building the pool for you. If commented out, SQLAlchemy will select based on the database dialect. Other options are QueuePool (for SQLAlchemy-managed connections) and NullPool (to disabled SQLAlchemy management of connections). See http://docs.sqlalchemy.org/en/latest/core/pooling.html for more details sql_pool_logging = False boolean value Show SQLAlchemy pool-related debugging output in logs (sets DEBUG log level output) if specified. sql_pool_max_overflow = 10 integer value The maximum overflow size of the pool used by SQLAlchemy. When the number of checked-out connections reaches the size set in sql_pool_size, additional connections will be returned up to this limit. It follows then that the total number of simultaneous connections the pool will allow is sql_pool_size + sql_pool_max_overflow. Can be set to -1 to indicate no overflow limit, so no limit will be placed on the total number of concurrent connections. Comment out to allow SQLAlchemy to select the default. sql_pool_size = 5 integer value Size of pool used by SQLAlchemy. This is the largest number of connections that will be kept persistently in the pool. Can be set to 0 to indicate no size limit. To disable pooling, use a NullPool with sql_pool_class instead. Comment out to allow SQLAlchemy to select the default. sql_retry_interval = 1 integer value Interval between retries of opening a SQL connection. 
syslog-log-facility = LOG_USER string value Syslog facility to receive log lines. This option is ignored if log_config_append is set. tcp_keepidle = 600 integer value Sets the value of TCP_KEEPIDLE in seconds for each server socket. Not supported on OS X. transport_url = rabbit:// string value The network address and optional user credentials for connecting to the messaging backend, in URL format. The expected format is: driver://[user:pass@]host:port[,[userN:passN@]hostN:portN]/virtual_host?query Example: rabbit://rabbitmq:[email protected]:5672// For full details on the fields in the URL see the documentation of oslo_messaging.TransportURL at https://docs.openstack.org/oslo.messaging/latest/reference/transport.html use-journal = False boolean value Enable journald for logging. If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol which includes structured metadata in addition to log messages.This option is ignored if log_config_append is set. use-json = False boolean value Use JSON formatting for logging. This option is ignored if log_config_append is set. use-syslog = False boolean value Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set. use_eventlog = False boolean value Log output to Windows Event Log. use_stderr = False boolean value Log output to standard error. This option is ignored if log_config_append is set. watch-log-file = False boolean value Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set. wsgi_default_pool_size = 100 integer value Size of the pool of greenthreads used by wsgi wsgi_keep_alive = True boolean value If False, closes the client socket connection explicitly. wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f string value A python format string that is used as the template to generate log lines. The following values can beformatted into it: client_ip, date_time, request_line, status_code, body_length, wall_seconds. wsgi_server_debug = False boolean value True if the server should send exception tracebacks to the clients on 500 errors. If False, the server will respond with empty bodies. 1.1.2. certificate The following table outlines the options available under the [certificate] group in the /etc/barbican/barbican.conf file. Table 1.1. certificate Configuration option = Default value Type Description enabled_certificate_plugins = ['simple_certificate'] multi valued List of certificate plugins to load. namespace = barbican.certificate.plugin string value Extension namespace to search for plugins. 1.1.3. certificate_event The following table outlines the options available under the [certificate_event] group in the /etc/barbican/barbican.conf file. Table 1.2. certificate_event Configuration option = Default value Type Description enabled_certificate_event_plugins = ['simple_certificate_event'] multi valued List of certificate plugins to load. namespace = barbican.certificate.event.plugin string value Extension namespace to search for eventing plugins. 1.1.4. cors The following table outlines the options available under the [cors] group in the /etc/barbican/barbican.conf file. Table 1.3. 
cors Configuration option = Default value Type Description allow_credentials = True boolean value Indicate that the actual request can include user credentials allow_headers = ['X-Auth-Token', 'X-Openstack-Request-Id', 'X-Project-Id', 'X-Identity-Status', 'X-User-Id', 'X-Storage-Token', 'X-Domain-Id', 'X-User-Domain-Id', 'X-Project-Domain-Id', 'X-Roles'] list value Indicate which header field names may be used during the actual request. allow_methods = ['GET', 'PUT', 'POST', 'DELETE', 'PATCH'] list value Indicate which methods can be used during the actual request. allowed_origin = None list value Indicate whether this resource may be shared with the domain received in the requests "origin" header. Format: "<protocol>://<host>[:<port>]", no trailing slash. Example: https://horizon.example.com expose_headers = ['X-Auth-Token', 'X-Openstack-Request-Id', 'X-Project-Id', 'X-Identity-Status', 'X-User-Id', 'X-Storage-Token', 'X-Domain-Id', 'X-User-Domain-Id', 'X-Project-Domain-Id', 'X-Roles'] list value Indicate which headers are safe to expose to the API. Defaults to HTTP Simple Headers. max_age = 3600 integer value Maximum cache age of CORS preflight requests. 1.1.5. crypto The following table outlines the options available under the [crypto] group in the /etc/barbican/barbican.conf file. Table 1.4. crypto Configuration option = Default value Type Description enabled_crypto_plugins = ['simple_crypto'] multi valued List of crypto plugins to load. namespace = barbican.crypto.plugin string value Extension namespace to search for plugins. 1.1.6. dogtag_plugin The following table outlines the options available under the [dogtag_plugin] group in the /etc/barbican/barbican.conf file. Table 1.5. dogtag_plugin Configuration option = Default value Type Description auto_approved_profiles = caServerCert string value List of automatically approved enrollment profiles ca_expiration_time = 1 integer value Time in days for CA entries to expire dogtag_host = localhost string value Hostname for the Dogtag instance dogtag_port = 8443 port value Port for the Dogtag instance nss_db_path = /etc/barbican/alias string value Path to the NSS certificate database nss_password = None string value Password for the NSS certificate databases pem_path = /etc/barbican/kra_admin_cert.pem string value Path to PEM file for authentication plugin_name = Dogtag KRA string value User friendly plugin name plugin_working_dir = /etc/barbican/dogtag string value Working directory for Dogtag plugin retries = 3 integer value Retries when storing or generating secrets simple_cmc_profile = caOtherCert string value Profile for simple CMC requests 1.1.7. keystone_authtoken The following table outlines the options available under the [keystone_authtoken] group in the /etc/barbican/barbican.conf file. Table 1.6. keystone_authtoken Configuration option = Default value Type Description auth_section = None string value Config Section from which to load plugin specific options auth_type = None string value Authentication type to load auth_uri = None string value Complete "public" Identity API endpoint. This endpoint should not be an "admin" endpoint, as it should be accessible by all end users. Unauthenticated clients are redirected to this endpoint to authenticate. Although this endpoint should ideally be unversioned, client support in the wild varies. 
If you're using a versioned v2 endpoint here, then this should not be the same endpoint the service user utilizes for validating tokens, because normal end users may not be able to reach that endpoint. This option is deprecated in favor of www_authenticate_uri and will be removed in the S release. Deprecated since: Queens *Reason:*The auth_uri option is deprecated in favor of www_authenticate_uri and will be removed in the S release. auth_version = None string value API version of the Identity API endpoint. cache = None string value Request environment key where the Swift cache object is stored. When auth_token middleware is deployed with a Swift cache, use this option to have the middleware share a caching backend with swift. Otherwise, use the memcached_servers option instead. cafile = None string value A PEM encoded Certificate Authority to use when verifying HTTPs connections. Defaults to system CAs. certfile = None string value Required if identity server requires client certificate delay_auth_decision = False boolean value Do not handle authorization requests within the middleware, but delegate the authorization decision to downstream WSGI components. enforce_token_bind = permissive string value Used to control the use and type of token binding. Can be set to: "disabled" to not check token binding. "permissive" (default) to validate binding information if the bind type is of a form known to the server and ignore it if not. "strict" like "permissive" but if the bind type is unknown the token will be rejected. "required" any form of token binding is needed to be allowed. Finally the name of a binding method that must be present in tokens. http_connect_timeout = None integer value Request timeout value for communicating with Identity API server. http_request_max_retries = 3 integer value How many times are we trying to reconnect when communicating with Identity API Server. include_service_catalog = True boolean value (Optional) Indicate whether to set the X-Service-Catalog header. If False, middleware will not ask for service catalog on token validation and will not set the X-Service-Catalog header. insecure = False boolean value Verify HTTPS connections. interface = internal string value Interface to use for the Identity API endpoint. Valid values are "public", "internal" (default) or "admin". keyfile = None string value Required if identity server requires client certificate memcache_pool_conn_get_timeout = 10 integer value (Optional) Number of seconds that an operation will wait to get a memcached client connection from the pool. memcache_pool_dead_retry = 300 integer value (Optional) Number of seconds memcached server is considered dead before it is tried again. memcache_pool_maxsize = 10 integer value (Optional) Maximum total number of open connections to every memcached server. memcache_pool_socket_timeout = 3 integer value (Optional) Socket timeout in seconds for communicating with a memcached server. memcache_pool_unused_timeout = 60 integer value (Optional) Number of seconds a connection to memcached is held unused in the pool before it is closed. memcache_secret_key = None string value (Optional, mandatory if memcache_security_strategy is defined) This string is used for key derivation. memcache_security_strategy = None string value (Optional) If defined, indicate whether token data should be authenticated or authenticated and encrypted. If MAC, token data is authenticated (with HMAC) in the cache. If ENCRYPT, token data is encrypted and authenticated in the cache. 
If the value is not one of these options or empty, auth_token will raise an exception on initialization. memcache_use_advanced_pool = False boolean value (Optional) Use the advanced (eventlet safe) memcached client pool. The advanced pool will only work under python 2.x. memcached_servers = None list value Optionally specify a list of memcached server(s) to use for caching. If left undefined, tokens will instead be cached in-process. region_name = None string value The region in which the identity server can be found. service_token_roles = ['service'] list value A choice of roles that must be present in a service token. Service tokens are allowed to request that an expired token can be used and so this check should tightly control that only actual services should be sending this token. Roles here are applied as an ANY check so any role in this list must be present. For backwards compatibility reasons this currently only affects the allow_expired check. service_token_roles_required = False boolean value For backwards compatibility reasons we must let valid service tokens pass that don't pass the service_token_roles check as valid. Setting this true will become the default in a future release and should be enabled if possible. service_type = None string value The name or type of the service as it appears in the service catalog. This is used to validate tokens that have restricted access rules. token_cache_time = 300 integer value In order to prevent excessive effort spent validating tokens, the middleware caches previously-seen tokens for a configurable duration (in seconds). Set to -1 to disable caching completely. www_authenticate_uri = None string value Complete "public" Identity API endpoint. This endpoint should not be an "admin" endpoint, as it should be accessible by all end users. Unauthenticated clients are redirected to this endpoint to authenticate. Although this endpoint should ideally be unversioned, client support in the wild varies. If you're using a versioned v2 endpoint here, then this should not be the same endpoint the service user utilizes for validating tokens, because normal end users may not be able to reach that endpoint. 1.1.8. keystone_notifications The following table outlines the options available under the [keystone_notifications] group in the /etc/barbican/barbican.conf file. Table 1.7. keystone_notifications Configuration option = Default value Type Description allow_requeue = False boolean value True enables requeue feature in case of notification processing error. Enable this only when underlying transport supports this feature. control_exchange = keystone string value The default exchange under which topics are scoped. May be overridden by an exchange name specified in the transport_url option. enable = False boolean value True enables keystone notification listener functionality. pool_name = None string value Pool name for notifications listener. Setting this to a distinctive value will allow barbican notifications listener to receive its own copy of all messages from the topic without interfering with other services listening on the same topic. This feature is supported only by some oslo.messaging backends (in particular by rabbitmq) and for those it is preferable to use it instead of a separate notification topic for barbican. thread_pool_size = 10 integer value Define the number of max threads to be used for notification server processing functionality. topic = notifications string value Keystone notification queue topic name. 
This name needs to match one of the values mentioned in Keystone deployment's notification_topics configuration, e.g. notification_topics=notifications, barbican_notifications. Multiple servers may listen on a topic and messages will be dispatched to one of the servers in a round-robin fashion. That's why the Barbican service should have its own dedicated notification queue so that it receives all Keystone notifications. Alternatively, if the chosen oslo.messaging backend supports listener pooling (for example rabbitmq), setting a non-default pool_name option should be preferred. version = 1.0 string value Version of tasks invoked via notifications 1.1.9. kmip_plugin The following table outlines the options available under the [kmip_plugin] group in the /etc/barbican/barbican.conf file. Table 1.8. kmip_plugin Configuration option = Default value Type Description ca_certs = None string value File path to concatenated "certification authority" certificates certfile = None string value File path to local client certificate host = localhost string value Address of the KMIP server keyfile = None string value File path to local client certificate keyfile password = None string value Password for authenticating with KMIP server pkcs1_only = False boolean value Only support PKCS#1 encoding of asymmetric keys plugin_name = KMIP HSM string value User friendly plugin name port = 5696 port value Port for the KMIP server ssl_version = PROTOCOL_TLSv1_2 string value SSL version, maps to the module ssl's constants username = None string value Username for authenticating with KMIP server 1.1.10. oslo_messaging_amqp The following table outlines the options available under the [oslo_messaging_amqp] group in the /etc/barbican/barbican.conf file. Table 1.9. oslo_messaging_amqp Configuration option = Default value Type Description addressing_mode = dynamic string value Indicates the addressing mode used by the driver. Permitted values: legacy - use legacy non-routable addressing routable - use routable addresses dynamic - use legacy addresses if the message bus does not support routing otherwise use routable addressing anycast_address = anycast string value Appended to the address prefix when sending to a group of consumers. Used by the message bus to identify messages that should be delivered in a round-robin fashion across consumers. broadcast_prefix = broadcast string value address prefix used when broadcasting to all servers connection_retry_backoff = 2 integer value Increase the connection_retry_interval by this many seconds after each unsuccessful failover attempt. connection_retry_interval = 1 integer value Seconds to pause before attempting to re-connect. connection_retry_interval_max = 30 integer value Maximum limit for connection_retry_interval + connection_retry_backoff container_name = None string value Name for the AMQP container. must be globally unique. Defaults to a generated UUID default_notification_exchange = None string value Exchange name used in notification addresses. Exchange name resolution precedence: Target.exchange if set else default_notification_exchange if set else control_exchange if set else notify default_notify_timeout = 30 integer value The deadline for a sent notification message delivery. Only used when caller does not provide a timeout expiry. default_reply_retry = 0 integer value The maximum number of attempts to re-send a reply message which failed due to a recoverable error. default_reply_timeout = 30 integer value The deadline for an rpc reply message delivery. 
default_rpc_exchange = None string value Exchange name used in RPC addresses. Exchange name resolution precedence: Target.exchange if set else default_rpc_exchange if set else control_exchange if set else rpc default_send_timeout = 30 integer value The deadline for an rpc cast or call message delivery. Only used when caller does not provide a timeout expiry. default_sender_link_timeout = 600 integer value The duration to schedule a purge of idle sender links. Detach link after expiry. group_request_prefix = unicast string value address prefix when sending to any server in group idle_timeout = 0 integer value Timeout for inactive connections (in seconds) link_retry_delay = 10 integer value Time to pause between re-connecting an AMQP 1.0 link that failed due to a recoverable error. multicast_address = multicast string value Appended to the address prefix when sending a fanout message. Used by the message bus to identify fanout messages. notify_address_prefix = openstack.org/om/notify string value Address prefix for all generated Notification addresses notify_server_credit = 100 integer value Window size for incoming Notification messages pre_settled = ['rpc-cast', 'rpc-reply'] multi valued Send messages of this type pre-settled. Pre-settled messages will not receive acknowledgement from the peer. Note well: pre-settled messages may be silently discarded if the delivery fails. Permitted values: rpc-call - send RPC Calls pre-settled rpc-reply - send RPC Replies pre-settled rpc-cast - Send RPC Casts pre-settled notify - Send Notifications pre-settled pseudo_vhost = True boolean value Enable virtual host support for those message buses that do not natively support virtual hosting (such as qpidd). When set to true the virtual host name will be added to all message bus addresses, effectively creating a private subnet per virtual host. Set to False if the message bus supports virtual hosting using the hostname field in the AMQP 1.0 Open performative as the name of the virtual host. reply_link_credit = 200 integer value Window size for incoming RPC Reply messages. rpc_address_prefix = openstack.org/om/rpc string value Address prefix for all generated RPC addresses rpc_server_credit = 100 integer value Window size for incoming RPC Request messages `sasl_config_dir = ` string value Path to directory that contains the SASL configuration `sasl_config_name = ` string value Name of configuration file (without .conf suffix) `sasl_default_realm = ` string value SASL realm to use if no realm present in username `sasl_mechanisms = ` string value Space separated list of acceptable SASL mechanisms server_request_prefix = exclusive string value address prefix used when sending to a specific server ssl = False boolean value Attempt to connect via SSL. If no other ssl-related parameters are given, it will use the system's CA-bundle to verify the server's certificate. `ssl_ca_file = ` string value CA certificate PEM file used to verify the server's certificate `ssl_cert_file = ` string value Self-identifying certificate PEM file for client authentication `ssl_key_file = ` string value Private key PEM file used to sign ssl_cert_file certificate (optional) ssl_key_password = None string value Password for decrypting ssl_key_file (if encrypted) ssl_verify_vhost = False boolean value By default SSL checks that the name in the server's certificate matches the hostname in the transport_url. 
In some configurations it may be preferable to use the virtual hostname instead, for example if the server uses the Server Name Indication TLS extension (rfc6066) to provide a certificate per virtual host. Set ssl_verify_vhost to True if the server's SSL certificate uses the virtual host name instead of the DNS name. trace = False boolean value Debug: dump AMQP frames to stdout unicast_address = unicast string value Appended to the address prefix when sending to a particular RPC/Notification server. Used by the message bus to identify messages sent to a single destination. 1.1.11. oslo_messaging_kafka The following table outlines the options available under the [oslo_messaging_kafka] group in the /etc/barbican/barbican.conf file. Table 1.10. oslo_messaging_kafka Configuration option = Default value Type Description compression_codec = none string value The compression codec for all data generated by the producer. If not set, compression will not be used. Note that the allowed values of this depend on the kafka version conn_pool_min_size = 2 integer value The pool size limit for connections expiration policy conn_pool_ttl = 1200 integer value The time-to-live in sec of idle connections in the pool consumer_group = oslo_messaging_consumer string value Group id for Kafka consumer. Consumers in one group will coordinate message consumption enable_auto_commit = False boolean value Enable asynchronous consumer commits kafka_consumer_timeout = 1.0 floating point value Default timeout(s) for Kafka consumers kafka_max_fetch_bytes = 1048576 integer value Max fetch bytes of Kafka consumer max_poll_records = 500 integer value The maximum number of records returned in a poll call pool_size = 10 integer value Pool Size for Kafka Consumers producer_batch_size = 16384 integer value Size of batch for the producer async send producer_batch_timeout = 0.0 floating point value Upper bound on the delay for KafkaProducer batching in seconds sasl_mechanism = PLAIN string value Mechanism when security protocol is SASL security_protocol = PLAINTEXT string value Protocol used to communicate with brokers `ssl_cafile = ` string value CA certificate PEM file used to verify the server certificate `ssl_client_cert_file = ` string value Client certificate PEM file used for authentication. `ssl_client_key_file = ` string value Client key PEM file used for authentication. `ssl_client_key_password = ` string value Client key password file used for authentication. 1.1.12. oslo_messaging_notifications The following table outlines the options available under the [oslo_messaging_notifications] group in the /etc/barbican/barbican.conf file. Table 1.11. oslo_messaging_notifications Configuration option = Default value Type Description driver = [] multi valued The Drivers(s) to handle sending notifications. Possible values are messaging, messagingv2, routing, log, test, noop retry = -1 integer value The maximum number of attempts to re-send a notification message which failed to be delivered due to a recoverable error. 0 - No retry, -1 - indefinite topics = ['notifications'] list value AMQP topic used for OpenStack notifications. transport_url = None string value A URL representing the messaging driver to use for notifications. If not set, we fall back to the same configuration used for RPC. 1.1.13. oslo_messaging_rabbit The following table outlines the options available under the [oslo_messaging_rabbit] group in the /etc/barbican/barbican.conf file. Table 1.12. 
oslo_messaging_rabbit Configuration option = Default value Type Description amqp_auto_delete = False boolean value Auto-delete queues in AMQP. amqp_durable_queues = False boolean value Use durable queues in AMQP. direct_mandatory_flag = True boolean value (DEPRECATED) Enable/Disable the RabbitMQ mandatory flag for direct send. The direct send is used as a reply, so the MessageUndeliverable exception is raised in case the client queue does not exist. The MessageUndeliverable exception will be used to loop for a timeout to give the sender a chance to recover. This flag is deprecated and it will not be possible to deactivate this functionality anymore enable_cancel_on_failover = False boolean value Enable x-cancel-on-ha-failover flag so that rabbitmq server will cancel and notify consumers when the queue is down heartbeat_in_pthread = True boolean value Run the health check heartbeat thread through a native python thread by default. If this option is equal to False then the health check heartbeat will inherit the execution model from the parent process. For example if the parent process has monkey patched the stdlib by using eventlet/greenlet then the heartbeat will be run through a green thread. heartbeat_rate = 2 integer value How many times during the heartbeat_timeout_threshold we check the heartbeat. heartbeat_timeout_threshold = 60 integer value Number of seconds after which the Rabbit broker is considered down if heartbeat's keep-alive fails (0 disables heartbeat). kombu_compression = None string value EXPERIMENTAL: Possible values are: gzip, bz2. If not set, compression will not be used. This option may not be available in future versions. kombu_failover_strategy = round-robin string value Determines how the RabbitMQ node is chosen in case the one we are currently connected to becomes unavailable. Takes effect only if more than one RabbitMQ node is provided in config. kombu_missing_consumer_retry_timeout = 60 integer value How long to wait for a missing client before abandoning sending it its replies. This value should not be longer than rpc_response_timeout. kombu_reconnect_delay = 1.0 floating point value How long to wait before reconnecting in response to an AMQP consumer cancel notification. rabbit_ha_queues = False boolean value Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring is no longer controlled by the x-ha-policy argument when declaring a queue. If you just want to make sure that all queues (except those with auto-generated names) are mirrored across all nodes, run: "rabbitmqctl set_policy HA ^(?!amq\.).* {"ha-mode": "all"} " rabbit_interval_max = 30 integer value Maximum interval of RabbitMQ connection retries. Default is 30 seconds. rabbit_login_method = AMQPLAIN string value The RabbitMQ login method. rabbit_qos_prefetch_count = 0 integer value Specifies the number of messages to prefetch. Setting to zero allows unlimited messages. rabbit_retry_backoff = 2 integer value How long to back off between retries when connecting to RabbitMQ. rabbit_retry_interval = 1 integer value How frequently to retry connecting with RabbitMQ. rabbit_transient_queues_ttl = 1800 integer value Positive integer representing duration in seconds for queue TTL (x-expires). Queues which are unused for the duration of the TTL are automatically deleted. The parameter affects only reply and fanout queues. ssl = False boolean value Connect over SSL. 
`ssl_ca_file = ` string value SSL certification authority file (valid only if SSL enabled). `ssl_cert_file = ` string value SSL cert file (valid only if SSL enabled). `ssl_key_file = ` string value SSL key file (valid only if SSL enabled). `ssl_version = ` string value SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some distributions. 1.1.14. oslo_middleware The following table outlines the options available under the [oslo_middleware] group in the /etc/barbican/barbican.conf file. Table 1.13. oslo_middleware Configuration option = Default value Type Description enable_proxy_headers_parsing = False boolean value Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not. 1.1.15. oslo_policy The following table outlines the options available under the [oslo_policy] group in the /etc/barbican/barbican.conf file. Table 1.14. oslo_policy Configuration option = Default value Type Description enforce_new_defaults = False boolean value This option controls whether or not to use old deprecated defaults when evaluating policies. If True , the old deprecated defaults are not going to be evaluated. This means if any existing token is allowed for old defaults but is disallowed for new defaults, it will be disallowed. It is encouraged to enable this flag along with the enforce_scope flag so that you can get the benefits of new defaults and scope_type together enforce_scope = False boolean value This option controls whether or not to enforce scope when evaluating policies. If True , the scope of the token used in the request is compared to the scope_types of the policy being enforced. If the scopes do not match, an InvalidScope exception will be raised. If False , a message will be logged informing operators that policies are being invoked with mismatching scope. policy_default_rule = default string value Default rule. Enforced when a requested rule is not found. policy_dirs = ['policy.d'] multi valued Directories where policy configuration files are stored. They can be relative to any directory in the search path defined by the config_dir option, or absolute paths. The file defined by policy_file must exist for these directories to be searched. Missing or empty directories are ignored. policy_file = policy.yaml string value The relative or absolute path of a file that maps roles to permissions for a given service. Relative paths must be specified in relation to the configuration file setting this option. remote_content_type = application/x-www-form-urlencoded string value Content Type to send and receive data for REST based policy check remote_ssl_ca_crt_file = None string value Absolute path to ca cert file for REST based policy check remote_ssl_client_crt_file = None string value Absolute path to client cert for REST based policy check remote_ssl_client_key_file = None string value Absolute path client key file REST based policy check remote_ssl_verify_server_crt = False boolean value server identity verification for REST based policy check 1.1.16. p11_crypto_plugin The following table outlines the options available under the [p11_crypto_plugin] group in the /etc/barbican/barbican.conf file. Table 1.15. p11_crypto_plugin Configuration option = Default value Type Description aes_gcm_generate_iv = True boolean value Generate IVs for CKM_AES_GCM mechanism. 
always_set_cka_sensitive = True boolean value Always set CKA_SENSITIVE=CK_TRUE including CKA_EXTRACTABLE=CK_TRUE keys. encryption_mechanism = CKM_AES_CBC string value Secret encryption mechanism hmac_key_type = CKK_AES string value HMAC Key Type hmac_keygen_mechanism = CKM_AES_KEY_GEN string value HMAC Key Generation Algorithm hmac_keywrap_mechanism = CKM_SHA256_HMAC string value HMAC key wrap mechanism hmac_label = None string value Master HMAC Key label (as stored in the HSM) library_path = None string value Path to vendor PKCS11 library login = None string value Password to login to PKCS11 session mkek_label = None string value Master KEK label (as stored in the HSM) mkek_length = None integer value Master KEK length in bytes. os_locking_ok = False boolean value Enable CKF_OS_LOCKING_OK flag when initializing the PKCS#11 client library. pkek_cache_limit = 100 integer value Project KEK Cache Item Limit pkek_cache_ttl = 900 integer value Project KEK Cache Time To Live, in seconds pkek_length = 32 integer value Project KEK length in bytes. plugin_name = PKCS11 HSM string value User friendly plugin name rw_session = True boolean value Flag for Read/Write Sessions `seed_file = ` string value File to pull entropy for seeding RNG seed_length = 32 integer value Amount of data to read from file for seed slot_id = 1 integer value (Optional) HSM Slot ID that contains the token device to be used. token_label = None string value DEPRECATED: Use token_labels instead. Token label used to identify the token to be used. token_labels = None list value List of labels for one or more tokens to be used. Typically this is a single label, but some HSM devices may require more than one label for Load Balancing or High Availability configurations. token_serial_number = None string value Token serial number used to identify the token to be used. 1.1.17. queue The following table outlines the options available under the [queue] group in the /etc/barbican/barbican.conf file. Table 1.16. queue Configuration option = Default value Type Description asynchronous_workers = 1 integer value Number of asynchronous worker processes enable = False boolean value True enables queuing, False invokes workers synchronously namespace = barbican string value Queue namespace server_name = barbican.queue string value Server name for RPC task processing server topic = barbican.workers string value Queue topic name version = 1.1 string value Version of tasks invoked via queue 1.1.18. quotas The following table outlines the options available under the [quotas] group in the /etc/barbican/barbican.conf file. Table 1.17. quotas Configuration option = Default value Type Description quota_cas = -1 integer value Number of CAs allowed per project quota_consumers = -1 integer value Number of consumers allowed per project quota_containers = -1 integer value Number of containers allowed per project quota_orders = -1 integer value Number of orders allowed per project quota_secrets = -1 integer value Number of secrets allowed per project 1.1.19. retry_scheduler The following table outlines the options available under the [retry_scheduler] group in the /etc/barbican/barbican.conf file. Table 1.18. retry_scheduler Configuration option = Default value Type Description initial_delay_seconds = 10.0 floating point value Seconds (float) to wait before starting retry scheduler periodic_interval_max_seconds = 10.0 floating point value Seconds (float) to wait between periodic schedule events 1.1.20. 
secretstore The following table outlines the options available under the [secretstore] group in the /etc/barbican/barbican.conf file. Table 1.19. secretstore Configuration option = Default value Type Description enable_multiple_secret_stores = False boolean value Flag to enable multiple secret store plugin backend support. Default is False enabled_secretstore_plugins = ['store_crypto'] multi valued List of secret store plugins to load. namespace = barbican.secretstore.plugin string value Extension namespace to search for plugins. stores_lookup_suffix = None list value List of suffix to use for looking up plugins which are supported with multiple backend support. 1.1.21. simple_crypto_plugin The following table outlines the options available under the [simple_crypto_plugin] group in the /etc/barbican/barbican.conf file. Table 1.20. simple_crypto_plugin Configuration option = Default value Type Description kek = dGhpcnR5X3R3b19ieXRlX2tleWJsYWhibGFoYmxhaGg= string value Key encryption key to be used by Simple Crypto Plugin plugin_name = Software Only Crypto string value User friendly plugin name 1.1.22. snakeoil_ca_plugin The following table outlines the options available under the [snakeoil_ca_plugin] group in the /etc/barbican/barbican.conf file. Table 1.21. snakeoil_ca_plugin Configuration option = Default value Type Description ca_cert_chain_path = None string value Path to CA certificate chain file ca_cert_key_path = None string value Path to CA certificate key file ca_cert_path = None string value Path to CA certificate file ca_cert_pkcs7_path = None string value Path to CA chain pkcs7 file subca_cert_key_directory = /etc/barbican/snakeoil-cas string value Directory in which to store certs/keys for subcas 1.1.23. ssl The following table outlines the options available under the [ssl] group in the /etc/barbican/barbican.conf file. Table 1.22. ssl Configuration option = Default value Type Description ca_file = None string value CA certificate file to use to verify connecting clients. cert_file = None string value Certificate file to use when starting the server securely. ciphers = None string value Sets the list of available ciphers. value should be a string in the OpenSSL cipher list format. key_file = None string value Private key file to use when starting the server securely. version = None string value SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some distributions.
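To make the reference above easier to apply, the following is a minimal, illustrative barbican.conf sketch that combines a handful of the options documented in this section. The host names, credentials, and connection string are placeholder assumptions rather than values taken from this reference, a real deployment typically needs a fuller [keystone_authtoken] section, and the kek value shown is the sample key documented above, which must be replaced in production:
[DEFAULT]
# Public endpoint used in HATEOAS references (documented default is http://localhost:9311)
host_href = https://barbican.example.com:9311
# SQLAlchemy connection string for the Barbican database (placeholder credentials)
sql_connection = postgresql+psycopg2://barbican:PASSWORD@db.example.com/barbican
# Messaging backend used by asynchronous workers (placeholder credentials)
transport_url = rabbit://barbican:PASSWORD@rabbit.example.com:5672/
debug = False
[keystone_authtoken]
www_authenticate_uri = https://keystone.example.com:5000
auth_type = password
[simple_crypto_plugin]
# Sample key from this reference; generate a unique 32-byte, base64-encoded key for production
kek = dGhpcnR5X3R3b19ieXRlX2tleWJsYWhibGFoYmxhaGg=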
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/configuration_reference/barbican
Chapter 1. Preparing to install on Azure Stack Hub
Chapter 1. Preparing to install on Azure Stack Hub 1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users. You have installed Azure Stack Hub version 2008 or later. 1.2. Requirements for installing OpenShift Container Platform on Azure Stack Hub Before installing OpenShift Container Platform on Microsoft Azure Stack Hub, you must configure an Azure account. See Configuring an Azure Stack Hub account for details about account configuration, account limits, DNS zone configuration, required roles, and creating service principals. 1.3. Choosing a method to install OpenShift Container Platform on Azure Stack Hub You can install OpenShift Container Platform on installer-provisioned or user-provisioned infrastructure. The default installation type uses installer-provisioned infrastructure, where the installation program provisions the underlying infrastructure for the cluster. You can also install OpenShift Container Platform on infrastructure that you provision. If you do not use infrastructure that the installation program provisions, you must manage and maintain the cluster resources yourself. See Installation process for more information about installer-provisioned and user-provisioned installation processes. 1.3.1. Installing a cluster on installer-provisioned infrastructure You can install a cluster on Azure Stack Hub infrastructure that is provisioned by the OpenShift Container Platform installation program, by using the following method: Installing a cluster on Azure Stack Hub with an installer-provisioned infrastructure: You can install OpenShift Container Platform on Azure Stack Hub infrastructure that is provisioned by the OpenShift Container Platform installation program. 1.3.2. Installing a cluster on user-provisioned infrastructure You can install a cluster on Azure Stack Hub infrastructure that you provision, by using the following method: Installing a cluster on Azure Stack Hub using ARM templates: You can install OpenShift Container Platform on Azure Stack Hub by using infrastructure that you provide. You can use the provided Azure Resource Manager (ARM) templates to assist with an installation. 1.4. Next steps Configuring an Azure Stack Hub account
null
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_on_azure_stack_hub/preparing-to-install-on-azure-stack-hub
Chapter 3. Setting up the metrics monitoring solution
Chapter 3. Setting up the metrics monitoring solution Install PCP packages and configure PCP data collection. You can use the PCP CLI tools to retrieve metrics on the command line. Optionally, you can install Grafana to enable web UI access to metrics. 3.1. Installing PCP Install the PCP packages on your Satellite Server and enable PCP daemons. Prerequisites Ensure you have a minimum of 20 GB of space available in the /var/log/pcp directory. With the default PCP data retention settings, data storage is estimated to use between 100 MB and 500 MB of disk space per day, but may use up to several gigabytes over time. For more information, see Chapter 4, Metrics data retention. Procedure Install the PCP packages: Enable and start the Performance Metrics Collector daemon and Performance Metrics Logger daemon: 3.2. Configuring PCP data collection You can configure PCP to collect metrics about processes, Satellite, Apache HTTP Server, and PostgreSQL. Procedure Symlink the Satellite-specific configuration to the process monitoring PMDA: By default, PCP only collects basic system metrics. This step enables detailed metrics about the following Satellite processes: Java PostgreSQL Redis Dynflow Puma Pulpcore Install the process monitoring PMDA: Configure PCP to collect metrics from Apache HTTP Server. Enable the Apache HTTP Server extended status module: Enable the Apache HTTP Server PMDA: Configure PCP to collect metrics from PostgreSQL: Enable the telemetry feature in Satellite: Configure PCP to collect data from Satellite: Restart PCP to begin data collection: 3.3. Verifying PCP configuration You can verify that PCP is configured correctly and services are active. Procedure Print a summary of the active PCP configuration: Example output of the pcp command: In this example, both the Performance Metrics Collector Daemon (pmcd) and Performance Metrics Proxy Daemon (pmproxy) services are running. It also confirms the PMDAs that are collecting metrics. Finally, it lists the active log file, in which pmlogger is currently storing metrics. 3.4. Enabling web UI access to metrics You can enable web UI access to metrics collected by PCP by installing Grafana. Procedure Install Grafana and the Grafana PCP plugin on your Satellite Server: Start and enable the Grafana web service and the PCP proxy service: Open the firewall port to allow access to the Grafana web interface: Reload the firewall configuration to apply the changes: Install PCP Redis and configure Grafana to load it. For more information, see Configuring PCP Redis in Red Hat Enterprise Linux 8 Monitoring and managing system status and performance. Access the Grafana web UI, enable the PCP plugin, and add PCP Redis as a data source. For more information, see Accessing the Grafana web UI in Red Hat Enterprise Linux 8 Monitoring and managing system status and performance.
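The chapter above mentions using the PCP CLI tools to retrieve metrics but does not include an example of doing so. The following commands are a hedged illustration of typical usage once pmcd and pmlogger are running; the metric names are standard Linux kernel PMDA metrics and are assumptions, not metrics called out by this guide:
# Fetch and print the current value of a metric
pminfo -f mem.util.used
# Sample the load average metric every 2 seconds, 5 times
pmval -t 2 -s 5 kernel.all.load
# Display a vmstat-like summary of live system metrics every 5 seconds
pmstat -t 5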
[ "satellite-maintain packages install pcp pcp-pmda-apache pcp-pmda-openmetrics pcp-pmda-postgresql pcp-pmda-redis pcp-system-tools foreman-pcp", "systemctl enable --now pmcd pmlogger", "ln -s /etc/pcp/proc/foreman-hotproc.conf /var/lib/pcp/pmdas/proc/hotproc.conf", "cd /var/lib/pcp/pmdas/proc ./Install", "satellite-installer --enable-apache-mod-status", "cd /var/lib/pcp/pmdas/apache ./Install", "cd /var/lib/pcp/pmdas/postgresql ./Install", "satellite-installer --foreman-telemetry-prometheus-enabled true", "cd /var/lib/pcp/pmdas/openmetrics echo \"https:// satellite.example.com /metrics\" > config.d/foreman.url ./Install", "systemctl restart pmcd pmlogger pmproxy", "pcp", "Performance Co-Pilot configuration on satellite.example.com: platform: Linux satellite.example.com 4.18.0-372.32.1.el8_6.x86_64 #1 SMP Fri Oct 7 12:35:10 EDT 2022 x86_64 hardware: 16 cpus, 2 disks, 1 node, 31895MB RAM timezone: UTC services: pmcd pmproxy pmcd: Version 5.3.7-17, 13 agents, 4 clients pmda: root pmcd proc pmproxy xfs redis linux apache mmv kvm postgresql jbd2 openmetrics pmlogger: primary logger: /var/log/pcp/pmlogger/satellite.example.com/20230831.00.10 pmie: primary engine: /var/log/pcp/pmie/satellite.example.com/pmie.log", "satellite-maintain packages install grafana grafana-pcp", "systemctl enable --now pmproxy grafana-server", "firewall-cmd --permanent --add-service=grafana", "firewall-cmd --reload" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/monitoring_satellite_performance/setting-up-the-metrics-monitoring-solution_monitoring
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Make sure you are logged in to the Jira website. Provide feedback by clicking on this link . Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. If you want to be notified about future updates, please make sure you are assigned as Reporter . Click Create at the bottom of the dialogue.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html/using_selinux_for_sap_hana/feedback_using-selinux
5.160. libunistring
5.160. libunistring 5.160.1. RHBA-2012:0887 - libunistring bug fix update An updated libunistring package that fixes one bug is now available for Red Hat Enterprise Linux 6. The libunistring package contains a portable C library that implements the UTF-8, UTF-16 and UTF-32 Unicode string types, together with functions for character processing (names, classifications, and properties) and functions for string processing (iteration, formatted output, width, word breaks, line breaks, normalization, case folding, and regular expressions). Bug Fix BZ# 732017 Previously, when calling the malloc() function, no check for the returned pointer was performed to find out whether memory was successfully allocated. Therefore, if a null pointer was returned, this could cause the libunistring library to misbehave in low-memory situations. This update adds the missing check to properly handle such situations. All users of libunistring are advised to upgrade to this updated package, which fixes this bug.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/libunistring
Chapter 85. Managing IdM service secrets: storing and retrieving secrets
Chapter 85. Managing IdM service secrets: storing and retrieving secrets This section shows how an administrator can use a service vault in Identity Management (IdM) to securely store a service secret in a centralized location. The vault used in the example is asymmetric, which means that to use it, the administrator needs to perform the following steps: Generate a private key using, for example, the openssl utility. Generate a public key based on the private key. The service secret is encrypted with the public key when an administrator archives it into the vault. Afterwards, a service instance hosted on a specific machine in the domain retrieves the secret using the private key. Only the service and the administrator are allowed to access the secret. If the secret is compromised, the administrator can replace it in the service vault and then redistribute it to those individual service instances that have not been compromised. Prerequisites The Key Recovery Authority (KRA) Certificate System component has been installed on one or more of the servers in your IdM domain. For details, see Installing the Key Recovery Authority in IdM. This section includes these procedures: Storing an IdM service secret in an asymmetric vault Retrieving a service secret for an IdM service instance Changing an IdM service vault secret when compromised Terminology used In the procedures: admin is the administrator who manages the service password. private-key-to-an-externally-signed-certificate.pem is the file containing the service secret, in this case a private key to an externally signed certificate. Do not confuse this private key with the private key used to retrieve the secret from the vault. secret_vault is the vault created for the service. HTTP/webserver.idm.example.com is the service whose secret is being archived. service-public.pem is the service public key used to encrypt the password stored in secret_vault. service-private.pem is the service private key used to decrypt the password stored in secret_vault. 85.1. Storing an IdM service secret in an asymmetric vault Follow this procedure to create an asymmetric vault and use it to archive a service secret. Prerequisites You know the IdM administrator password. Procedure Log in as the administrator: Obtain the public key of the service instance. For example, using the openssl utility: Generate the service-private.pem private key. Generate the service-public.pem public key based on the private key. Create an asymmetric vault as the service instance vault, and provide the public key: The password archived into the vault will be protected with the key. Archive the service secret into the service vault: This encrypts the secret with the service instance public key. Repeat these steps for every service instance that requires the secret. Create a new asymmetric vault for each service instance. 85.2. Retrieving a service secret for an IdM service instance Follow this procedure to use a service instance to retrieve the service vault secret using a locally-stored service private key. Prerequisites You have access to the keytab of the service principal owning the service vault, for example HTTP/webserver.idm.example.com. You have created an asymmetric vault and archived a secret in the vault. You have access to the private key used to retrieve the service vault secret. Procedure Log in as the administrator: Obtain a Kerberos ticket for the service: Retrieve the service vault password: 85.3. 
Changing an IdM service vault secret when compromised Follow this procedure to isolate a compromised service instance by changing the service vault secret. Prerequisites You know the IdM administrator password. You have created an asymmetric vault to store the service secret. You have generated the new secret and have access to it, for example in the new-private-key-to-an-externally-signed-certificate.pem file. Procedure Archive the new secret into the service instance vault: This overwrites the current secret stored in the vault. Retrieve the new secret on non-compromised service instances only. For details, see Retrieving a service secret for an IdM service instance . 85.4. Additional resources See Using Ansible to manage IdM service vaults: storing and retrieving secrets .
[ "kinit admin", "openssl genrsa -out service-private.pem 2048 Generating RSA private key, 2048 bit long modulus .+++ ...........................................+++ e is 65537 (0x10001)", "openssl rsa -in service-private.pem -out service-public.pem -pubout writing RSA key", "ipa vault-add secret_vault --service HTTP/webserver.idm.example.com --type asymmetric --public-key-file service-public.pem ---------------------------- Added vault \"secret_vault\" ---------------------------- Vault name: secret_vault Type: asymmetric Public key: LS0tLS1C...S0tLS0tCg== Owner users: admin Vault service: HTTP/[email protected]", "ipa vault-archive secret_vault --service HTTP/webserver.idm.example.com --in private-key-to-an-externally-signed-certificate.pem ----------------------------------- Archived data into vault \"secret_vault\" -----------------------------------", "kinit admin", "kinit HTTP/webserver.idm.example.com -k -t /etc/httpd/conf/ipa.keytab", "ipa vault-retrieve secret_vault --service HTTP/webserver.idm.example.com --private-key-file service-private.pem --out secret.txt ------------------------------------ Retrieved data from vault \"secret_vault\" ------------------------------------", "ipa vault-archive secret_vault --service HTTP/webserver.idm.example.com --in new-private-key-to-an-externally-signed-certificate.pem ----------------------------------- Archived data into vault \"secret_vault\" -----------------------------------" ]
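Taken together, the archive, retrieve, and rotation procedures above can be scripted. The following is a minimal sketch that reuses the vault name, service principal, key files, and keytab path from this chapter; the output path /etc/httpd/new-secret.pem is a hypothetical destination, not part of the documented procedure.

# As the administrator, archive the replacement secret into the existing vault
kinit admin
ipa vault-archive secret_vault --service HTTP/webserver.idm.example.com --in new-private-key-to-an-externally-signed-certificate.pem

# On each non-compromised service host, fetch the new secret with the locally stored service private key
kinit HTTP/webserver.idm.example.com -k -t /etc/httpd/conf/ipa.keytab
ipa vault-retrieve secret_vault --service HTTP/webserver.idm.example.com --private-key-file service-private.pem --out /etc/httpd/new-secret.pem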
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/managing-idm-service-vaults-storing-and-retrieving-secrets_configuring-and-managing-idm
Chapter 18. Storage
Chapter 18. Storage NVMe driver rebased to version 4.17-rc1 The NVMe driver has been rebased to upstream version 4.17-rc1, which provides a number of bug fixes and enhancements over the previous version. Notable changes are as follows: added error handling improvements for Nonvolatile Memory Express (NVMe) over Remote Direct Memory Access (RDMA) added fixes for keeping connections over the RDMA transport alive Note that the driver does not support the Data Integrity Field/Data Integrity Extension (DIF/DIX) Protection Information implementation, and does not support multipathing over NVMe-over-Fabrics transport. (BZ#1515584) NVMe/FC is fully supported on Broadcom Emulex Fibre Channel Adapters The NVMe over Fibre Channel (NVMe/FC) transport type is now fully supported in Initiator mode when used with Broadcom Emulex Fibre Channel 32Gbit adapters. NVMe over Fibre Channel is an additional fabric transport type for the Nonvolatile Memory Express (NVMe) protocol, in addition to the Remote Direct Memory Access (RDMA) protocol that was previously introduced in Red Hat Enterprise Linux. To enable NVMe/FC in the lpfc driver, edit the /etc/modprobe.d/lpfc.conf file and add the following option: This feature was introduced as a Technology Preview in Red Hat Enterprise Linux 7.5. Drivers other than lpfc still remain in Technology Preview. See the Technology Previews part for more information. Additional restrictions: Multipath is not supported with NVMe/FC. See https://bugzilla.redhat.com/show_bug.cgi?id=1524966 . The kernel-alt package does not support NVMe/FC. kdump is not supported with NVMe/FC. See https://bugzilla.redhat.com/show_bug.cgi?id=1654433 . Booting from Storage Area Network (SAN) NVMe/FC is not supported. See https://bugzilla.redhat.com/show_bug.cgi?id=1654435 . Storage device fencing is not available on NVMe. See https://bugzilla.redhat.com/show_bug.cgi?id=1519009 . (BZ#1584753) DM Multipath now enables blacklisting or whitelisting paths by protocol Device Mapper Multipath (DM Multipath) now supports the protocol configuration option in the blacklist and blacklist_exceptions configuration sections. This enables you to blacklist or whitelist paths based on the protocol they use, such as scsi or nvme . For SCSI devices, you can also specify the transport: for example scsi:fcp or scsi:iscsi . (BZ# 1593459 ) New %0 wildcard added for the multipathd show paths format command to show path failures The multipathd show paths format command now supports the %0 wildcard to display path failures. Support for this wildcard makes it easier for users to track which paths have been failing in a multipath device. (BZ# 1554516 ) New all_tg_pt multipath configuration option The defaults and devices sections of the multipath.conf configuration file now support the all_tg_pt parameter, which defaults to no . If this option is set to yes , when mpathpersist registers keys it will treat a key registered from one host to one target port as going from one host to all target ports. Some arrays, notably the EMC VNX, treat reservations as between one host and all target ports. Without mpathpersist working the same way, it would give reservation conflicts. (BZ#1541116) Support for Data Integrity Field/Data Integrity Extension (DIF/DIX) DIF/DIX is fully supported, provided that the hardware vendor has qualified it and provides full support for the particular HBA and storage array configuration on RHEL. DIF/DIX is not supported on other configurations.
It is not supported for use on the boot device, and it is not supported on virtualized guests. Red Hat does not support using ASMLib when DIF/DIX is enabled. DIF/DIX is enabled/disabled at the storage device, which involves various layers up to (and including) the application. The method for activating the DIF on storage devices is device-dependent. For further information on the DIF/DIX feature, see What is DIF/DIX . (BZ#1649493)
[ "lpfc_enable_fc4_type=3" ]
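As a hedged illustration of the protocol option described above for DM Multipath, a configuration such as the following could be added to /etc/multipath.conf to exclude iSCSI paths while keeping Fibre Channel paths managed; the protocol strings are the examples given in this note, and the snippet is a sketch rather than a recommended configuration for any particular array.

blacklist {
    # Ignore paths that use the iSCSI transport
    protocol "scsi:iscsi"
}
blacklist_exceptions {
    # Continue to manage Fibre Channel paths
    protocol "scsi:fcp"
}

After editing the file, restart or reconfigure the multipathd service so that the new rules take effect.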
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.6_release_notes/new_features_storage
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_device_mapper_multipath/proc_providing-feedback-on-red-hat-documentation_configuring-device-mapper-multipath
Chapter 1. Introduction
Chapter 1. Introduction This guide describes the new features and changes to behavior in AMQ 7. If you have an existing AMQ 6 environment, this guide will help you to understand the differences in AMQ 7 so that you are prepared to configure new broker instances in AMQ 7. 1.1. When to Get Assistance Before Migrating If you plan to migrate a production environment, you should seek further assistance and guidance from a Red Hat support representative. You can open a support case at https://access.redhat.com/support/ . 1.2. Supported Migration Paths You can use this guide to understand the configuration changes that might be required to create an AMQ Broker 7 configuration to which existing OpenWire JMS clients can connect. This guide does not describe how to migrate the following features: The message store This guide provides information about configuration changes that will help you to configure a new AMQ 7 broker instance. Data, such as messages stored on the AMQ 6 broker, will not be migrated. Clients (other than OpenWire JMS clients) This guide helps you to configure an AMQ 7 broker instance to which existing OpenWire JMS clients can connect. For information about creating new clients that can connect to an AMQ 7 broker, see the client guides at the Red Hat Customer Portal . 1.3. Understanding the Important New Concepts in AMQ 7 Before learning about the specific configuration changes in each AMQ feature area, you should first understand the important conceptual differences between AMQ 6 and AMQ 7. There are several key architectural differences in AMQ 7. In addition, a new message addressing and routing model has been implemented in this release. 1.3.1. Architectural Changes in AMQ 7 AMQ 7 offers key architectural changes for how incoming network connections are made to the broker, the message store, and the way in which brokers are deployed. Transport Connector Changes for Incoming Connections AMQ 6 used different types of transport connectors, such as TCP (synchronous) and Java NIO (non-blocking). In AMQ 7, you no longer have to choose which transport type to use: all incoming network connections between entities in different virtual machines use Netty connections. Netty is a high-performance, low-level network library that allows network connections to be configured to use Java IO, Java NIO, TCP sockets, SSL/TLS, HTTP, and HTTPS. Message Store and Paging Changes The process by which the broker stores messages in memory and pages them to disk is different in AMQ 7. AMQ 6 used KahaDB for a message store, which consists of both a message journal for fast, sequential message storing, and an index to retrieve messages when needed. AMQ 7 contains its own built-in message store, which consists of an append-only message journal. It does not use an index. For more information about these changes, see Message Persistence . Broker Deployment Changes In AMQ Broker 7, broker deployment differs from AMQ 6 in the following ways: Deployment mechanism AMQ 6, by default, was deployed in Apache Karaf containers. AMQ Broker 7 is not. Deploying multiple brokers In AMQ 6, to deploy multiple brokers, you either had to deploy a collection of standalone brokers (which required you to install and configure each broker separately), or deploy a fabric of AMQ brokers using JBoss Fuse Fabric. In AMQ Broker 7, deploying multiple brokers involves installing AMQ Broker 7 once, and then on the same machine, creating as many broker instances as you require. AMQ Broker 7 is not intended to be deployed using fabrics.
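To make the instance-based deployment model just described concrete, the following is a minimal sketch of creating and starting two broker instances from a single AMQ Broker 7 installation. The installation directory, instance directories, user name, and password are placeholders, and the exact artemis create options you need depend on your environment.

# One installation, as many instances as you require
cd /opt/amq-broker-7
./bin/artemis create /var/opt/broker1 --user admin --password admin --allow-anonymous
./bin/artemis create /var/opt/broker2 --user admin --password admin --allow-anonymous --port-offset 100

# Each instance runs from its own directory
/var/opt/broker1/bin/artemis run
/var/opt/broker2/bin/artemis run

The --port-offset option on the second instance is one way to avoid port conflicts when both instances run on the same machine.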
1.3.2. Message Address Changes in AMQ 7 AMQ 7 introduces a new addressing and routing model to configure message routing semantics for any messaging protocol (or API in the case of JMS). However, this model does require you to configure address, queue, topic, and routing functionality differently than in AMQ 6. As part of your migration planning, you should be prepared to carefully review the new addressing model and its configuration elements. AMQ Broker 7 does not distinguish between JMS and non-JMS configuration. AMQ Broker 7 implements addresses, routing mechanisms, and queues. Messages are delivered by routing messages to queues based on addresses and routing mechanisms. Two new routing mechanisms- multicast and anycast- enable AMQ Broker 7 to route messages in standard messaging patterns. Multicast routing implements a publish-subscribe pattern in which all subscribers to an address receive messages sent to the address. Alternatively, anycast routing implements a point-to-point pattern in which only a single queue is attached to an address, and consumers subscribe to that queue to receive messages in round-robin order. Related Information For more information about the new addressing model in AMQ Broker 7, see Configuring addresses and queues in Configuring AMQ Broker . For more information about how message addressing is configured in AMQ Broker 7, see Message Addresses and Queues . 1.4. Reviewing New Features and Known Issues in AMQ 7 Before migrating to AMQ 7, you should understand the key new features, enhancements, and known issues. For a list, see the Release Notes for Red Hat AMQ Broker 7.8 . 1.5. Document conventions This document uses the following conventions for the sudo command and file paths. The sudo command In this document, sudo is used for any command that requires root privileges. You should always exercise caution when using sudo , as any changes can affect the entire system. For more information about using sudo , see The sudo Command . About the use of file paths in this document In this document, all file paths are valid for Linux, UNIX, and similar operating systems (for example, /home/... ). If you are using Microsoft Windows, you should use the equivalent Microsoft Windows paths (for example, C:\Users\... ).
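The anycast and multicast routing mechanisms described in Section 1.3.2 are declared per address in broker.xml. The following fragment is a hedged sketch only; the address and queue names are placeholders and the surrounding broker configuration is omitted.

<addresses>
   <!-- Anycast: point-to-point; consumers on the "orders" queue share messages round-robin -->
   <address name="orders">
      <anycast>
         <queue name="orders"/>
      </anycast>
   </address>
   <!-- Multicast: publish-subscribe; every subscriber queue bound to "news" receives a copy -->
   <address name="news">
      <multicast/>
   </address>
</addresses>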
null
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/migrating_to_red_hat_amq_7/introduction
Chapter 12. Troubleshooting the API infrastructure
Chapter 12. Troubleshooting the API infrastructure This guide aims to help you identify and fix the cause of issues with your API infrastructure. API Infrastructure is a lengthy and complex topic. However, at a minimum, you will have three moving parts in your Infrastructure: The API gateway 3scale The API Errors in any of these three elements result in API consumers being unable to access your API. However, it is difficult to find the component that caused the failure. This guide gives you some tips to troubleshoot your infrastructure to identify the problem. Use the following sections to identify and fix common issues that may occur: Common integration issues Handling API infrastructure issues Identifying API request issues Section 12.4, "ActiveDocs issues" Section 12.5, "Logging in NGINX" Section 12.6, "3scale error codes" 12.1. Common integration issues There are some evidences that can point to some very common issues with your integration with 3scale. These will vary depending on whether you are at the beginning of your API project, setting up your infrastructure, or are already live in production. 12.1.1. Integration issues The following sections attempt to outline some common issues you may see in the APIcast error log during the initial phases of your integration with 3scale: at the beginning using APIcast Hosted and prior to go-live, running the self-managed APIcast. 12.1.1.1. APIcast Hosted When you are first integrating your API with APIcast Hosted on the Service Integration screen, you might get some of the following errors shown on the page or returned by the test call you make to check for a successful integration. Test request failed: execution expired Check that your API is reachable from the public internet. APIcast Hosted cannot be used with private APIs. If you do not want to make your API publicly available to integrate with APIcast Hosted, you can set up a private secret between APIcast Hosted and your API to reject any calls not coming from the API gateway. The accepted format is protocol://address(:port) Remove any paths at the end of your APIs private base URL. You can add these in the "mapping rules" pattern or at the beginning of the API test GET request . Test request failed with HTTP code XXX 405 : Check that the endpoint accepts GET requests. APIcast only supports GET requests to test the integration. 403: Authentication parameters missing : If your API already has some authentication in place, APIcast will be unable to make a test request. 403: Authentication failed : If this is not the first service you have created with 3scale, check that you have created an application under the service with credentials to make the test request. If it is the first service you are integrating, ensure that you have not deleted the test account or application that you created on signup. 12.1.1.2. APIcast self-managed After you have successfully tested the integration with APIcast self-managed, you might want to host the API gateway yourself. Following are some errors you may encounter when you first install your self-managed gateway and call your API through it. upstream timed out (110: Connection timed out) while connecting to upstream Check that there are no firewalls or proxies between the API Gateway and the public Internet that would prevent your self-managed gateway from reaching 3scale. 
failed to get list of services: invalid status: 403 (Forbidden) 2018/06/04 08:04:49 [emerg] 14#14: [lua] configuration_loader.lua:134: init(): failed to load configuration, exiting (code 1) 2018/06/04 08:04:49 [warn] 22#22: *2 [lua] remote_v2.lua:163: call(): failed to get list of services: invalid status: 403 (Forbidden) url: https://example-admin.3scale.net/admin/api/services.json , context: ngx.timer ERROR: /opt/app-root/src/src/apicast/configuration_loader.lua:57: missing configuration Check that the Access Token that you used in the THREESCALE_PORTAL_ENDPOINT value is correct and that it has the Account Management API scope. Verify it with a curl command: curl -v "https://example-admin.3scale.net/admin/api/services.json?access_token=<YOUR_ACCESS_TOKEN>" It should return a 200 response with a JSON body. If it returns an error status code, check the response body for details. service not found for host apicast.example.com 2018/06/04 11:06:15 [warn] 23#23: *495 [lua] find_service.lua:24: find_service(): service not found for host apicast.example.com, client: 172.17.0.1, server: _, request: "GET / HTTP/1.1", host: "apicast.example.com" This error indicates that the Public Base URL has not been configured properly. You should ensure that the configured Public Base URL is the same that you use for the request to self-managed APIcast. After configuring the correct Public Base URL: Ensure that APIcast is configured for "production" (default configuration for standalone APIcast if not overridden with THREESCALE_DEPLOYMENT_ENV variable). Ensure that you promote the configuration to production. Restart APIcast, if you have not configured auto-reloading of configuration using APICAST_CONFIGURATION_CACHE and APICAST_CONFIGURATION_LOADER environment variables. Following are some other symptoms that may point to an incorrect APIcast self-managed integration: Mapping rules not matched / Double counting of API calls : Depending on the way you have defined the mapping between methods and actual URL endpoints on your API, you might find that sometimes methods either don't get matched or get incremented more than once per request. To troubleshoot this, make a test call to your API with the 3scale debug header . This will return a list of all the methods that have been matched by the API call. Authentication parameters not found : Ensure you are sending the parameters to the correct location as specified in the Service Integration screen. If you do not send credentials as headers, the credentials must be sent as query parameters for GET requests and body parameters for all other HTTP methods. Use the 3scale debug header to double-check the credentials that are being read from the request by the API gateway. 12.1.2. Production issues It is rare to run into issues with your API gateway after you have fully tested your setup and have been live with your API for a while. However, here are some of the issues you might encounter in a live production environment. 12.1.2.1. Availability issues Availability issues are normally characterised by upstream timed out errors in your nginx error.log; example: upstream timed out (110: Connection timed out) while connecting to upstream, client: X.X.X.X, server: api.example.com, request: "GET /RESOURCE?CREDENTIALS HTTP/1.1", upstream: "http://Y.Y.Y.Y:80/RESOURCE?CREDENTIALS", host: "api.example.com" If you are experiencing intermittent 3scale availability issues, the following may be the reasons for this: You are resolving to an old 3scale IP that is no longer in use.
The latest version of the API gateway configuration files defines 3scale as a variable to force IP resolution each time. For a quick fix, reload your NGINX instance. For a long-term fix, ensure that instead of defining the 3scale backend in an upstream block, you define it as a variable within each server block; example: server { # Enabling the Lua code cache is strongly encouraged for production use. Here it is enabled . . . set $threescale_backend "https://su1.3scale.net:443"; When you refer to it: location = /threescale_authrep { internal; set $provider_key "YOUR_PROVIDER_KEY"; proxy_pass $threescale_backend/transactions/authrep.xml?provider_key=$provider_key&service_id=$service_id&$usage&$credentials&log%5Bcode%5D=$arg_code&log%5Brequest%5D=$arg_req&log%5Bresponse%5D=$arg_resp; } You are missing some 3scale IPs from your whitelist. Following is the current list of IPs that 3scale resolves to: 75.101.142.93 174.129.235.69 184.73.197.122 50.16.225.117 54.83.62.94 54.83.62.186 54.83.63.187 54.235.143.255 The above issues refer to problems with perceived 3scale availability. However, you might encounter similar issues with your API availability from the API gateway if your API is behind an AWS ELB. This is because NGINX, by default, does DNS resolution at start-up time and then caches the IP addresses. However, ELBs do not ensure static IP addresses and these might change frequently. Whenever the ELB changes to a different IP, NGINX is unable to reach it. The solution for this is similar to the above fix for forcing runtime DNS resolution. Set a specific DNS resolver such as Google DNS, by adding this line at the top of the http section: resolver 8.8.8.8 8.8.4.4; . Set your API base URL as a variable anywhere near the top of the server section. set $api_base "http://api.example.com:80"; Inside the location / section, find the proxy_pass line and replace it with proxy_pass $api_base; .
connected > GET / HTTP/1.1 > User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3 > Host: localhost > Accept: */* > < HTTP/1.1 500 Internal Server Error < Server: openresty/1.5.12.1 < Date: Thu, 04 Feb 2016 10:22:25 GMT < Content-Type: text/html < Content-Length: 199 < Connection: close < <head><title>500 Internal Server Error</title></head> <center><h1>500 Internal Server Error</h1></center> <hr><center>openresty/1.5.12.1</center> * Closing connection #0 You can see the nginx error.log to know the cause, such as: 2016/02/04 11:22:25 [error] 8980#0: *1 lua entry thread aborted: runtime error: /home/pili/NGINX/troubleshooting/nginx.lua:66: bad argument #3 to '_newindex' (number expected, got nil) stack traceback: coroutine 0: [C]: in function '_newindex' /home/pili/NGINX/troubleshooting/nginx.lua:66: in function 'error_authorization_failed' /home/pili/NGINX/troubleshooting/nginx.lua:330: in function 'authrep' /home/pili/NGINX/troubleshooting/nginx.lua:283: in function 'authorize' /home/pili/NGINX/troubleshooting/nginx.lua:392: in function while sending to client, client: 127.0.0.1, server: api-2445581381726.staging.apicast.io, request: "GET / HTTP/1.1", host: "localhost" In the access.log this will look like the following: 127.0.0.1 - - [04/Feb/2016:11:22:25 +0100] "GET / HTTP/1.1" 500 199 "-" "curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3" The above section gives you a an overview of the most common, well-known issues that you might encounter at any stage of your 3scale journey. If all of these have been checked and you are still unable to find the cause and solution for your issue, you should proceed to the more detailed section on Identifying API request issues . Start at your API and work your way back to the client in order to try to identify the point of failure. 12.2. Handling API infrastructure issues If you are experiencing failures when connecting to a server, whether that is the API gateway, 3scale, or your API, the following troubleshooting steps should be your first port of call: 12.2.1. Can we connect? Use telnet to check the basic TCP/IP connectivity telnet api.example.com 443 Success telnet echo-api.3scale.net 80 Trying 52.21.167.109... Connected to tf-lb-i2t5pgt2cfdnbdfh2c6qqoartm-829217110.us-east-1.elb.amazonaws.com. Escape character is '^]'. Connection closed by foreign host. Failure telnet su1.3scale.net 443 Trying 174.129.235.69... telnet: Unable to connect to remote host: Connection timed out 12.2.2. Server connection issues Try to connect to the same server from different network locations, devices, and directions. For example, if your client is unable to reach your API, try to connect to your API from a machine that should have access such as the API gateway. If any of the attempted connections succeed, you can rule out any problems with the actual server and concentrate your troubleshooting on the network between them, as this is where the problem will most likely be. 12.2.3. Is it a DNS issue? Try to connect to the server by using its IP address instead of its hostname e.g. telnet 94.125.104.17 80 instead of telnet apis.io 80 This will rule out any problems with the DNS. You can get the IP address for a server using dig for example for 3scale dig su1.3scale.net or dig any su1.3scale.net if you suspect there may be multiple IPs that a host may resolve to. NB: Some hosts block `dig any` 12.2.4. Is it an SSL issue? 
You can use OpenSSL to test: Secure connections to a host or IP, such as from the shell prompt openssl s_client -connect su1.3scale.net:443 Output: CONNECTED(00000003) depth=1 C = US, O = GeoTrust Inc., CN = GeoTrust SSL CA - G3 verify error:num=20:unable to get local issuer certificate --- Certificate chain 0 s:/C=ES/ST=Barcelona/L=Barcelona/O=3scale Networks, S.L./OU=IT/CN=*.3scale.net i:/C=US/O=GeoTrust Inc./CN=GeoTrust SSL CA - G3 1 s:/C=US/O=GeoTrust Inc./CN=GeoTrust SSL CA - G3 i:/C=US/O=GeoTrust Inc./CN=GeoTrust Global CA --- Server certificate -----BEGIN CERTIFICATE----- MIIE8zCCA9ugAwIBAgIQcz2Y9JNxH7f2zpOT0DajUjANBgkqhkiG9w0BAQsFADBE ... TRUNCATED ... 3FZigX+OpWLVRjYsr0kZzX+HCerYMwc= -----END CERTIFICATE----- subject=/C=ES/ST=Barcelona/L=Barcelona/O=3scale Networks, S.L./OU=IT/CN=*.3scale.net issuer=/C=US/O=GeoTrust Inc./CN=GeoTrust SSL CA - G3 --- Acceptable client certificate CA names /C=ES/ST=Barcelona/L=Barcelona/O=3scale Networks, S.L./OU=IT/CN=*.3scale.net /C=US/O=GeoTrust Inc./CN=GeoTrust SSL CA - G3 Client Certificate Types: RSA sign, DSA sign, ECDSA sign Requested Signature Algorithms: RSA+SHA512:DSA+SHA512:ECDSA+SHA512:RSA+SHA384:DSA+SHA384:ECDSA+SHA384:RSA+SHA256:DSA+SHA256:ECDSA+SHA256:RSA+SHA224:DSA+SHA224:ECDSA+SHA224:RSA+SHA1:DSA+SHA1:ECDSA+SHA1:RSA+MD5 Shared Requested Signature Algorithms: RSA+SHA512:DSA+SHA512:ECDSA+SHA512:RSA+SHA384:DSA+SHA384:ECDSA+SHA384:RSA+SHA256:DSA+SHA256:ECDSA+SHA256:RSA+SHA224:DSA+SHA224:ECDSA+SHA224:RSA+SHA1:DSA+SHA1:ECDSA+SHA1 Peer signing digest: SHA512 Server Temp Key: ECDH, P-256, 256 bits --- SSL handshake has read 3281 bytes and written 499 bytes --- New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-GCM-SHA384 Server public key is 2048 bit Secure Renegotiation IS supported Compression: NONE Expansion: NONE No ALPN negotiated SSL-Session: Protocol : TLSv1.2 Cipher : ECDHE-RSA-AES256-GCM-SHA384 Session-ID: A85EFD61D3BFD6C27A979E95E66DA3EC8F2E7B3007C0166A9BCBDA5DCA5477B8 Session-ID-ctx: Master-Key: F7E898F1D996B91D13090AE9D5624FF19DFE645D5DEEE2D595D1B6F79B1875CF935B3A4F6ECCA7A6D5EF852AE3D4108B Key-Arg : None PSK identity: None PSK identity hint: None SRP username: None TLS session ticket lifetime hint: 300 (seconds) TLS session ticket: 0000 - a8 8b 6c ac 9c 3c 60 78-2c 5c 8a de 22 88 06 15 ..l..<`x,\.."... 0010 - eb be 26 6c e6 7b 43 cc-ae 9b c0 27 6c b7 d9 13 ..&l.{C....'l... 0020 - 84 e4 0d d5 f1 ff 4c 08-7a 09 10 17 f3 00 45 2c ......L.z.....E, 0030 - 1b e7 47 0c de dc 32 eb-ca d7 e9 26 33 26 8b 8e ..G...2....&3&.. 0040 - 0a 86 ee f0 a9 f7 ad 8a-f7 b8 7b bc 8c c2 77 7b ..........{...w{ 0050 - ae b7 57 a8 40 1b 75 c8-25 4f eb df b0 2b f6 b7 [email protected].%O...+.. 0060 - 8b 8e fc 93 e4 be d6 60-0f 0f 20 f1 0a f2 cf 46 .......`.. ....F 0070 - b0 e6 a1 e5 31 73 c2 f5-d4 2f 57 d1 b0 8e 51 cc ....1s.../W...Q. 0080 - ff dd 6e 4f 35 e4 2c 12-6c a2 34 26 84 b3 0c 19 ..nO5.,.l.4&.... 0090 - 8a eb 80 e0 4d 45 f8 4a-75 8e a2 06 70 84 de 10 ....ME.Ju...p... 
Start Time: 1454932598 Timeout : 300 (sec) Verify return code: 20 (unable to get local issuer certificate) --- SSLv3 support (NOT supported by 3scale) openssl s_client -ssl3 -connect su.3scale.net:443 Output CONNECTED(00000003) 140735196860496:error:14094410:SSL routines:ssl3_read_bytes:sslv3 alert handshake failure:s3_pkt.c:1456:SSL alert number 40 140735196860496:error:1409E0E5:SSL routines:ssl3_write_bytes:ssl handshake failure:s3_pkt.c:644: --- no peer certificate available --- No client certificate CA names sent --- SSL handshake has read 7 bytes and written 0 bytes --- New, (NONE), Cipher is (NONE) Secure Renegotiation IS NOT supported Compression: NONE Expansion: NONE No ALPN negotiated SSL-Session: Protocol : SSLv3 Cipher : 0000 Session-ID: Session-ID-ctx: Master-Key: Key-Arg : None PSK identity: None PSK identity hint: None SRP username: None Start Time: 1454932872 Timeout : 7200 (sec) Verify return code: 0 (ok) --- For more details, see the OpenSSL man pages . 12.3. Identifying API request issues To identify where an issue with requests to your API might lie, go through the following checks. 12.3.1. API To confirm that the API is up and responding to requests, make the same request directly to your API (not going through the API gateway). You should ensure that you are sending the same parameters and headers as the request that goes through the API gateway. If you are unsure of the exact request that is failing, capture the traffic between the API gateway and your API. If the call succeeds, you can rule out any problems with the API, otherwise you should troubleshoot your API further. 12.3.2. API Gateway > API To rule out any network issues between the API gateway and the API, make the same call as before - directly to your API - from your API gateway server. If the call succeeds, you can move on to troubleshooting the API gateway itself. 12.3.3. API gateway There are a number of steps to go through to check that the API gateway is working correctly. 12.3.3.1. Is the API gateway up and running? Log in to the machine where the gateway is running. If this fails, your gateway server might be down. After you have logged in, check that the NGINX process is running. For this, run ps ax | grep nginx or htop . NGINX is running if you see nginx master process and nginx worker process in the list. 12.3.3.2. Are there any errors in the gateway logs? Following are some common errors you might see in the gateway logs, for example in error.log: API gateway can't connect to API upstream timed out (110: Connection timed out) while connecting to upstream, client: X.X.X.X, server: api.example.com, request: "GET /RESOURCE?CREDENTIALS HTTP/1.1", upstream: "http://Y.Y.Y.Y:80/RESOURCE?CREDENTIALS", host: "api.example.com" API gateway cannot connect to 3scale 2015/11/20 11:33:51 [error] 3578#0: *1 upstream timed out (110: Connection timed out) while connecting to upstream, client: 127.0.0.1, server: , request: "GET /api/activities.json?user_key=USER_KEY HTTP/1.1", subrequest: "/threescale_authrep", upstream: "https://54.83.62.186:443/transactions/authrep.xml?provider_key=YOUR_PROVIDER_KEY&service_id=SERVICE_ID&usage[hits]=1&user_key=USER_KEY&log%5Bcode%5D=", host: "localhost" 12.3.4. API gateway > 3scale API Management Once you are sure the API gateway is running correctly, the step is troubleshooting the connection between the API gateway and 3scale. 12.3.4.1. Can the API gateway reach 3scale API Management? 
If you are using NGINX as your API gateway, the following message displays in the nginx error logs when the gateway is unable to contact 3scale. 2015/11/20 11:33:51 [error] 3578#0: *1 upstream timed out (110: Connection timed out) while connecting to upstream, client: 127.0.0.1, server: , request: "GET /api/activities.json?user_key=USER_KEY HTTP/1.1", subrequest: "/threescale_authrep", upstream: "https://54.83.62.186:443/transactions/authrep.xml?provider_key=YOUR_PROVIDER_KEY&service_id=SERVICE_ID&usage[hits]=1&user_key=USER_KEY&log%5Bcode%5D=", host: "localhost" Here, note the upstream value. This IP corresponds to one of the IPs that the 3scale product resolves to. This implies that there is a problem reaching 3scale. You can do a reverse DNS lookup to check the domain for an IP by calling nslookup . Note that just because the API gateway is unable to reach 3scale, it does not mean that 3scale is down. One of the most common reasons for this would be firewall rules preventing the API gateway from connecting to 3scale. There may be network issues between the gateway and 3scale that could cause connections to time out. In this case, you should go through the steps in troubleshooting generic connectivity issues to identify where the problem lies. To rule out networking issues, use traceroute or MTR to check the routing and packet transmission. You can also run the same command from a machine that is able to connect to 3scale and your API gateway and compare the output. Additionally, to see the traffic that is being sent between your API gateway and 3scale, you can use tcpdump as long as you temporarily switch to using the HTTP endpoint for the 3scale product ( su1.3scale.net ). 12.3.4.2. Is the API gateway resolving 3scale API Management addresses correctly? Ensure you have the resolver directive added to your nginx.conf. For example, in nginx.conf: http { lua_shared_dict api_keys 10m; server_names_hash_bucket_size 128; lua_package_path ";;$prefix/?.lua;"; init_by_lua 'math.randomseed(ngx.time()) ; cjson = require("cjson")'; resolver 8.8.8.8 8.8.4.4; You can substitute the Google DNS (8.8.8.8 and 8.8.4.4) with your preferred DNS. To check DNS resolution from your API gateway, call nslookup as follows with the specified resolver IP: nslookup su1.3scale.net 8.8.8.8 ;; connection timed out; no servers could be reached The above example shows the response returned if Google DNS cannot be reached. If this is the case, you must update the resolver IPs. You might also see the following alert in your nginx error.log: 2016/05/09 14:15:15 [alert] 9391#0: send() failed (1: Operation not permitted) while resolving, resolver: 8.8.8.8:53 Finally, run dig any su1.3scale.net to see the IP addresses currently in operation for the 3scale Service Management API. Note that this is not the entire range of IP addresses that might be used by 3scale. Some may be swapped in and out for capacity reasons. Additionally, you may add more domain names for the 3scale service in the future. For this you should always test against the specific addresses that are supplied to you during integration, if applicable. 12.3.4.3. Is the API gateway calling 3scale API Management correctly?
If you want to check the request your API gateway is making to 3scale for troubleshooting purposes only you can add the following snippet to the 3scale authrep location in nginx.conf ( /threescale_authrep for API Key and App\_id authentication modes): body_filter_by_lua_block{ if ngx.req.get_headers()["X-3scale-debug"] == ngx.var.provider_key then local resp = "" ngx.ctx.buffered = (ngx.ctx.buffered or "") .. string.sub(ngx.arg[1], 1, 1000) if ngx.arg[2] then resp = ngx.ctx.buffered end ngx.log(0, ngx.req.raw_header()) ngx.log(0, resp) end } This snippet will add the following extra logging to the nginx error.log when the X-3scale-debug header is sent, e.g. curl -v -H 'X-3scale-debug: YOUR_PROVIDER_KEY' -X GET "https://726e3b99.ngrok.com/api/contacts.json?access_token=7c6f24f5" This will produce the following log entries: 2016/05/05 14:24:33 [] 7238#0: *57 [lua] body_filter_by_lua:7: GET /api/contacts.json?access_token=7c6f24f5 HTTP/1.1 Host: 726e3b99.ngrok.io User-Agent: curl/7.43.0 Accept: */* X-Forwarded-Proto: https X-Forwarded-For: 2.139.235.79 while sending to client, client: 127.0.0.1, server: pili-virtualbox, request: "GET /api/contacts.json?access_token=7c6f24f5 HTTP/1.1", subrequest: "/threescale_authrep", upstream: "https://54.83.62.94:443/transactions/oauth_authrep.xml?provider_key=REDACTED&service_id=REDACTED&usage[hits]=1&access_token=7c6f24f5", host: "726e3b99.ngrok.io" 2016/05/05 14:24:33 [] 7238#0: *57 [lua] body_filter_by_lua:8: <?xml version="1.0" encoding="UTF-8"?><error code="access_token_invalid">access_token "7c6f24f5" is invalid: expired or never defined</error> while sending to client, client: 127.0.0.1, server: pili-virtualbox, request: "GET /api/contacts.json?access_token=7c6f24f5 HTTP/1.1", subrequest: "/threescale_authrep", upstream: "https://54.83.62.94:443/transactions/oauth_authrep.xml?provider_key=REDACTED&service_id=REDACTED&usage[hits]=1&access_token=7c6f24f5", host: "726e3b99.ngrok.io" The first entry ( 2016/05/05 14:24:33 [] 7238#0: *57 [lua] body_filter_by_lua:7: ) prints out the request headers sent to 3scale, in this case: Host, User-Agent, Accept, X-Forwarded-Proto and X-Forwarded-For. The second entry ( 2016/05/05 14:24:33 [] 7238#0: *57 [lua] body_filter_by_lua:8: ) prints out the response from 3scale, in this case: <error code="access_token_invalid">access_token "7c6f24f5" is invalid: expired or never defined</error> . Both will print out the original request ( GET /api/contacts.json?access_token=7c6f24f5 ) and subrequest location ( /threescale_authrep ) as well as the upstream request ( upstream: "https://54.83.62.94:443/transactions/threescale_authrep.xml?provider_key=REDACTED&service_id=REDACTED&usage[hits]=1&access_token=7c6f24f5" .) This last value allows you to see which of the 3scale IPs have been resolved and also the exact request made to 3scale. 12.3.5. 3scale API Management 12.3.5.1. Is 3scale API Management returning an error? It is also possible that 3scale is available but is returning an error to your API gateway which would prevent calls going through to your API. Try to make the authorization call directly in 3scale and check the response. If you get an error, check the #troubleshooting-api-error-codes[Error Codes] section to see what the issue is. 12.3.5.2. 
Use the 3scale API Management debug headers You can also turn on the 3scale debug headers by making a call to your API with the X-3scale-debug header, example: curl -v -X GET "https://api.example.com/endpoint?user_key" X-3scale-debug: YOUR_SERVICE_TOKEN This will return the following headers with the API response: X-3scale-matched-rules: /, /api/contacts.json < X-3scale-credentials: access_token=TOKEN_VALUE < X-3scale-usage: usage[hits]=2 < X-3scale-hostname: HOSTNAME_VALUE 12.3.5.3. Check the integration errors You can also check the integration errors on your Admin Portal to check for any issues reporting traffic to 3scale. See https://YOUR_DOMAIN-admin.3scale.net/apiconfig/errors. One of the reasons for integration errors can be sending credentials in the headers with underscores_in_headers directive not enabled in server block. 12.3.6. Client API gateway 12.3.6.1. Is the API gateway reachable from the public internet? Try directing a browser to the IP address (or domain name) of your gateway server. If this fails, ensure that you have opened the firewall on the relevant ports. 12.3.6.2. Is the API gateway reachable by the client? If possible, try to connect to the API gateway from the client using one of the methods outlined earlier (telnet, curl, etc.) If the connection fails, the problem lies in the network between the two. Otherwise, you should move on to troubleshooting the client making the calls to the API. 12.3.7. Client 12.3.7.1. Test the same call using a different client If a request is not returning the expected result, test with a different HTTP client. For example, if you are calling an API with a Java HTTP client and you see something wrong, cross-check with cURL. You can also call the API through a proxy between the client and the gateway to capture the exact parameters and headers being sent by the client. 12.3.7.2. Inspect the traffic sent by client Use a tool like Wireshark to see the requests being made by the client. This will allow you to identify if the client is making calls to the API and the details of the request. 12.4. ActiveDocs issues Sometimes calls that work when you call the API from the command line fail when going through ActiveDocs. To enable ActiveDocs calls to work, we send these out through a proxy on our side. This proxy will add certain headers that can sometimes cause issues on the API if they are not expected. To identify if this is the case, try the following steps: 12.4.1. Use petstore.swagger.io Swagger provides a hosted swagger-ui at petstore.swagger.io which you can use to test your Swagger spec and API going through the latest version of swagger-ui. If both swagger-ui and ActiveDocs fail in the same way, you can rule out any issues with ActiveDocs or the ActiveDocs proxy and focus the troubleshooting on your own spec. Alternatively, you can check the swagger-ui GitHub repo for any known issues with the current version of swagger-ui. 12.4.2. Check that firewall allows connections from ActiveDocs proxy We recommend to not whitelist IP address for clients using your API. The ActiveDocs proxy uses floating IP addresses for high availability and there is currently no mechanism to notify of any changes to these IPs. 12.4.3. Call the API with incorrect credentials One way to identify whether the ActiveDocs proxy is working correctly is to call your API with invalid credentials. This will help you to confirm or rule out any problems with both the ActiveDocs proxy and your API gateway. 
If you get a 403 code back from the API call (or from the code you have configured on your gateway for invalid credentials), the problem lies with your API because the calls are reaching your gateway. 12.4.4. Compare calls To identify any differences in headers and parameters between calls made from ActiveDocs versus outside of ActiveDocs, run calls through services such as APItools on-premise or Runscope. This will allow you to inspect and compare your HTTP calls before sending them to your API. You will then be able to identify potential headers and/or parameters in the request that could cause issues. 12.5. Logging in NGINX For a comprehensive guide on this, see the NGINX Logging and Monitoring docs. 12.5.1. Enabling debugging log To find out more about enabling the debugging log, see the NGINX debugging log documentation . 12.6. 3scale error codes To double-check the error codes that are returned by the 3scale Service Management API endpoints, see the 3scale API Documentation page by following these steps: Click the question mark (?) icon, which is in the upper-right corner of the Admin Portal. Choose 3scale API Docs . The following is a list of HTTP response codes returned by 3scale, and the conditions under which they are returned: 400: Bad request. This can be because of: Invalid encoding Payload too large Content type is invalid (for POST calls). Valid values for the Content-Type header are: application/x-www-form-urlencoded , multipart/form-data , or empty header. 403: Credentials are not valid Sending body data to 3scale for a GET request 404: Non-existent entity referenced, such as applications, metrics, etc. 409: Usage limits exceeded Application is not active Application key is invalid or missing (for app_id/app_key authentication method) Referrer is not allowed or missing (when referrer filters are enabled and required) 422: Missing required parameters Most of these error responses will also contain an XML body with a machine-readable error category and a human-readable explanation. When using the standard API gateway configuration, any return code different from 200 provided by 3scale can result in a response to the client with one of the following codes: 403 404
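The individual checks in this chapter can be combined into a quick triage pass. The following is a minimal sketch that simply repeats the commands discussed above; the gateway hostname, endpoint, user key, and provider key are placeholders that you must replace.

#!/bin/sh
# 1. Basic reachability of the gateway and of 3scale
telnet apicast.example.com 443 </dev/null
dig any su1.3scale.net
# 2. TLS handshake with the 3scale Service Management endpoint
openssl s_client -connect su1.3scale.net:443 </dev/null
# 3. End-to-end call through the gateway with the 3scale debug header
curl -v -H "X-3scale-debug: YOUR_PROVIDER_KEY" "https://apicast.example.com/endpoint?user_key=USER_KEY"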
[ "2018/06/04 08:04:49 [emerg] 14#14: [lua] configuration_loader.lua:134: init(): failed to load configuration, exiting (code 1) 2018/06/04 08:04:49 [warn] 22#22: *2 [lua] remote_v2.lua:163: call(): failed to get list of services: invalid status: 403 (Forbidden) url: https://example-admin.3scale.net/admin/api/services.json , context: ngx.timer ERROR: /opt/app-root/src/src/apicast/configuration_loader.lua:57: missing configuration", "2018/06/04 11:06:15 [warn] 23#23: *495 [lua] find_service.lua:24: find_service(): service not found for host apicast.example.com, client: 172.17.0.1, server: _, request: \"GET / HTTP/1.1\", host: \"apicast.example.com\"", "upstream timed out (110: Connection timed out) while connecting to upstream, client: X.X.X.X, server: api.example.com, request: \"GET /RESOURCE?CREDENTIALS HTTP/1.1\", upstream: \"http://Y.Y.Y.Y:80/RESOURCE?CREDENTIALS\", host: \"api.example.com\"", "server { # Enabling the Lua code cache is strongly encouraged for production use. Here it is enabled . . . set USDthreescale_backend \"https://su1.3scale.net:443\";", "location = /threescale_authrep { internal; set USDprovider_key \"YOUR_PROVIDER_KEY\"; proxy_pass USDthreescale_backend/transactions/authrep.xml?provider_key=USDprovider_key&service_id=USDservice_id&USDusage&USDcredentials&log%5Bcode%5D=USDarg_code&log%5Brequest%5D=USDarg_req&log%5Bresponse%5D=USDarg_resp; }", "curl -v -X GET \"http://localhost/\" * About to connect() to localhost port 80 (#0) * Trying 127.0.0.1... connected > GET / HTTP/1.1 > User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3 > Host: localhost > Accept: */* > < HTTP/1.1 500 Internal Server Error < Server: openresty/1.5.12.1 < Date: Thu, 04 Feb 2016 10:22:25 GMT < Content-Type: text/html < Content-Length: 199 < Connection: close < <head><title>500 Internal Server Error</title></head> <center><h1>500 Internal Server Error</h1></center> <hr><center>openresty/1.5.12.1</center> * Closing connection #0", "2016/02/04 11:22:25 [error] 8980#0: *1 lua entry thread aborted: runtime error: /home/pili/NGINX/troubleshooting/nginx.lua:66: bad argument #3 to '_newindex' (number expected, got nil) stack traceback: coroutine 0: [C]: in function '_newindex' /home/pili/NGINX/troubleshooting/nginx.lua:66: in function 'error_authorization_failed' /home/pili/NGINX/troubleshooting/nginx.lua:330: in function 'authrep' /home/pili/NGINX/troubleshooting/nginx.lua:283: in function 'authorize' /home/pili/NGINX/troubleshooting/nginx.lua:392: in function while sending to client, client: 127.0.0.1, server: api-2445581381726.staging.apicast.io, request: \"GET / HTTP/1.1\", host: \"localhost\"", "127.0.0.1 - - [04/Feb/2016:11:22:25 +0100] \"GET / HTTP/1.1\" 500 199 \"-\" \"curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3\"", "telnet echo-api.3scale.net 80 Trying 52.21.167.109 Connected to tf-lb-i2t5pgt2cfdnbdfh2c6qqoartm-829217110.us-east-1.elb.amazonaws.com. Escape character is '^]'. 
Connection closed by foreign host.", "telnet su1.3scale.net 443 Trying 174.129.235.69 telnet: Unable to connect to remote host: Connection timed out", "CONNECTED(00000003) depth=1 C = US, O = GeoTrust Inc., CN = GeoTrust SSL CA - G3 verify error:num=20:unable to get local issuer certificate --- Certificate chain 0 s:/C=ES/ST=Barcelona/L=Barcelona/O=3scale Networks, S.L./OU=IT/CN=*.3scale.net i:/C=US/O=GeoTrust Inc./CN=GeoTrust SSL CA - G3 1 s:/C=US/O=GeoTrust Inc./CN=GeoTrust SSL CA - G3 i:/C=US/O=GeoTrust Inc./CN=GeoTrust Global CA --- Server certificate -----BEGIN CERTIFICATE----- MIIE8zCCA9ugAwIBAgIQcz2Y9JNxH7f2zpOT0DajUjANBgkqhkiG9w0BAQsFADBE TRUNCATED 3FZigX+OpWLVRjYsr0kZzX+HCerYMwc= -----END CERTIFICATE----- subject=/C=ES/ST=Barcelona/L=Barcelona/O=3scale Networks, S.L./OU=IT/CN=*.3scale.net issuer=/C=US/O=GeoTrust Inc./CN=GeoTrust SSL CA - G3 --- Acceptable client certificate CA names /C=ES/ST=Barcelona/L=Barcelona/O=3scale Networks, S.L./OU=IT/CN=*.3scale.net /C=US/O=GeoTrust Inc./CN=GeoTrust SSL CA - G3 Client Certificate Types: RSA sign, DSA sign, ECDSA sign Requested Signature Algorithms: RSA+SHA512:DSA+SHA512:ECDSA+SHA512:RSA+SHA384:DSA+SHA384:ECDSA+SHA384:RSA+SHA256:DSA+SHA256:ECDSA+SHA256:RSA+SHA224:DSA+SHA224:ECDSA+SHA224:RSA+SHA1:DSA+SHA1:ECDSA+SHA1:RSA+MD5 Shared Requested Signature Algorithms: RSA+SHA512:DSA+SHA512:ECDSA+SHA512:RSA+SHA384:DSA+SHA384:ECDSA+SHA384:RSA+SHA256:DSA+SHA256:ECDSA+SHA256:RSA+SHA224:DSA+SHA224:ECDSA+SHA224:RSA+SHA1:DSA+SHA1:ECDSA+SHA1 Peer signing digest: SHA512 Server Temp Key: ECDH, P-256, 256 bits --- SSL handshake has read 3281 bytes and written 499 bytes --- New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-GCM-SHA384 Server public key is 2048 bit Secure Renegotiation IS supported Compression: NONE Expansion: NONE No ALPN negotiated SSL-Session: Protocol : TLSv1.2 Cipher : ECDHE-RSA-AES256-GCM-SHA384 Session-ID: A85EFD61D3BFD6C27A979E95E66DA3EC8F2E7B3007C0166A9BCBDA5DCA5477B8 Session-ID-ctx: Master-Key: F7E898F1D996B91D13090AE9D5624FF19DFE645D5DEEE2D595D1B6F79B1875CF935B3A4F6ECCA7A6D5EF852AE3D4108B Key-Arg : None PSK identity: None PSK identity hint: None SRP username: None TLS session ticket lifetime hint: 300 (seconds) TLS session ticket: 0000 - a8 8b 6c ac 9c 3c 60 78-2c 5c 8a de 22 88 06 15 ..l..<`x,\\..\" 0010 - eb be 26 6c e6 7b 43 cc-ae 9b c0 27 6c b7 d9 13 ..&l.{C....'l 0020 - 84 e4 0d d5 f1 ff 4c 08-7a 09 10 17 f3 00 45 2c ......L.z.....E, 0030 - 1b e7 47 0c de dc 32 eb-ca d7 e9 26 33 26 8b 8e ..G...2....&3&.. 0040 - 0a 86 ee f0 a9 f7 ad 8a-f7 b8 7b bc 8c c2 77 7b ..........{...w{ 0050 - ae b7 57 a8 40 1b 75 c8-25 4f eb df b0 2b f6 b7 [email protected].%O...+.. 0060 - 8b 8e fc 93 e4 be d6 60-0f 0f 20 f1 0a f2 cf 46 .......`.. ....F 0070 - b0 e6 a1 e5 31 73 c2 f5-d4 2f 57 d1 b0 8e 51 cc ....1s.../W...Q. 0080 - ff dd 6e 4f 35 e4 2c 12-6c a2 34 26 84 b3 0c 19 ..nO5.,.l.4&. 
0090 - 8a eb 80 e0 4d 45 f8 4a-75 8e a2 06 70 84 de 10 ....ME.Ju...p Start Time: 1454932598 Timeout : 300 (sec) Verify return code: 20 (unable to get local issuer certificate) ---", "CONNECTED(00000003) 140735196860496:error:14094410:SSL routines:ssl3_read_bytes:sslv3 alert handshake failure:s3_pkt.c:1456:SSL alert number 40 140735196860496:error:1409E0E5:SSL routines:ssl3_write_bytes:ssl handshake failure:s3_pkt.c:644: --- no peer certificate available --- No client certificate CA names sent --- SSL handshake has read 7 bytes and written 0 bytes --- New, (NONE), Cipher is (NONE) Secure Renegotiation IS NOT supported Compression: NONE Expansion: NONE No ALPN negotiated SSL-Session: Protocol : SSLv3 Cipher : 0000 Session-ID: Session-ID-ctx: Master-Key: Key-Arg : None PSK identity: None PSK identity hint: None SRP username: None Start Time: 1454932872 Timeout : 7200 (sec) Verify return code: 0 (ok) ---", "upstream timed out (110: Connection timed out) while connecting to upstream, client: X.X.X.X, server: api.example.com, request: \"GET /RESOURCE?CREDENTIALS HTTP/1.1\", upstream: \"http://Y.Y.Y.Y:80/RESOURCE?CREDENTIALS\", host: \"api.example.com\"", "2015/11/20 11:33:51 [error] 3578#0: *1 upstream timed out (110: Connection timed out) while connecting to upstream, client: 127.0.0.1, server: , request: \"GET /api/activities.json?user_key=USER_KEY HTTP/1.1\", subrequest: \"/threescale_authrep\", upstream: \"https://54.83.62.186:443/transactions/authrep.xml?provider_key=YOUR_PROVIDER_KEY&service_id=SERVICE_ID&usage[hits]=1&user_key=USER_KEY&log%5Bcode%5D=\", host: \"localhost\"", "2015/11/20 11:33:51 [error] 3578#0: *1 upstream timed out (110: Connection timed out) while connecting to upstream, client: 127.0.0.1, server: , request: \"GET /api/activities.json?user_key=USER_KEY HTTP/1.1\", subrequest: \"/threescale_authrep\", upstream: \"https://54.83.62.186:443/transactions/authrep.xml?provider_key=YOUR_PROVIDER_KEY&service_id=SERVICE_ID&usage[hits]=1&user_key=USER_KEY&log%5Bcode%5D=\", host: \"localhost\"", "http { lua_shared_dict api_keys 10m; server_names_hash_bucket_size 128; lua_package_path \";;USDprefix/?.lua;\"; init_by_lua 'math.randomseed(ngx.time()) ; cjson = require(\"cjson\")'; resolver 8.8.8.8 8.8.4.4;", "nslookup su1.3scale.net 8.8.8.8 ;; connection timed out; no servers could be reached", "2016/05/09 14:15:15 [alert] 9391#0: send() failed (1: Operation not permitted) while resolving, resolver: 8.8.8.8:53", "body_filter_by_lua_block{ if ngx.req.get_headers()[\"X-3scale-debug\"] == ngx.var.provider_key then local resp = \"\" ngx.ctx.buffered = (ngx.ctx.buffered or \"\") .. 
string.sub(ngx.arg[1], 1, 1000) if ngx.arg[2] then resp = ngx.ctx.buffered end ngx.log(0, ngx.req.raw_header()) ngx.log(0, resp) end }", "2016/05/05 14:24:33 [] 7238#0: *57 [lua] body_filter_by_lua:7: GET /api/contacts.json?access_token=7c6f24f5 HTTP/1.1 Host: 726e3b99.ngrok.io User-Agent: curl/7.43.0 Accept: */* X-Forwarded-Proto: https X-Forwarded-For: 2.139.235.79 while sending to client, client: 127.0.0.1, server: pili-virtualbox, request: \"GET /api/contacts.json?access_token=7c6f24f5 HTTP/1.1\", subrequest: \"/threescale_authrep\", upstream: \"https://54.83.62.94:443/transactions/oauth_authrep.xml?provider_key=REDACTED&service_id=REDACTED&usage[hits]=1&access_token=7c6f24f5\", host: \"726e3b99.ngrok.io\" 2016/05/05 14:24:33 [] 7238#0: *57 [lua] body_filter_by_lua:8: <?xml version=\"1.0\" encoding=\"UTF-8\"?><error code=\"access_token_invalid\">access_token \"7c6f24f5\" is invalid: expired or never defined</error> while sending to client, client: 127.0.0.1, server: pili-virtualbox, request: \"GET /api/contacts.json?access_token=7c6f24f5 HTTP/1.1\", subrequest: \"/threescale_authrep\", upstream: \"https://54.83.62.94:443/transactions/oauth_authrep.xml?provider_key=REDACTED&service_id=REDACTED&usage[hits]=1&access_token=7c6f24f5\", host: \"726e3b99.ngrok.io\"", "X-3scale-matched-rules: /, /api/contacts.json < X-3scale-credentials: access_token=TOKEN_VALUE < X-3scale-usage: usage[hits]=2 < X-3scale-hostname: HOSTNAME_VALUE" ]
https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/operating_red_hat_3scale_api_management/troubleshooting-the-api-infrastructure
10.5.55. AddType
10.5.55. AddType Use the AddType directive to define or override default MIME type and file extension pairs. The following example directive tells the Apache HTTP Server to recognize the .tgz file extension:
[ "AddType application/x-tar .tgz" ]
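As a quick, hedged check (the file path, host, and reload command may differ in your environment), after adding a directive like the one above to your Apache HTTP Server configuration and reloading the server, you can confirm the new mapping by inspecting the Content-Type header in a response:

# Reload the configuration, then request a .tgz file and check its MIME type
apachectl graceful
curl -sI http://localhost/downloads/archive.tgz | grep -i '^Content-Type'
# Expected: Content-Type: application/x-tar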
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-apache-addtype
Chapter 1. Overview
Chapter 1. Overview AMQ JavaScript is a library for developing messaging applications. It enables you to write JavaScript applications that send and receive AMQP messages. AMQ JavaScript is part of AMQ Clients, a suite of messaging libraries supporting multiple languages and platforms. For an overview of the clients, see AMQ Clients Overview . For information about this release, see AMQ Clients 2.9 Release Notes . AMQ JavaScript is based on the Rhea messaging library. For detailed API documentation, see the AMQ JavaScript API reference . 1.1. Key features An event-driven API that simplifies integration with existing applications SSL/TLS for secure communication Flexible SASL authentication Automatic reconnect and failover Seamless conversion between AMQP and language-native data types Access to all the features and capabilities of AMQP 1.0 1.2. Supported standards and protocols AMQ JavaScript supports the following industry-recognized standards and network protocols: Version 1.0 of the Advanced Message Queueing Protocol (AMQP) Versions 1.0, 1.1, 1.2, and 1.3 of the Transport Layer Security (TLS) protocol, the successor to SSL Simple Authentication and Security Layer (SASL) mechanisms ANONYMOUS, PLAIN, and EXTERNAL Modern TCP with IPv6 1.3. Supported configurations AMQ JavaScript supports the OS and language versions listed below. For more information, see Red Hat AMQ 7 Supported Configurations . Red Hat Enterprise Linux 7 with Node.js 6 and 8 from Software Collections Red Hat Enterprise Linux 8 with Node.js 10 Microsoft Windows 10 Pro with Node.js 10 Microsoft Windows Server 2012 R2 and 2016 with Node.js 10 AMQ JavaScript is supported in combination with the following AMQ components and versions: All versions of AMQ Broker All versions of AMQ Interconnect A-MQ 6 versions 6.2.1 and newer 1.4. Terms and concepts This section introduces the core API entities and describes how they operate together. Table 1.1. API terms Entity Description Container A top-level container of connections. Connection A channel for communication between two peers on a network. It contains sessions. Session A context for sending and receiving messages. It contains senders and receivers. Sender A channel for sending messages to a target. It has a target. Receiver A channel for receiving messages from a source. It has a source. Source A named point of origin for messages. Target A named destination for messages. Message An application-specific piece of information. Delivery A message transfer. AMQ JavaScript sends and receives messages . Messages are transferred between connected peers over senders and receivers . Senders and receivers are established over sessions . Sessions are established over connections . Connections are established between two uniquely identified containers . Though a connection can have multiple sessions, often this is not needed. The API allows you to ignore sessions unless you require them. A sending peer creates a sender to send messages. The sender has a target that identifies a queue or topic at the remote peer. A receiving peer creates a receiver to receive messages. The receiver has a source that identifies a queue or topic at the remote peer. The sending of a message is called a delivery . The message is the content sent, including all metadata such as headers and annotations. The delivery is the protocol exchange associated with the transfer of that content. To indicate that a delivery is complete, either the sender or the receiver settles it. 
When the other side learns that it has been settled, it will no longer communicate about that delivery. The receiver can also indicate whether it accepts or rejects the message. 1.5. Document conventions The sudo command In this document, sudo is used for any command that requires root privileges. Exercise caution when using sudo because any changes can affect the entire system. For more information about sudo , see Using the sudo command . File paths In this document, all file paths are valid for Linux, UNIX, and similar operating systems (for example, /home/andrea ). On Microsoft Windows, you must use the equivalent Windows paths (for example, C:\Users\andrea ). Variable text This document contains code blocks with variables that you must replace with values specific to your environment. Variable text is enclosed in arrow braces and styled as italic monospace. For example, in the following command, replace <project-dir> with the value for your environment: $ cd <project-dir>
[ "cd <project-dir>" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_javascript_client/overview
5.4.16.8. Setting a RAID fault policy
5.4.16.8. Setting a RAID fault policy LVM RAID handles device failures in an automatic fashion based on the preferences defined by the raid_fault_policy field in the lvm.conf file. If the raid_fault_policy field is set to allocate , the system will attempt to replace the failed device with a spare device from the volume group. If there is no available spare device, this will be reported to the system log. If the raid_fault_policy field is set to warn , the system will produce a warning and the log will indicate that a device has failed. This allows the user to determine the course of action to take. As long as there are enough devices remaining to support usability, the RAID logical volume will continue to operate. 5.4.16.8.1. The allocate RAID Fault Policy In the following example, the raid_fault_policy field has been set to allocate in the lvm.conf file. The RAID logical volume is laid out as follows. If the /dev/sde device fails, the system log will display error messages. Since the raid_fault_policy field has been set to allocate , the failed device is replaced with a new device from the volume group. Note that even though the failed device has been replaced, the display still indicates that LVM could not find the failed device. This is because, although the failed device has been removed from the RAID logical volume, the failed device has not yet been removed from the volume group. To remove the failed device from the volume group, you can execute vgreduce --removemissing VG . If the raid_fault_policy has been set to allocate but there are no spare devices, the allocation will fail, leaving the logical volume as it is. If the allocation fails, you have the option of fixing the drive, then deactivating and activating the logical volume, as described in Section 5.4.16.8.2, "The warn RAID Fault Policy" . Alternately, you can replace the failed device, as described in Section 5.4.16.9, "Replacing a RAID device" .
[ "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rimage_2] /dev/sdg1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] /dev/sdf1(0) [my_lv_rmeta_2] /dev/sdg1(0)", "grep lvm /var/log/messages Jan 17 15:57:18 bp-01 lvm[8599]: Device #0 of raid1 array, my_vg-my_lv, has failed. Jan 17 15:57:18 bp-01 lvm[8599]: /dev/sde1: read failed after 0 of 2048 at 250994294784: Input/output error Jan 17 15:57:18 bp-01 lvm[8599]: /dev/sde1: read failed after 0 of 2048 at 250994376704: Input/output error Jan 17 15:57:18 bp-01 lvm[8599]: /dev/sde1: read failed after 0 of 2048 at 0: Input/output error Jan 17 15:57:18 bp-01 lvm[8599]: /dev/sde1: read failed after 0 of 2048 at 4096: Input/output error Jan 17 15:57:19 bp-01 lvm[8599]: Couldn't find device with uuid 3lugiV-3eSP-AFAR-sdrP-H20O-wM2M-qdMANy. Jan 17 15:57:27 bp-01 lvm[8599]: raid1 array, my_vg-my_lv, is not in-sync. Jan 17 15:57:36 bp-01 lvm[8599]: raid1 array, my_vg-my_lv, is now in-sync.", "lvs -a -o name,copy_percent,devices vg Couldn't find device with uuid 3lugiV-3eSP-AFAR-sdrP-H20O-wM2M-qdMANy. LV Copy% Devices lv 100.00 lv_rimage_0(0),lv_rimage_1(0),lv_rimage_2(0) [lv_rimage_0] /dev/sdh1(1) [lv_rimage_1] /dev/sdf1(1) [lv_rimage_2] /dev/sdg1(1) [lv_rmeta_0] /dev/sdh1(0) [lv_rmeta_1] /dev/sdf1(0) [lv_rmeta_2] /dev/sdg1(0)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/raid-faultpolicy
Chapter 53. MySQL Sink
Chapter 53. MySQL Sink Send data to a MySQL Database. This Kamelet expects a JSON as body. The mapping between the JSON fields and parameters is done by key, so if you have the following query: 'INSERT INTO accounts (username,city) VALUES (:#username,:#city)' The Kamelet needs to receive as input something like: '{ "username":"oscerd", "city":"Rome"}' 53.1. Configuration Options The following table summarizes the configuration options available for the mysql-sink Kamelet: Property Name Description Type Default Example databaseName * Database Name The Database Name we are pointing string password * Password The password to use for accessing a secured MySQL Database string query * Query The Query to execute against the MySQL Database string "INSERT INTO accounts (username,city) VALUES (:#username,:#city)" serverName * Server Name Server Name for the data source string "localhost" username * Username The username to use for accessing a secured MySQL Database string serverPort Server Port Server Port for the data source string 3306 Note Fields marked with an asterisk (*) are mandatory. 53.2. Dependencies At runtime, the mysql-sink Kamelet relies upon the presence of the following dependencies: camel:jackson camel:kamelet camel:sql mvn:org.apache.commons:commons-dbcp2:2.7.0.redhat-00001 mvn:mysql:mysql-connector-java 53.3. Usage This section describes how you can use the mysql-sink . 53.3.1. Knative Sink You can use the mysql-sink Kamelet as a Knative sink by binding it to a Knative object. mysql-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: mysql-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: mysql-sink properties: databaseName: "The Database Name" password: "The Password" query: "INSERT INTO accounts (username,city) VALUES (:#username,:#city)" serverName: "localhost" username: "The Username" 53.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 53.3.1.2. Procedure for using the cluster CLI Save the mysql-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f mysql-sink-binding.yaml 53.3.1.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind channel:mychannel mysql-sink -p "sink.databaseName=The Database Name" -p "sink.password=The Password" -p "sink.query=INSERT INTO accounts (username,city) VALUES (:#username,:#city)" -p "sink.serverName=localhost" -p "sink.username=The Username" This command creates the KameletBinding in the current namespace on the cluster. 53.3.2. Kafka Sink You can use the mysql-sink Kamelet as a Kafka sink by binding it to a Kafka topic. mysql-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: mysql-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: mysql-sink properties: databaseName: "The Database Name" password: "The Password" query: "INSERT INTO accounts (username,city) VALUES (:#username,:#city)" serverName: "localhost" username: "The Username" 53.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. 
Also make sure that you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 53.3.2.2. Procedure for using the cluster CLI Save the mysql-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f mysql-sink-binding.yaml 53.3.2.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic mysql-sink -p "sink.databaseName=The Database Name" -p "sink.password=The Password" -p "sink.query=INSERT INTO accounts (username,city) VALUES (:#username,:#city)" -p "sink.serverName=localhost" -p "sink.username=The Username" This command creates the KameletBinding in the current namespace on the cluster. 53.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/mysql-sink.kamelet.yaml
[ "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: mysql-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: mysql-sink properties: databaseName: \"The Database Name\" password: \"The Password\" query: \"INSERT INTO accounts (username,city) VALUES (:#username,:#city)\" serverName: \"localhost\" username: \"The Username\"", "apply -f mysql-sink-binding.yaml", "kamel bind channel:mychannel mysql-sink -p \"sink.databaseName=The Database Name\" -p \"sink.password=The Password\" -p \"sink.query=INSERT INTO accounts (username,city) VALUES (:#username,:#city)\" -p \"sink.serverName=localhost\" -p \"sink.username=The Username\"", "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: mysql-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: mysql-sink properties: databaseName: \"The Database Name\" password: \"The Password\" query: \"INSERT INTO accounts (username,city) VALUES (:#username,:#city)\" serverName: \"localhost\" username: \"The Username\"", "apply -f mysql-sink-binding.yaml", "kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic mysql-sink -p \"sink.databaseName=The Database Name\" -p \"sink.password=The Password\" -p \"sink.query=INSERT INTO accounts (username,city) VALUES (:#username,:#city)\" -p \"sink.serverName=localhost\" -p \"sink.username=The Username\"" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.9/html/kamelets_reference/mysql-sink
Chapter 2. Accessing the built-in API reference
Chapter 2. Accessing the built-in API reference You can access the full API reference on your Satellite Server. Procedure In your browser, access the following URL: Replace satellite.example.com with the FQDN of your Satellite Server.
[ "https:// satellite.example.com /apidoc/v2.html" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/using_the_satellite_rest_api/accessing-the-built-in-api-reference
8.6. Creating a Remediation Ansible Playbook to Align the System with a Specific Baseline
8.6. Creating a Remediation Ansible Playbook to Align the System with a Specific Baseline Use this procedure to create an Ansible playbook containing only the remediations that are needed to align your system with a specific baseline. This example uses the Protection Profile for General Purpose Operating Systems (OSPP). With this procedure, you create a smaller playbook that does not cover already satisfied requirements. By following these steps, you do not modify your system in any way; you only prepare a file for later application. Prerequisites The scap-security-guide package is installed on your system. Procedure Scan the system and save the results: Generate an Ansible playbook based on the file generated in the previous step: The ospp-remediations.yml file contains Ansible remediations for rules that failed during the scan performed in step 1. After you review this generated file, you can apply it with the ansible-playbook ospp-remediations.yml command. Verification In a text editor of your choice, review that the ospp-remediations.yml file contains rules that failed in the scan performed in step 1. Additional Resources scap-security-guide(8) and oscap(8) man pages Ansible Documentation
[ "~]# oscap xccdf eval --profile ospp --results ospp-results.xml /usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml", "~]# oscap xccdf generate fix --fix-type ansible --profile ospp --output ospp-remediations.yml ospp-results.xml" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/security_guide/creating-a-remediation-ansible-playbook-to-align-the-system-with-baseline_scanning-the-system-for-configuration-compliance-and-vulnerabilities
Chapter 20. Directory Servers
Chapter 20. Directory Servers 20.1. OpenLDAP LDAP (Lightweight Directory Access Protocol) is a set of open protocols used to access centrally stored information over a network. It is based on the X.500 standard for directory sharing, but is less complex and resource-intensive. For this reason, LDAP is sometimes referred to as " X.500 Lite " . Like X.500, LDAP organizes information in a hierarchical manner using directories. These directories can store a variety of information such as names, addresses, or phone numbers, and can even be used in a manner similar to the Network Information Service ( NIS ), enabling anyone to access their account from any machine on the LDAP enabled network. LDAP is commonly used for centrally managed users and groups, user authentication, or system configuration. It can also serve as a virtual phone directory, allowing users to easily access contact information for other users. Additionally, it can refer a user to other LDAP servers throughout the world, and thus provide an ad-hoc global repository of information. However, it is most frequently used within individual organizations such as universities, government departments, and private companies. This section covers the installation and configuration of OpenLDAP 2.4 , an open source implementation of the LDAPv2 and LDAPv3 protocols. 20.1.1. Introduction to LDAP Using a client-server architecture, LDAP provides a reliable means to create a central information directory accessible from the network. When a client attempts to modify information within this directory, the server verifies the user has permission to make the change, and then adds or updates the entry as requested. To ensure the communication is secure, the Transport Layer Security ( TLS ) cryptographic protocol can be used to prevent an attacker from intercepting the transmission. Important The OpenLDAP suite in Red Hat Enterprise Linux 6 no longer uses OpenSSL. Instead, it uses the Mozilla implementation of Network Security Services ( NSS ). OpenLDAP continues to work with existing certificates, keys, and other TLS configuration. For more information on how to configure it to use Mozilla certificate and key database, see How do I use TLS/SSL with Mozilla NSS . Important Due to the vulnerability described in Resolution for POODLE SSLv3.0 vulnerability (CVE-2014-3566) for components that do not allow SSLv3 to be disabled via configuration settings , Red Hat recommends that you do not rely on the SSLv3 protocol for security. OpenLDAP is one of the system components that do not provide configuration parameters that allow SSLv3 to be effectively disabled. To mitigate the risk, it is recommended that you use the stunnel command to provide a secure tunnel, and disable stunnel from using SSLv3 . For more information on using stunnel , see the Red Hat Enterprise Linux 6 Security Guide . The LDAP server supports several database systems, which gives administrators the flexibility to choose the best suited solution for the type of information they are planning to serve. Because of a well-defined client Application Programming Interface ( API ), the number of applications able to communicate with an LDAP server is numerous, and increasing in both quantity and quality. 20.1.1.1. LDAP Terminology The following is a list of LDAP-specific terms that are used within this chapter: entry A single unit within an LDAP directory. Each entry is identified by its unique Distinguished Name ( DN ). attribute Information directly associated with an entry. 
For example, if an organization is represented as an LDAP entry, attributes associated with this organization might include an address, a fax number, etc. Similarly, people can be represented as entries with common attributes such as personal telephone number or email address. An attribute can either have a single value, or an unordered space-separated list of values. While certain attributes are optional, others are required. Required attributes are specified using the objectClass definition, and can be found in schema files located in the /etc/openldap/slapd.d/cn=config/cn=schema/ directory. The assertion of an attribute and its corresponding value is also referred to as a Relative Distinguished Name ( RDN ). Unlike distinguished names that are unique globally, a relative distinguished name is only unique per entry. LDIF The LDAP Data Interchange Format ( LDIF ) is a plain text representation of an LDAP entry. It takes the following form: The optional id is a number determined by the application that is used to edit the entry. Each entry can contain as many attribute_type and attribute_value pairs as needed, as long as they are all defined in a corresponding schema file. A blank line indicates the end of an entry. 20.1.1.2. OpenLDAP Features OpenLDAP suite provides a number of important features: LDAPv3 Support - Many of the changes in the protocol since LDAP version 2 are designed to make LDAP more secure. Among other improvements, this includes the support for Simple Authentication and Security Layer ( SASL ), and Transport Layer Security ( TLS ) protocols. LDAP Over IPC - The use of inter-process communication ( IPC ) enhances security by eliminating the need to communicate over a network. IPv6 Support - OpenLDAP is compliant with Internet Protocol version 6 ( IPv6 ), the generation of the Internet Protocol. LDIFv1 Support - OpenLDAP is fully compliant with LDIF version 1. Updated C API - The current C API improves the way programmers can connect to and use LDAP directory servers. Enhanced Standalone LDAP Server - This includes an updated access control system, thread pooling, better tools, and much more. 20.1.1.3. OpenLDAP Server Setup The typical steps to set up an LDAP server on Red Hat Enterprise Linux are as follows: Install the OpenLDAP suite. See Section 20.1.2, "Installing the OpenLDAP Suite" for more information on required packages. Customize the configuration as described in Section 20.1.3, "Configuring an OpenLDAP Server" . Start the slapd service as described in Section 20.1.4, "Running an OpenLDAP Server" . Use the ldapadd utility to add entries to the LDAP directory. Use the ldapsearch utility to verify that the slapd service is accessing the information correctly. 20.1.2. Installing the OpenLDAP Suite The suite of OpenLDAP libraries and tools is provided by the following packages: Table 20.1. List of OpenLDAP packages Package Description openldap A package containing the libraries necessary to run the OpenLDAP server and client applications. openldap-clients A package containing the command-line utilities for viewing and modifying directories on an LDAP server. openldap-servers A package containing both the services and utilities to configure and run an LDAP server. This includes the Standalone LDAP Daemon , slapd . compat-openldap A package containing the OpenLDAP compatibility libraries. Additionally, the following packages are commonly used along with the LDAP server: Table 20.2. 
List of commonly installed additional LDAP packages Package Description sssd A package containing the System Security Services Daemon (SSSD) , a set of daemons to manage access to remote directories and authentication mechanisms. It provides the Name Service Switch (NSS) and the Pluggable Authentication Modules (PAM) interfaces toward the system and a pluggable back-end system to connect to multiple different account sources. mod_authz_ldap A package containing mod_authz_ldap , the LDAP authorization module for the Apache HTTP Server. This module uses the short form of the distinguished name for a subject and the issuer of the client SSL certificate to determine the distinguished name of the user within an LDAP directory. It is also capable of authorizing users based on attributes of that user's LDAP directory entry, determining access to assets based on the user and group privileges of the asset, and denying access for users with expired passwords. Note that the mod_ssl module is required when using the mod_authz_ldap module. To install these packages, use the yum command in the following form: For example, to perform the basic LDAP server installation, type the following at a shell prompt: Note that you must have superuser privileges (that is, you must be logged in as root ) to run this command. For more information on how to install new packages in Red Hat Enterprise Linux, see Section 8.2.4, "Installing Packages" . 20.1.2.1. Overview of OpenLDAP Server Utilities To perform administrative tasks, the openldap-servers package installs the following utilities along with the slapd service: Table 20.3. List of OpenLDAP server utilities Command Description slapacl Allows you to check the access to a list of attributes. slapadd Allows you to add entries from an LDIF file to an LDAP directory. slapauth Allows you to check a list of IDs for authentication and authorization permissions. slapcat Allows you to pull entries from an LDAP directory in the default format and save them in an LDIF file. slapdn Allows you to check a list of Distinguished Names (DNs) based on available schema syntax. slapindex Allows you to re-index the slapd directory based on the current content. Run this utility whenever you change indexing options in the configuration file. slappasswd Allows you to create an encrypted user password to be used with the ldapmodify utility, or in the slapd configuration file. slapschema Allows you to check the compliance of a database with the corresponding schema. slaptest Allows you to check the LDAP server configuration. For a detailed description of these utilities and their usage, see the corresponding manual pages as referred to in Section 20.1.6.1, "Installed Documentation" . Important Although only root can run slapadd , the slapd service runs as the ldap user. Because of this, the directory server is unable to modify any files created by slapadd . To correct this issue, after running the slapd utility, type the following at a shell prompt: Warning To preserve the data integrity, stop the slapd service before using slapadd , slapcat , or slapindex . You can do so by typing the following at a shell prompt: For more information on how to start, stop, restart, and check the current status of the slapd service, see Section 20.1.4, "Running an OpenLDAP Server" . 20.1.2.2. Overview of OpenLDAP Client Utilities The openldap-clients package installs the following utilities which can be used to add, modify, and delete entries in an LDAP directory: Table 20.4. 
List of OpenLDAP client utilities Command Description ldapadd Allows you to add entries to an LDAP directory, either from a file, or from standard input. It is a symbolic link to ldapmodify -a . ldapcompare Allows you to compare given attribute with an LDAP directory entry. ldapdelete Allows you to delete entries from an LDAP directory. ldapexop Allows you to perform extended LDAP operations. ldapmodify Allows you to modify entries in an LDAP directory, either from a file, or from standard input. ldapmodrdn Allows you to modify the RDN value of an LDAP directory entry. ldappasswd Allows you to set or change the password for an LDAP user. ldapsearch Allows you to search LDAP directory entries. ldapurl Allows you to compose or decompose LDAP URLs. ldapwhoami Allows you to perform a whoami operation on an LDAP server. With the exception of ldapsearch , each of these utilities is more easily used by referencing a file containing the changes to be made rather than typing a command for each entry to be changed within an LDAP directory. The format of such a file is outlined in the man page for each utility. 20.1.2.3. Overview of Common LDAP Client Applications Although there are various graphical LDAP clients capable of creating and modifying directories on the server, none of them is included in Red Hat Enterprise Linux. Popular applications that can access directories in a read-only mode include Mozilla Thunderbird , Evolution , or Ekiga . 20.1.3. Configuring an OpenLDAP Server By default, OpenLDAP stores its configuration in the /etc/openldap/ directory. Table 20.5, "List of OpenLDAP configuration files and directories" highlights the most important files and directories within this directory. Table 20.5. List of OpenLDAP configuration files and directories Path Description /etc/openldap/ldap.conf The configuration file for client applications that use the OpenLDAP libraries. This includes ldapadd , ldapsearch , Evolution , etc. /etc/openldap/slapd.d/ The directory containing the slapd configuration. In Red Hat Enterprise Linux 6, the slapd service uses a configuration database located in the /etc/openldap/slapd.d/ directory and only reads the old /etc/openldap/slapd.conf configuration file if this directory does not exist. If you have an existing slapd.conf file from a installation, you can either wait for the openldap-servers package to convert it to the new format the time you update this package, or type the following at a shell prompt as root to convert it immediately: The slapd configuration consists of LDIF entries organized in a hierarchical directory structure, and the recommended way to edit these entries is to use the server utilities described in Section 20.1.2.1, "Overview of OpenLDAP Server Utilities" . Important An error in an LDIF file can render the slapd service unable to start. Because of this, it is strongly advised that you avoid editing the LDIF files within the /etc/openldap/slapd.d/ directory directly. 20.1.3.1. Changing the Global Configuration Global configuration options for the LDAP server are stored in the /etc/openldap/slapd.d/cn=config.ldif file. The following directives are commonly used: olcAllows The olcAllows directive allows you to specify which features to enable. It takes the following form: It accepts a space-separated list of features as described in Table 20.6, "Available olcAllows options" . The default option is bind_v2 . Table 20.6. Available olcAllows options Option Description bind_v2 Enables the acceptance of LDAP version 2 bind requests. 
bind_anon_cred Enables an anonymous bind when the Distinguished Name (DN) is empty. bind_anon_dn Enables an anonymous bind when the Distinguished Name (DN) is not empty. update_anon Enables processing of anonymous update operations. proxy_authz_anon Enables processing of anonymous proxy authorization control. Example 20.1. Using the olcAllows directive olcConnMaxPending The olcConnMaxPending directive allows you to specify the maximum number of pending requests for an anonymous session. It takes the following form: The default option is 100 . Example 20.2. Using the olcConnMaxPending directive olcConnMaxPendingAuth The olcConnMaxPendingAuth directive allows you to specify the maximum number of pending requests for an authenticated session. It takes the following form: The default option is 1000 . Example 20.3. Using the olcConnMaxPendingAuth directive olcDisallows The olcDisallows directive allows you to specify which features to disable. It takes the following form: It accepts a space-separated list of features as described in Table 20.7, "Available olcDisallows options" . No features are disabled by default. Table 20.7. Available olcDisallows options Option Description bind_anon Disables the acceptance of anonymous bind requests. bind_simple Disables the simple bind authentication mechanism. tls_2_anon Disables the enforcing of an anonymous session when the STARTTLS command is received. tls_authc Disallows the STARTTLS command when authenticated. Example 20.4. Using the olcDisallows directive olcIdleTimeout The olcIdleTimeout directive allows you to specify how many seconds to wait before closing an idle connection. It takes the following form: This option is disabled by default (that is, set to 0 ). Example 20.5. Using the olcIdleTimeout directive olcLogFile The olcLogFile directive allows you to specify a file in which to write log messages. It takes the following form: The log messages are written to standard error by default. Example 20.6. Using the olcLogFile directive olcReferral The olcReferral option allows you to specify a URL of a server to process the request in case the server is not able to handle it. It takes the following form: This option is disabled by default. Example 20.7. Using the olcReferral directive olcWriteTimeout The olcWriteTimeout option allows you to specify how many seconds to wait before closing a connection with an outstanding write request. It takes the following form: This option is disabled by default (that is, set to 0 ). Example 20.8. Using the olcWriteTimeout directive 20.1.3.2. Changing the Database-Specific Configuration By default, the OpenLDAP server uses Berkeley DB (BDB) as a database back end. The configuration for this database is stored in the /etc/openldap/slapd.d/cn=config/olcDatabase={2}bdb.ldif file. The following directives are commonly used in a database-specific configuration: olcReadOnly The olcReadOnly directive allows you to use the database in a read-only mode. It takes the following form: It accepts either TRUE (enable the read-only mode), or FALSE (enable modifications of the database). The default option is FALSE . Example 20.9. Using the olcReadOnly directive olcRootDN The olcRootDN directive allows you to specify the user that is unrestricted by access controls or administrative limit parameters set for operations on the LDAP directory. It takes the following form: It accepts a Distinguished Name ( DN ). The default option is cn=Manager,dc=my-domain,dc=com . Example 20.10. 
Using the olcRootDN directive olcRootPW The olcRootPW directive allows you to set a password for the user that is specified using the olcRootDN directive. It takes the following form: It accepts either a plain text string, or a hash. To generate a hash, type the following at a shell prompt: Example 20.11. Using the olcRootPW directive olcSuffix The olcSuffix directive allows you to specify the domain for which to provide information. It takes the following form: It accepts a fully qualified domain name ( FQDN ). The default option is dc=my-domain,dc=com . Example 20.12. Using the olcSuffix directive 20.1.3.3. Extending Schema Since OpenLDAP 2.3, the /etc/openldap/slapd.d/cn=config/cn=schema/ directory also contains LDAP definitions that were previously located in /etc/openldap/schema/ . It is possible to extend the schema used by OpenLDAP to support additional attribute types and object classes using the default schema files as a guide. However, this task is beyond the scope of this chapter. For more information on this topic, see http://www.openldap.org/doc/admin/schema.html . 20.1.4. Running an OpenLDAP Server This section describes how to start, stop, restart, and check the current status of the Standalone LDAP Daemon . For more information on how to manage system services in general, see Chapter 12, Services and Daemons . 20.1.4.1. Starting the Service To run the slapd service, type the following at a shell prompt: If you want the service to start automatically at the boot time, use the following command: Note that you can also use the Service Configuration utility as described in Section 12.2.1.1, "Enabling and Disabling a Service" . 20.1.4.2. Stopping the Service To stop the running slapd service, type the following at a shell prompt: To prevent the service from starting automatically at the boot time, type: Alternatively, you can use the Service Configuration utility as described in Section 12.2.1.1, "Enabling and Disabling a Service" . 20.1.4.3. Restarting the Service To restart the running slapd service, type the following at a shell prompt: This stops the service, and then starts it again. Use this command to reload the configuration. 20.1.4.4. Checking the Service Status To check whether the service is running, type the following at a shell prompt: 20.1.5. Configuring a System to Authenticate Using OpenLDAP In order to configure a system to authenticate using OpenLDAP, make sure that the appropriate packages are installed on both LDAP server and client machines. For information on how to set up the server, follow the instructions in Section 20.1.2, "Installing the OpenLDAP Suite" and Section 20.1.3, "Configuring an OpenLDAP Server" . On a client, type the following at a shell prompt: Chapter 13, Configuring Authentication provides detailed instructions on how to configure applications to use LDAP for authentication. 20.1.5.1. Migrating Old Authentication Information to LDAP Format The migrationtools package provides a set of shell and Perl scripts to help you migrate authentication information into an LDAP format. To install this package, type the following at a shell prompt: This will install the scripts to the /usr/share/migrationtools/ directory. Once installed, edit the /usr/share/migrationtools/migrate_common.ph file and change the following lines to reflect the correct domain, for example: Alternatively, you can specify the environment variables directly on the command line. 
For example, to run the migrate_all_online.sh script with the default base set to dc=example,dc=com , type: To decide which script to run in order to migrate the user database, see Table 20.8, "Commonly used LDAP migration scripts" . Table 20.8. Commonly used LDAP migration scripts Existing Name Service Is LDAP Running? Script to Use /etc flat files yes migrate_all_online.sh /etc flat files no migrate_all_offline.sh NetInfo yes migrate_all_netinfo_online.sh NetInfo no migrate_all_netinfo_offline.sh NIS (YP) yes migrate_all_nis_online.sh NIS (YP) no migrate_all_nis_offline.sh For more information on how to use these scripts, see the README and the migration-tools.txt files in the /usr/share/doc/migrationtools- version / directory. 20.1.6. Additional Resources The following resources offer additional information on the Lightweight Directory Access Protocol. Before configuring LDAP on your system, it is highly recommended that you review these resources, especially the OpenLDAP Software Administrator's Guide . 20.1.6.1. Installed Documentation The following documentation is installed with the openldap-servers package: /usr/share/doc/openldap-servers- version /guide.html A copy of the OpenLDAP Software Administrator's Guide . /usr/share/doc/openldap-servers- version /README.schema A README file containing the description of installed schema files. Additionally, there is also a number of manual pages that are installed with the openldap , openldap-servers , and openldap-clients packages: Client Applications man ldapadd - Describes how to add entries to an LDAP directory. man ldapdelete - Describes how to delete entries within an LDAP directory. man ldapmodify - Describes how to modify entries within an LDAP directory. man ldapsearch - Describes how to search for entries within an LDAP directory. man ldappasswd - Describes how to set or change the password of an LDAP user. man ldapcompare - Describes how to use the ldapcompare tool. man ldapwhoami - Describes how to use the ldapwhoami tool. man ldapmodrdn - Describes how to modify the RDNs of entries. Server Applications man slapd - Describes command-line options for the LDAP server. Administrative Applications man slapadd - Describes command-line options used to add entries to a slapd database. man slapcat - Describes command-line options used to generate an LDIF file from a slapd database. man slapindex - Describes command-line options used to regenerate an index based upon the contents of a slapd database. man slappasswd - Describes command-line options used to generate user passwords for LDAP directories. Configuration Files man ldap.conf - Describes the format and options available within the configuration file for LDAP clients. man slapd-config - Describes the format and options available within the configuration directory. 20.1.6.2. Useful Websites http://www.openldap.org/doc/admin24/ The current version of the OpenLDAP Software Administrator's Guide . 20.1.6.3. Related Books OpenLDAP by Example by John Terpstra and Benjamin Coles; Prentice Hall. A collection of practical exercises in the OpenLDAP deployment. Implementing LDAP by Mark Wilcox; Wrox Press, Inc. A book covering LDAP from both the system administrator's and software developer's perspective. Understanding and Deploying LDAP Directory Services by Tim Howes et al.; Macmillan Technical Publishing. A book covering LDAP design principles, as well as its deployment in a production environment.
[ "[ id ] dn: distinguished_name attribute_type : attribute_value attribute_type : attribute_value", "install package", "~]# yum install openldap openldap-clients openldap-servers", "~]# chown -R ldap:ldap /var/lib/ldap", "~]# service slapd stop Stopping slapd: [ OK ]", "~]# slaptest -f /etc/openldap/slapd.conf -F /etc/openldap/slapd.d/", "olcAllows : feature", "olcAllows: bind_v2 update_anon", "olcConnMaxPending : number", "olcConnMaxPending: 100", "olcConnMaxPendingAuth : number", "olcConnMaxPendingAuth: 1000", "olcDisallows : feature", "olcDisallows: bind_anon", "olcIdleTimeout : number", "olcIdleTimeout: 180", "olcLogFile : file_name", "olcLogFile: /var/log/slapd.log", "olcReferral : URL", "olcReferral: ldap://root.openldap.org", "olcWriteTimeout", "olcWriteTimeout: 180", "olcReadOnly : boolean", "olcReadOnly: TRUE", "olcRootDN : distinguished_name", "olcRootDN: cn=root,dc=example,dc=com", "olcRootPW : password", "~]USD slappaswd New password: Re-enter new password: {SSHA}WczWsyPEnMchFf1GRTweq2q7XJcvmSxD", "olcRootPW: {SSHA}WczWsyPEnMchFf1GRTweq2q7XJcvmSxD", "olcSuffix : domain_name", "olcSuffix: dc=example,dc=com", "~]# service slapd start Starting slapd: [ OK ]", "~]# chkconfig slapd on", "~]# service slapd stop Stopping slapd: [ OK ]", "~]# chkconfig slapd off", "~]# service slapd restart Stopping slapd: [ OK ] Starting slapd: [ OK ]", "~]# service slapd status slapd (pid 3672) is running", "~]# yum install openldap openldap-clients sssd", "~]# yum install migrationtools", "Default DNS domain USDDEFAULT_MAIL_DOMAIN = \"example.com\"; Default base USDDEFAULT_BASE = \"dc=example,dc=com\";", "~]# export DEFAULT_BASE=\"dc=example,dc=com\" /usr/share/migrationtools/migrate_all_online.sh" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/ch-Directory_Servers
1.3.3. Fencing
1.3.3. Fencing Fencing is the disconnection of a node from the cluster's shared storage. Fencing cuts off I/O from shared storage, thus ensuring data integrity. The cluster infrastructure performs fencing through one of the following programs according to the type of cluster manager and lock manager that is configured: Configured with CMAN/DLM - fenced , the fence daemon, performs fencing. Configured with GULM servers - GULM performs fencing. When the cluster manager determines that a node has failed, it communicates to other cluster-infrastructure components that the node has failed. The fencing program (either fenced or GULM), when notified of the failure, fences the failed node. Other cluster-infrastructure components determine what actions to take - that is, they perform any recovery that needs to done. For example, DLM and GFS (in a cluster configured with CMAN/DLM), when notified of a node failure, suspend activity until they detect that the fencing program has completed fencing the failed node. Upon confirmation that the failed node is fenced, DLM and GFS perform recovery. DLM releases locks of the failed node; GFS recovers the journal of the failed node. The fencing program determines from the cluster configuration file which fencing method to use. Two key elements in the cluster configuration file define a fencing method: fencing agent and fencing device. The fencing program makes a call to a fencing agent specified in the cluster configuration file. The fencing agent, in turn, fences the node via a fencing device. When fencing is complete, the fencing program notifies the cluster manager. Red Hat Cluster Suite provides a variety of fencing methods: Power fencing - A fencing method that uses a power controller to power off an inoperable node Fibre Channel switch fencing - A fencing method that disables the Fibre Channel port that connects storage to an inoperable node GNBD fencing - A fencing method that disables an inoperable node's access to a GNBD server Other fencing - Several other fencing methods that disable I/O or power of an inoperable node, including IBM Bladecenters, PAP, DRAC/MC, HP ILO, IPMI, IBM RSA II, and others Figure 1.4, "Power Fencing Example" shows an example of power fencing. In the example, the fencing program in node A causes the power controller to power off node D. Figure 1.5, "Fibre Channel Switch Fencing Example" shows an example of Fibre Channel switch fencing. In the example, the fencing program in node A causes the Fibre Channel switch to disable the port for node D, disconnecting node D from storage. Figure 1.4. Power Fencing Example Figure 1.5. Fibre Channel Switch Fencing Example Specifying a fencing method consists of editing a cluster configuration file to assign a fencing-method name, the fencing agent, and the fencing device for each node in the cluster. Note Other fencing parameters may be necessary depending on the type of cluster manager (either CMAN or GULM) selected in a cluster. The way in which a fencing method is specified depends on if a node has either dual power supplies or multiple paths to storage. If a node has dual power supplies, then the fencing method for the node must specify at least two fencing devices - one fencing device for each power supply (refer to Figure 1.6, "Fencing a Node with Dual Power Supplies" ). Similarly, if a node has multiple paths to Fibre Channel storage, then the fencing method for the node must specify one fencing device for each path to Fibre Channel storage. 
For example, if a node has two paths to Fibre Channel storage, the fencing method should specify two fencing devices - one for each path to Fibre Channel storage (refer to Figure 1.7, "Fencing a Node with Dual Fibre Channel Connections" ). Figure 1.6. Fencing a Node with Dual Power Supplies Figure 1.7. Fencing a Node with Dual Fibre Channel Connections You can configure a node with one fencing method or multiple fencing methods. When you configure a node for one fencing method, that is the only fencing method available for fencing that node. When you configure a node for multiple fencing methods, the fencing methods are cascaded from one fencing method to another according to the order of the fencing methods specified in the cluster configuration file. If a node fails, it is fenced using the first fencing method specified in the cluster configuration file for that node. If the first fencing method is not successful, the next fencing method specified for that node is used. If none of the fencing methods is successful, then fencing starts again with the first fencing method specified, and continues looping through the fencing methods in the order specified in the cluster configuration file until the node has been fenced.
null
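The fragment below is an illustrative sketch of how a power-fencing method might be expressed in the cluster configuration file. The node name, device name, switch address, credentials, and the fence_apc agent are assumptions chosen to match the power-fencing example in Figure 1.4, not a definitive configuration.

    <clusternode name="node-d.example.com">
      <fence>
        <method name="1">
          <device name="apc-switch" port="4"/>
        </method>
      </fence>
    </clusternode>
    <!-- remaining nodes omitted -->
    <fencedevices>
      <fencedevice agent="fence_apc" name="apc-switch"
                   ipaddr="10.0.0.50" login="admin" passwd="password"/>
    </fencedevices>

A node with dual power supplies would list two device elements inside the same method, one for each power controller, as described above.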
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_suite_overview/s2-fencing-overview-CSO
Chapter 52. policy
Chapter 52. policy This chapter describes the commands under the policy command. 52.1. policy create Create new policy Usage: Table 52.1. Positional arguments Value Summary <filename> New serialized policy rules file Table 52.2. Command arguments Value Summary -h, --help Show this help message and exit --type <type> New mime type of the policy rules file (defaults to application/json) Table 52.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 52.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 52.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 52.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 52.2. policy delete Delete policy(s) Usage: Table 52.7. Positional arguments Value Summary <policy> Policy(s) to delete Table 52.8. Command arguments Value Summary -h, --help Show this help message and exit 52.3. policy list List policies Usage: Table 52.9. Command arguments Value Summary -h, --help Show this help message and exit --long List additional fields in output Table 52.10. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 52.11. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 52.12. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 52.13. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 52.4. policy set Set policy properties Usage: Table 52.14. Positional arguments Value Summary <policy> Policy to modify Table 52.15. Command arguments Value Summary -h, --help Show this help message and exit --type <type> New mime type of the policy rules file --rules <filename> New serialized policy rules file 52.5. policy show Display policy details Usage: Table 52.16. Positional arguments Value Summary <policy> Policy to display Table 52.17. Command arguments Value Summary -h, --help Show this help message and exit Table 52.18. 
Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 52.19. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 52.20. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 52.21. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
[ "openstack policy create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--type <type>] <filename>", "openstack policy delete [-h] <policy> [<policy> ...]", "openstack policy list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--long]", "openstack policy set [-h] [--type <type>] [--rules <filename>] <policy>", "openstack policy show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <policy>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/command_line_interface_reference/policy
Chapter 3. ControllerConfig [machineconfiguration.openshift.io/v1]
Chapter 3. ControllerConfig [machineconfiguration.openshift.io/v1] Description ControllerConfig describes configuration for MachineConfigController. This is currently only used to drive the MachineConfig objects generated by the TemplateController. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ControllerConfigSpec is the spec for ControllerConfig resource. status object ControllerConfigStatus is the status for ControllerConfig 3.1.1. .spec Description ControllerConfigSpec is the spec for ControllerConfig resource. Type object Required baseOSContainerImage cloudProviderConfig clusterDNSIP images ipFamilies kubeAPIServerServingCAData releaseImage rootCAData Property Type Description additionalTrustBundle `` additionalTrustBundle is a certificate bundle that will be added to the nodes trusted certificate store. baseOSContainerImage string BaseOSContainerImage is the new-format container image for operating system updates. baseOSExtensionsContainerImage string BaseOSExtensionsContainerImage is the matching extensions container for the new-format container cloudProviderCAData `` cloudProvider specifies the cloud provider CA data cloudProviderConfig string cloudProviderConfig is the configuration for the given cloud provider clusterDNSIP string clusterDNSIP is the cluster DNS IP address dns object dns holds the cluster dns details etcdDiscoveryDomain string etcdDiscoveryDomain is deprecated, use Infra.Status.EtcdDiscoveryDomain instead imageRegistryBundleData array imageRegistryBundleData is the ImageRegistryData imageRegistryBundleData[] object ImageRegistryBundle contains information for writing image registry certificates imageRegistryBundleUserData array imageRegistryBundleUserData is Image Registry Data provided by the user imageRegistryBundleUserData[] object ImageRegistryBundle contains information for writing image registry certificates images object (string) images is map of images that are used by the controller to render templates under ./templates/ infra object infra holds the infrastructure details internalRegistryPullSecret `` internalRegistryPullSecret is the pull secret for the internal registry, used by rpm-ostree to pull images from the internal registry if present ipFamilies string ipFamilies indicates the IP families in use by the cluster network kubeAPIServerServingCAData string kubeAPIServerServingCAData managed Kubelet to API Server Cert... 
Rotated automatically network `` Network contains additional network related information networkType string networkType holds the type of network the cluster is using XXX: this is temporary and will be dropped as soon as possible in favor of a better support to start network related services the proper way. Nobody is also changing this once the cluster is up and running the first time, so, disallow regeneration if this changes. osImageURL string OSImageURL is the old-format container image that contains the OS update payload. platform string platform is deprecated, use Infra.Status.PlatformStatus.Type instead proxy `` proxy holds the current proxy configuration for the nodes pullSecret object pullSecret is the default pull secret that needs to be installed on all machines. releaseImage string releaseImage is the image used when installing the cluster rootCAData string rootCAData specifies the root CA data 3.1.2. .spec.dns Description dns holds the cluster dns details Type object Required spec kind apiVersion Property Type Description apiVersion string apiVersion defines the versioned schema of this representation of an object. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string kind is a string value representing the type of this object. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 3.1.3. .spec.imageRegistryBundleData Description imageRegistryBundleData is the ImageRegistryData Type array 3.1.4. .spec.imageRegistryBundleData[] Description ImageRegistryBundle contains information for writing image registry certificates Type object Required data file Property Type Description data string data holds the contents of the bundle that will be written to the file location file string file holds the name of the file where the bundle will be written to disk 3.1.5. .spec.imageRegistryBundleUserData Description imageRegistryBundleUserData is Image Registry Data provided by the user Type array 3.1.6. .spec.imageRegistryBundleUserData[] Description ImageRegistryBundle contains information for writing image registry certificates Type object Required data file Property Type Description data string data holds the contents of the bundle that will be written to the file location file string file holds the name of the file where the bundle will be written to disk 3.1.7. .spec.infra Description infra holds the infrastructure details Type object Required spec kind apiVersion Property Type Description apiVersion string apiVersion defines the versioned schema of this representation of an object. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string kind is a string value representing the type of this object. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 3.1.8. .spec.pullSecret Description pullSecret is the default pull secret that needs to be installed on all machines. Type object Property Type Description apiVersion string API version of the referent. 
fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 3.1.9. .status Description ControllerConfigStatus is the status for ControllerConfig Type object Property Type Description conditions array conditions represents the latest available observations of current state. conditions[] object ControllerConfigStatusCondition contains condition information for ControllerConfigStatus controllerCertificates array controllerCertificates represents the latest available observations of the automatically rotating certificates in the MCO. controllerCertificates[] object ControllerCertificate contains info about a specific cert. observedGeneration integer observedGeneration represents the generation observed by the controller. 3.1.10. .status.conditions Description conditions represents the latest available observations of current state. Type array 3.1.11. .status.conditions[] Description ControllerConfigStatusCondition contains condition information for ControllerConfigStatus Type object Required status type Property Type Description lastTransitionTime `` lastTransitionTime is the time of the last update to the current status object. message string message provides additional information about the current condition. This is only to be consumed by humans. reason string reason is the reason for the condition's last transition. Reasons are PascalCase status string status of the condition, one of True, False, Unknown. type string type specifies the state of the operator's reconciliation functionality. 3.1.12. .status.controllerCertificates Description controllerCertificates represents the latest available observations of the automatically rotating certificates in the MCO. Type array 3.1.13. .status.controllerCertificates[] Description ControllerCertificate contains info about a specific cert. Type object Required bundleFile signer subject Property Type Description bundleFile string bundleFile is the larger bundle a cert comes from notAfter string notAfter is the upper boundary for validity notBefore string notBefore is the lower boundary for validity signer string signer is the cert Issuer subject string subject is the cert subject 3.2. 
API endpoints The following API endpoints are available: /apis/machineconfiguration.openshift.io/v1/controllerconfigs DELETE : delete collection of ControllerConfig GET : list objects of kind ControllerConfig POST : create a ControllerConfig /apis/machineconfiguration.openshift.io/v1/controllerconfigs/{name} DELETE : delete a ControllerConfig GET : read the specified ControllerConfig PATCH : partially update the specified ControllerConfig PUT : replace the specified ControllerConfig /apis/machineconfiguration.openshift.io/v1/controllerconfigs/{name}/status GET : read status of the specified ControllerConfig PATCH : partially update status of the specified ControllerConfig PUT : replace status of the specified ControllerConfig 3.2.1. /apis/machineconfiguration.openshift.io/v1/controllerconfigs HTTP method DELETE Description delete collection of ControllerConfig Table 3.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ControllerConfig Table 3.2. HTTP responses HTTP code Reponse body 200 - OK ControllerConfigList schema 401 - Unauthorized Empty HTTP method POST Description create a ControllerConfig Table 3.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.4. Body parameters Parameter Type Description body ControllerConfig schema Table 3.5. HTTP responses HTTP code Reponse body 200 - OK ControllerConfig schema 201 - Created ControllerConfig schema 202 - Accepted ControllerConfig schema 401 - Unauthorized Empty 3.2.2. /apis/machineconfiguration.openshift.io/v1/controllerconfigs/{name} Table 3.6. Global path parameters Parameter Type Description name string name of the ControllerConfig HTTP method DELETE Description delete a ControllerConfig Table 3.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 3.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ControllerConfig Table 3.9. 
HTTP responses HTTP code Reponse body 200 - OK ControllerConfig schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ControllerConfig Table 3.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.11. HTTP responses HTTP code Reponse body 200 - OK ControllerConfig schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ControllerConfig Table 3.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.13. Body parameters Parameter Type Description body ControllerConfig schema Table 3.14. HTTP responses HTTP code Reponse body 200 - OK ControllerConfig schema 201 - Created ControllerConfig schema 401 - Unauthorized Empty 3.2.3. /apis/machineconfiguration.openshift.io/v1/controllerconfigs/{name}/status Table 3.15. Global path parameters Parameter Type Description name string name of the ControllerConfig HTTP method GET Description read status of the specified ControllerConfig Table 3.16. 
HTTP responses HTTP code Reponse body 200 - OK ControllerConfig schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ControllerConfig Table 3.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.18. HTTP responses HTTP code Reponse body 200 - OK ControllerConfig schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ControllerConfig Table 3.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.20. Body parameters Parameter Type Description body ControllerConfig schema Table 3.21. HTTP responses HTTP code Reponse body 200 - OK ControllerConfig schema 201 - Created ControllerConfig schema 401 - Unauthorized Empty
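The endpoints above can also be exercised programmatically. The following sketch, which is not part of the product documentation, uses the upstream kubernetes Python client to list ControllerConfig objects and print the rotating certificate data tracked in their status; it assumes the client package is installed and that the local kubeconfig has permission to read machineconfiguration.openshift.io resources.

from kubernetes import client, config

# Load credentials from the local kubeconfig (for example, after an oc login).
config.load_kube_config()
api = client.CustomObjectsApi()

# Equivalent to GET /apis/machineconfiguration.openshift.io/v1/controllerconfigs
cfgs = api.list_cluster_custom_object(
    group="machineconfiguration.openshift.io",
    version="v1",
    plural="controllerconfigs",
)

for item in cfgs.get("items", []):
    name = item["metadata"]["name"]
    certs = item.get("status", {}).get("controllerCertificates") or []
    print(f"ControllerConfig {name}: {len(certs)} tracked certificates")
    for cert in certs:
        # notAfter is the upper validity boundary described in .status.controllerCertificates[]
        print(f"  subject={cert.get('subject')} notAfter={cert.get('notAfter')}")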
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/machine_apis/controllerconfig-machineconfiguration-openshift-io-v1
Deploying a Distributed Compute Node (DCN) architecture
Deploying a Distributed Compute Node (DCN) architecture Red Hat OpenStack Services on OpenShift 18.0 Edge and storage configuration for Red Hat OpenStack Services on Openshift OpenStack Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/deploying_a_distributed_compute_node_dcn_architecture/index
Chapter 8. Volume size overrides
Chapter 8. Volume size overrides You can specify the desired size of storage resources provisioned for managed components. The default size for Clair and the PostgreSQL databases is 50Gi . You can now choose a large enough capacity upfront, either for performance reasons or in the case where your storage backend does not have resize capability. In the following example, the volume size for the Clair and the Quay PostgreSQL databases has been set to 70Gi : apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: quay-example namespace: quay-enterprise spec: configBundleSecret: config-bundle-secret components: - kind: objectstorage managed: false - kind: route managed: true - kind: tls managed: false - kind: clair managed: true overrides: volumeSize: 70Gi - kind: postgres managed: true overrides: volumeSize: 70Gi - kind: clairpostgres managed: true Note The volume size of the clairpostgres component cannot be overridden. To override the clairpostgres component, you must override the clair component. This is a known issue and will be fixed in a future version of Red Hat Quay. ( PROJQUAY-4301 )
[ "apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: quay-example namespace: quay-enterprise spec: configBundleSecret: config-bundle-secret components: - kind: objectstorage managed: false - kind: route managed: true - kind: tls managed: false - kind: clair managed: true overrides: volumeSize: 70Gi - kind: postgres managed: true overrides: volumeSize: 70Gi - kind: clairpostgres managed: true" ]
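If the QuayRegistry resource is managed programmatically rather than by editing YAML by hand, the same overrides can be applied with a patch. The following sketch uses the kubernetes Python client; the custom resource plural name quayregistries and the merge-patch behavior of the client are assumptions to verify against your cluster and client version. Note that a merge patch replaces the whole components array, so every component you want to keep must be restated.

from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

patch = {
    "spec": {
        # A merge patch replaces the entire components list, so restate all of it,
        # not only the entries that carry the volumeSize overrides.
        "components": [
            {"kind": "objectstorage", "managed": False},
            {"kind": "route", "managed": True},
            {"kind": "tls", "managed": False},
            {"kind": "clair", "managed": True, "overrides": {"volumeSize": "70Gi"}},
            {"kind": "postgres", "managed": True, "overrides": {"volumeSize": "70Gi"}},
            {"kind": "clairpostgres", "managed": True},
        ]
    }
}

api.patch_namespaced_custom_object(
    group="quay.redhat.com",
    version="v1",
    namespace="quay-enterprise",
    plural="quayregistries",  # assumed plural name for the QuayRegistry CRD
    name="quay-example",
    body=patch,
)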
https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/red_hat_quay_operator_features/operator-volume-size-overrides
Chapter 10. Replicating audit data in a JMS message broker
Chapter 10. Replicating audit data in a JMS message broker You can replicate KIE Server audit data to a Java Message Service (JMS) message broker, for example ActiveMQ or Artemis, and then dump the data into an external database schema so that you can improve the performance of your Spring Boot application by deleting the audit data from your application schema. If you configure your application to replicate data in a message broker, when an event occurs in KIE Server the record of that event is stored in the KIE Server database schema and it is sent to the message broker. You can then configure an external service to consume the message broker data into an exact replica of the application's database schema. The data is appended in the message broker and the external database every time an event is produced by KIE Server. Note Only audit data is stored in the message broker. No other data is replicated. Prerequisites You have an existing Red Hat Process Automation Manager Spring Boot project. Procedure Open the Spring Boot application's pom.xml file in a text editor. Add the KIE Server Spring Boot audit dependency to the pom.xml file: <dependency> <groupId>org.kie</groupId> <artifactId>kie-server-spring-boot-autoconfiguration-audit-replication</artifactId> <version>USD{version.org.kie}</version> </dependency> Add the dependency for your JMS client. The following example adds the Advanced Message Queuing Protocol (AMQP) dependency: <dependency> <groupId>org.amqphub.spring</groupId> <artifactId>amqp-10-jms-spring-boot-starter</artifactId> <version>2.2.6</version> </dependency> Add the JMS pool dependency: <dependency> <groupId>org.messaginghub</groupId> <artifactId>pooled-jms</artifactId> </dependency> To configure KIE Server audit replication to use queues, complete the following tasks: Add the following lines to your Spring Boot application's application.properties file: Add the properties required for your message broker client. The following example shows how to configure KIE Server for AMQP, where <JMS_HOST_PORT> is the port that the broker listens on and <USERNAME> and <PASSWORD> are the login credentials for the broker: Add the following lines to the application.properties file of the service that will consume the message broker data: Add the properties required for your message broker client to the application.properties file of the service that will consume the message broker data. The following example shows how to configure KIE Server for AMQP, where <JMS_HOST_PORT> is the port that your message broker listens on and <USERNAME> and <PASSWORD> are the login credentials for the message broker: To configure KIE Server audit replication to use topics, complete the following tasks: Add the following lines to your Spring Boot application's application.properties file: Add the properties required for your message broker client to the application.properties file of the service that will consume the message broker data. The following example shows how to configure KIE Server for AMQP, where <JMS_HOST_PORT> is the port that your message broker listens on and <USERNAME> and <PASSWORD> are the login credentials for the message broker: Add the following lines to the application.properties file of the service that will consume the message broker data: Add the properties required for your message broker client to the application.properties file of the service that will consume the message broker data. 
The following example shows how to configure KIE Server for AMQP, where <JMS_HOST_PORT> is the port that your message broker listens on and <USERNAME> and <PASSWORD> are the login credentials for the message broker: Optional: To configure the KIE Server that contains the replicated data to be read only, set the org.kie.server.rest.mode.readonly property in the application.properties file to true : Additional resources Section 10.1, "Spring Boot JMS audit replication parameters" 10.1. Spring Boot JMS audit replication parameters The following table describes the parameters used to configure JMS audit replication for Red Hat Process Automation Manager applications on Spring Boot. Table 10.1. Spring Boot JMS audit replication parameters Parameter Values Description kieserver.audit-replication.producer true, false Specifies whether the business application will act as a producer to replicate and send the JMS messages to either a queue or a topic. kieserver.audit-replication.consumer true, false Specifies whether the business application will act as a consumer to receive the JMS messages from either a queue or a topic. kieserver.audit-replication.queue string The name of the JMS queue to either send or consume messages. kieserver.audit-replication.topic string The name of the JMS topic to either send or consume messages. kieserver.audit-replication.topic.subscriber string The name of the topic subscriber. org.kie.server.rest.mode.readonly true, false Specifies read-only mode for the business application.
[ "<dependency> <groupId>org.kie</groupId> <artifactId>kie-server-spring-boot-autoconfiguration-audit-replication</artifactId> <version>USD{version.org.kie}</version> </dependency>", "<dependency> <groupId>org.amqphub.spring</groupId> <artifactId>amqp-10-jms-spring-boot-starter</artifactId> <version>2.2.6</version> </dependency>", "<dependency> <groupId>org.messaginghub</groupId> <artifactId>pooled-jms</artifactId> </dependency>", "kieserver.audit-replication.producer=true kieserver.audit-replication.queue=audit-queue", "amqphub.amqp10jms.remote-url=amqp://<JMS_HOST_PORT> amqphub.amqp10jms.username=<USERNAME> amqphub.amqp10jms.password=<PASSWORD> amqphub.amqp10jms.pool.enabled=true", "kieserver.audit-replication.consumer=true kieserver.audit-replication.queue=audit-queue", "amqphub.amqp10jms.remote-url=amqp://<JMS_HOST_PORT> amqphub.amqp10jms.username=<USERNAME> amqphub.amqp10jms.password=<PASSWORD> amqphub.amqp10jms.pool.enabled=true", "kieserver.audit-replication.producer=true kieserver.audit-replication.topic=audit-topic", "spring.jms.pub-sub-domain=true amqphub.amqp10jms.remote-url=amqp://<JMS_HOST_PORT> amqphub.amqp10jms.username=<USERNAME> amqphub.amqp10jms.password=<PASSWORD> amqphub.amqp10jms.pool.enabled=true", "kieserver.audit-replication.consumer=true kieserver.audit-replication.topic=audit-topic::jbpm kieserver.audit-replication.topic.subscriber=jbpm spring.jms.pub-sub-domain=true", "amqphub.amqp10jms.remote-url=amqp://<JMS_HOST_PORT> amqphub.amqp10jms.username=<USERNAME> amqphub.amqp10jms.password=<PASSWORD> amqphub.amqp10jms.pool.enabled=true amqphub.amqp10jms.clientId=jbpm", "org.kie.server.rest.mode.readonly=true" ]
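The external service that consumes the replicated audit data can be written with any AMQP 1.0 client. The following is a minimal sketch of such a consumer in Python using the python-qpid-proton library; the broker URL, credentials, and queue name are placeholders, and writing the received messages into the replica database schema is left out.

from proton.handlers import MessagingHandler
from proton.reactor import Container

class AuditConsumer(MessagingHandler):
    """Receives replicated KIE Server audit messages from the audit queue."""

    def __init__(self, url, address, user, password):
        super(AuditConsumer, self).__init__()
        self.url = url
        self.address = address
        self.user = user
        self.password = password

    def on_start(self, event):
        conn = event.container.connect(
            self.url, user=self.user, password=self.password, allowed_mechs="PLAIN"
        )
        event.container.create_receiver(conn, self.address)

    def on_message(self, event):
        # In a real service this is where the event would be persisted
        # into the replica of the application's database schema.
        print("audit event:", event.message.body)

# Placeholders: replace with your broker host, port, credentials, and queue name.
Container(AuditConsumer("amqp://broker-host:5672", "audit-queue", "admin", "admin")).run()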
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/integrating_red_hat_process_automation_manager_with_other_products_and_components/spring-boot-jms-audit-proc_business-applications
Chapter 21. PostgreSQL
Chapter 21. PostgreSQL PostgreSQL is an Object-Relational database management system (DBMS). [19] In Red Hat Enterprise Linux, the postgresql-server package provides PostgreSQL. Enter the following command to see if the postgresql-server package is installed: If it is not installed, use the yum utility as root to install it: 21.1. PostgreSQL and SELinux When PostgreSQL is enabled, it runs confined by default. Confined processes run in their own domains, and are separated from other confined processes. If a confined process is compromised by an attacker, depending on SELinux policy configuration, an attacker's access to resources and the possible damage they can do is limited. The following example demonstrates the PostgreSQL processes running in their own domain. This example assumes the postgresql-server package is installed: Run the getenforce command to confirm SELinux is running in enforcing mode: The command returns Enforcing when SELinux is running in enforcing mode. Enter the following command as the root user to start postgresql : Confirm that the service is running. The output should include the information below (only the time stamp will differ): Enter the following command to view the postgresql processes: The SELinux context associated with the postgresql processes is system_u:system_r:postgresql_t:s0 . The second last part of the context, postgresql_t , is the type. A type defines a domain for processes and a type for files. In this case, the postgresql processes are running in the postgresql_t domain. [19] See the PostgreSQL project page for more information.
[ "~]# rpm -q postgresql-server", "~]# yum install postgresql-server", "~]USD getenforce Enforcing", "~]# systemctl start postgresql.service", "~]# systemctl status postgresql.service postgresql.service - PostgreSQL database server Loaded: loaded (/usr/lib/systemd/system/postgresql.service; disabled) Active: active (running) since Mon 2013-08-05 14:57:49 CEST; 12s", "~]USD ps -eZ | grep postgres system_u:system_r:postgresql_t:s0 395 ? 00:00:00 postmaster system_u:system_r:postgresql_t:s0 397 ? 00:00:00 postmaster system_u:system_r:postgresql_t:s0 399 ? 00:00:00 postmaster system_u:system_r:postgresql_t:s0 400 ? 00:00:00 postmaster system_u:system_r:postgresql_t:s0 401 ? 00:00:00 postmaster system_u:system_r:postgresql_t:s0 402 ? 00:00:00 postmaster" ]
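The confinement check can also be scripted. The following sketch is not from this guide; it uses only the Python standard library to read each process's SELinux context from /proc/<pid>/attr/current and prints the contexts of the PostgreSQL processes, which are expected to show the postgresql_t domain.

import os

def selinux_context(pid: str) -> str:
    with open(f"/proc/{pid}/attr/current") as f:
        return f.read().strip("\x00\n")

for pid in filter(str.isdigit, os.listdir("/proc")):
    try:
        with open(f"/proc/{pid}/comm") as f:
            comm = f.read().strip()
        if "postgres" in comm or "postmaster" in comm:
            # Expected output resembles: system_u:system_r:postgresql_t:s0
            print(pid, comm, selinux_context(pid))
    except (FileNotFoundError, PermissionError, ProcessLookupError):
        continue  # the process exited or is not readable; skip it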
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/chap-managing_confined_services-postgresql
Preface
Preface Thank you for your interest in Red Hat Ansible Automation Platform. Ansible Automation Platform is a commercial offering that helps teams manage complex multi-tier deployments by adding control, knowledge, and delegation to Ansible-powered environments. This guide helps you to understand the installation requirements and processes behind installing Ansible Automation Platform. This document has been updated to include information for the latest release of Ansible Automation Platform.
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/rpm_installation/pr01
14.16.2. Learning about the Host Physical Machine CPU Model
14.16.2. Learning about the Host Physical Machine CPU Model The virsh capabilities command displays an XML document describing the capabilities of the hypervisor connection and host physical machine. The XML schema displayed has been extended to provide information about the host physical machine CPU model. One of the big challenges in describing a CPU model is that every architecture has a different approach to exposing their capabilities. On x86, the capabilities of a modern CPU are exposed via the CPUID instruction. Essentially this comes down to a set of 32-bit integers with each bit given a specific meaning. Fortunately AMD and Intel agree on common semantics for these bits. Other hypervisors expose the notion of CPUID masks directly in their guest virtual machine configuration format. However, QEMU/KVM supports far more than just the x86 architecture, so CPUID is clearly not suitable as the canonical configuration format. QEMU ended up using a scheme which combines a CPU model name string, with a set of named options. On x86, the CPU model maps to a baseline CPUID mask, and the options can be used to then toggle bits in the mask on or off. libvirt decided to follow this lead and uses a combination of a model name and options. It is not practical to have a database listing all known CPU models, so libvirt has a small list of baseline CPU model names. It chooses the one that shares the greatest number of CPUID bits with the actual host physical machine CPU and then lists the remaining bits as named features. Notice that libvirt does not display which features the baseline CPU contains. This might seem like a flaw at first, but as will be explained in this section, it is not actually necessary to know this information.
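To see how this model-plus-features scheme surfaces through the API, the short sketch below, which is an illustration rather than part of this guide, uses the libvirt Python bindings to fetch the same capabilities XML that virsh capabilities prints and then extracts the baseline CPU model name and the named feature bits for the host physical machine.

import xml.etree.ElementTree as ET
import libvirt  # provided by the libvirt-python (python-libvirt) package

conn = libvirt.open("qemu:///system")
caps = ET.fromstring(conn.getCapabilities())  # same XML as `virsh capabilities`

cpu = caps.find("./host/cpu")
print("arch    :", cpu.findtext("arch"))
print("model   :", cpu.findtext("model"))  # baseline CPU model chosen by libvirt
print("features:", [f.get("name") for f in cpu.findall("feature")])

conn.close()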
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-guest_virtual_machine_cpu_model_configuration-learning_about_the_host_physical_machine_cpu_model
Chapter 15. Deleting applications
Chapter 15. Deleting applications You can delete applications created in your project. 15.1. Deleting applications using the Developer perspective You can delete an application and all of its associated components using the Topology view in the Developer perspective: Click the application you want to delete to see the side panel with the resource details of the application. Click the Actions drop-down menu displayed on the upper right of the panel, and select Delete Application to see a confirmation dialog box. Enter the name of the application and click Delete to delete it. You can also right-click the application you want to delete and click Delete Application to delete it.
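The procedure above is performed entirely in the web console. As a rough, unofficial sketch, a similar cleanup can be done from a terminal by deleting every resource that carries the application's grouping label; the label key app.kubernetes.io/part-of and the use of the all resource category are assumptions rather than statements from this chapter, so verify them against your cluster before relying on this.

import subprocess

def delete_application(app_name: str, namespace: str) -> None:
    # Assumption: the Topology view groups an application's resources under
    # the app.kubernetes.io/part-of=<application name> label.
    selector = f"app.kubernetes.io/part-of={app_name}"
    subprocess.run(
        ["oc", "delete", "all", "-l", selector, "-n", namespace],
        check=True,
    )

delete_application("my-app", "my-project")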
null
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/building_applications/odc-deleting-applications
5.199. module-init-tools
5.199. module-init-tools 5.199.1. RHBA-2012:0871 - module-init-tools bug fix and enhancement update Updated module-init-tools packages that fix two bugs and add one enhancement are now available for Red Hat Enterprise Linux 6. The module-init-tools packages include various programs needed for automatic loading and unloading of modules under 2.6 kernels, as well as other module management programs. Device drivers and file systems are two examples of loaded and unloaded modules. Bug Fixes BZ# 670613 Previously, on low-memory systems (such as low-memory high-performance infrastructure, or HPC, nodes or virtual machines), depmod could use an excessive amount of memory. As a consequence, the depmod process was killed by the OOM (out of memory) mechanism, and the system was unable to boot. With this update, the free() function is correctly used in several places in the code so that depmod's memory consumption is reduced. BZ# 673100 Previously, if the "override" keyword was present in the depmod.conf file without any parameters specified, the depmod utility terminated unexpectedly with a segmentation fault. A patch has been applied to ensure that the depmod utility no longer crashes and a syntax warning is displayed instead. Enhancement BZ# 761511 This update adds the "backports" directory to the search path in the depmod.conf file, which is necessary to support integration of the compat-wireless package into kernel packages. All users of module-init-tools are advised to upgrade to these updated packages, which fix these bugs and add this enhancement.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/module-init-tools
Chapter 5. Minimum hardware recommendations
Chapter 5. Minimum hardware recommendations Ceph can run on non-proprietary commodity hardware. Small production clusters and development clusters can run without performance optimization with modest hardware. Note Disk space requirements are based on the Ceph daemons' default path under /var/lib/ceph/ directory. Process Criteria Minimum Recommended ceph-osd Processor 1x AMD64 or Intel 64 RAM For BlueStore OSDs, Red Hat typically recommends a baseline of 16 GB of RAM per OSD host, with an additional 5 GB of RAM per daemon. OS Disk 1x OS disk per host Volume Storage 1x storage drive per daemon block.db Optional, but Red Hat recommended, 1x SSD or NVMe or Optane partition or logical volume per daemon. Sizing is 4% of block.data for BlueStore for object, file and mixed workloads and 1% of block.data for the BlueStore for Block Device, Openstack cinder, and Openstack cinder workloads. block.wal Optional, 1x SSD or NVMe or Optane partition or logical volume per daemon. Use a small size, for example 10 GB, and only if it's faster than the block.db device. Network 2x 1GB Ethernet NICs ceph-mon Processor 1x AMD64 or Intel 64 RAM 1 GB per daemon Disk Space 15 GB per daemon (50 GB recommended) Monitor Disk Optionally,1x SSD disk for leveldb monitor data. Network 2x 1 GB Ethernet NICs ceph-mgr Processor 1x AMD64 or Intel 64 RAM 1 GB per daemon Network 2x 1 GB Ethernet NICs ceph-radosgw Processor 1x AMD64 or Intel 64 RAM 1 GB per daemon Disk Space 5 GB per daemon Network 1x 1 GB Ethernet NICs ceph-mds Processor 1x AMD64 or Intel 64 RAM 2 GB per daemon This number is highly dependent on the configurable MDS cache size. The RAM requirement is typically twice as much as the amount set in the mds_cache_memory_limit configuration setting. Note also that this is the memory for your daemon, not the overall system memory. Disk Space 2 MB per daemon, plus any space required for logging, which might vary depending on the configured log levels. Network 2x 1 GB Ethernet NICs Note that this is the same network as the OSDs. If you have a 10 GB network on your OSDs you should use the same on your MDS so that the MDS is not disadvantaged when it comes to latency.
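The block.db sizing rule above, 4% of block.data for object, file, and mixed workloads and 1% for block workloads, is simple to turn into a quick calculation. The helper below only illustrates that arithmetic.

def block_db_size_gb(block_data_gb: float, workload: str = "object") -> float:
    """Suggested block.db size per the 4% / 1% guidance in the table above."""
    ratio = 0.04 if workload in ("object", "file", "mixed") else 0.01
    return block_data_gb * ratio

# Example: a 4096 GiB data device.
print(block_db_size_gb(4096, "object"))  # 163.84 GiB
print(block_db_size_gb(4096, "block"))   # 40.96 GiB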
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/hardware_guide/minimum-hardware-recommendations_hw
Chapter 2. Configuring a private cluster
Chapter 2. Configuring a private cluster After you install an OpenShift Container Platform version 4.10 cluster, you can set some of its core components to be private. 2.1. About private clusters By default, OpenShift Container Platform is provisioned using publicly-accessible DNS and endpoints. You can set the DNS, Ingress Controller, and API server to private after you deploy your private cluster. Important If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private. DNS If you install OpenShift Container Platform on installer-provisioned infrastructure, the installation program creates records in a pre-existing public zone and, where possible, creates a private zone for the cluster's own DNS resolution. In both the public zone and the private zone, the installation program or cluster creates DNS entries for *.apps , for the Ingress object, and api , for the API server. The *.apps records in the public and private zone are identical, so when you delete the public zone, the private zone seamlessly provides all DNS resolution for the cluster. Ingress Controller Because the default Ingress object is created as public, the load balancer is internet-facing and in the public subnets. You can replace the default Ingress Controller with an internal one. API server By default, the installation program creates appropriate network load balancers for the API server to use for both internal and external traffic. On Amazon Web Services (AWS), separate public and private load balancers are created. The load balancers are identical except that an additional port is available on the internal one for use within the cluster. Although the installation program automatically creates or destroys the load balancer based on API server requirements, the cluster does not manage or maintain them. As long as you preserve the cluster's access to the API server, you can manually modify or move the load balancers. For the public load balancer, port 6443 is open and the health check is configured for HTTPS against the /readyz path. On Google Cloud Platform, a single load balancer is created to manage both internal and external API traffic, so you do not need to modify the load balancer. On Microsoft Azure, both public and private load balancers are created. However, because of limitations in current implementation, you just retain both load balancers in a private cluster. 2.2. Setting DNS to private After you deploy a cluster, you can modify its DNS to use only a private zone. Procedure Review the DNS custom resource for your cluster: USD oc get dnses.config.openshift.io/cluster -o yaml Example output apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: "2019-10-25T18:27:09Z" generation: 2 name: cluster resourceVersion: "37966" selfLink: /apis/config.openshift.io/v1/dnses/cluster uid: 0e714746-f755-11f9-9cb1-02ff55d8f976 spec: baseDomain: <base_domain> privateZone: tags: Name: <infrastructure_id>-int kubernetes.io/cluster/<infrastructure_id>: owned publicZone: id: Z2XXXXXXXXXXA4 status: {} Note that the spec section contains both a private and a public zone. 
Patch the DNS custom resource to remove the public zone: USD oc patch dnses.config.openshift.io/cluster --type=merge --patch='{"spec": {"publicZone": null}}' dns.config.openshift.io/cluster patched Because the Ingress Controller consults the DNS definition when it creates Ingress objects, when you create or modify Ingress objects, only private records are created. Important DNS records for the existing Ingress objects are not modified when you remove the public zone. Optional: Review the DNS custom resource for your cluster and confirm that the public zone was removed: USD oc get dnses.config.openshift.io/cluster -o yaml Example output apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: "2019-10-25T18:27:09Z" generation: 2 name: cluster resourceVersion: "37966" selfLink: /apis/config.openshift.io/v1/dnses/cluster uid: 0e714746-f755-11f9-9cb1-02ff55d8f976 spec: baseDomain: <base_domain> privateZone: tags: Name: <infrastructure_id>-int kubernetes.io/cluster/<infrastructure_id>-wfpg4: owned status: {} 2.3. Setting the Ingress Controller to private After you deploy a cluster, you can modify its Ingress Controller to use only a private zone. Procedure Modify the default Ingress Controller to use only an internal endpoint: USD oc replace --force --wait --filename - <<EOF apiVersion: operator.openshift.io/v1 kind: IngressController metadata: namespace: openshift-ingress-operator name: default spec: endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: Internal EOF Example output ingresscontroller.operator.openshift.io "default" deleted ingresscontroller.operator.openshift.io/default replaced The public DNS entry is removed, and the private zone entry is updated. 2.4. Restricting the API server to private After you deploy a cluster to Amazon Web Services (AWS) or Microsoft Azure, you can reconfigure the API server to use only the private zone. Prerequisites Install the OpenShift CLI ( oc ). Have access to the web console as a user with admin privileges. Procedure In the web portal or console for AWS or Azure, take the following actions: Locate and delete appropriate load balancer component. For AWS, delete the external load balancer. The API DNS entry in the private zone already points to the internal load balancer, which uses an identical configuration, so you do not need to modify the internal load balancer. For Azure, delete the api-internal rule for the load balancer. Delete the api.USDclustername.USDyourdomain DNS entry in the public zone. Remove the external load balancers: Important You can run the following steps only for an installer-provisioned infrastructure (IPI) cluster. For a user-provisioned infrastructure (UPI) cluster, you must manually remove or disable the external load balancers. From your terminal, list the cluster machines: USD oc get machine -n openshift-machine-api Example output NAME STATE TYPE REGION ZONE AGE lk4pj-master-0 running m4.xlarge us-east-1 us-east-1a 17m lk4pj-master-1 running m4.xlarge us-east-1 us-east-1b 17m lk4pj-master-2 running m4.xlarge us-east-1 us-east-1a 17m lk4pj-worker-us-east-1a-5fzfj running m4.xlarge us-east-1 us-east-1a 15m lk4pj-worker-us-east-1a-vbghs running m4.xlarge us-east-1 us-east-1a 15m lk4pj-worker-us-east-1b-zgpzg running m4.xlarge us-east-1 us-east-1b 15m You modify the control plane machines, which contain master in the name, in the following step. Remove the external load balancer from each control plane machine. 
Edit a control plane Machine object to remove the reference to the external load balancer: USD oc edit machines -n openshift-machine-api <master_name> 1 1 Specify the name of the control plane, or master, Machine object to modify. Remove the lines that describe the external load balancer, which are marked in the following example, and save and exit the object specification: ... spec: providerSpec: value: ... loadBalancers: - name: lk4pj-ext 1 type: network 2 - name: lk4pj-int type: network 1 2 Delete this line. Repeat this process for each of the machines that contains master in the name. 2.4.1. Configuring the Ingress Controller endpoint publishing scope to Internal When a cluster administrator installs a new cluster without specifying that the cluster is private, the default Ingress Controller is created with a scope set to External . Cluster administrators can change an External scoped Ingress Controller to Internal . Prerequisites You installed the oc CLI. Procedure To change an External scoped Ingress Controller to Internal , enter the following command: USD oc -n openshift-ingress-operator patch ingresscontrollers/default --type=merge --patch='{"spec":{"endpointPublishingStrategy":{"type":"LoadBalancerService","loadBalancer":{"scope":"Internal"}}}}' To check the status of the Ingress Controller, enter the following command: USD oc -n openshift-ingress-operator get ingresscontrollers/default -o yaml The Progressing status condition indicates whether you must take further action. For example, the status condition can indicate that you need to delete the service by entering the following command: USD oc -n openshift-ingress delete services/router-default If you delete the service, the Ingress Operator recreates it as Internal .
[ "oc get dnses.config.openshift.io/cluster -o yaml", "apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: \"2019-10-25T18:27:09Z\" generation: 2 name: cluster resourceVersion: \"37966\" selfLink: /apis/config.openshift.io/v1/dnses/cluster uid: 0e714746-f755-11f9-9cb1-02ff55d8f976 spec: baseDomain: <base_domain> privateZone: tags: Name: <infrastructure_id>-int kubernetes.io/cluster/<infrastructure_id>: owned publicZone: id: Z2XXXXXXXXXXA4 status: {}", "oc patch dnses.config.openshift.io/cluster --type=merge --patch='{\"spec\": {\"publicZone\": null}}' dns.config.openshift.io/cluster patched", "oc get dnses.config.openshift.io/cluster -o yaml", "apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: \"2019-10-25T18:27:09Z\" generation: 2 name: cluster resourceVersion: \"37966\" selfLink: /apis/config.openshift.io/v1/dnses/cluster uid: 0e714746-f755-11f9-9cb1-02ff55d8f976 spec: baseDomain: <base_domain> privateZone: tags: Name: <infrastructure_id>-int kubernetes.io/cluster/<infrastructure_id>-wfpg4: owned status: {}", "oc replace --force --wait --filename - <<EOF apiVersion: operator.openshift.io/v1 kind: IngressController metadata: namespace: openshift-ingress-operator name: default spec: endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: Internal EOF", "ingresscontroller.operator.openshift.io \"default\" deleted ingresscontroller.operator.openshift.io/default replaced", "oc get machine -n openshift-machine-api", "NAME STATE TYPE REGION ZONE AGE lk4pj-master-0 running m4.xlarge us-east-1 us-east-1a 17m lk4pj-master-1 running m4.xlarge us-east-1 us-east-1b 17m lk4pj-master-2 running m4.xlarge us-east-1 us-east-1a 17m lk4pj-worker-us-east-1a-5fzfj running m4.xlarge us-east-1 us-east-1a 15m lk4pj-worker-us-east-1a-vbghs running m4.xlarge us-east-1 us-east-1a 15m lk4pj-worker-us-east-1b-zgpzg running m4.xlarge us-east-1 us-east-1b 15m", "oc edit machines -n openshift-machine-api <master_name> 1", "spec: providerSpec: value: loadBalancers: - name: lk4pj-ext 1 type: network 2 - name: lk4pj-int type: network", "oc -n openshift-ingress-operator patch ingresscontrollers/default --type=merge --patch='{\"spec\":{\"endpointPublishingStrategy\":{\"type\":\"LoadBalancerService\",\"loadBalancer\":{\"scope\":\"Internal\"}}}}'", "oc -n openshift-ingress-operator get ingresscontrollers/default -o yaml", "oc -n openshift-ingress delete services/router-default" ]
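If you prefer to script the DNS step, the sketch below simply wraps the documented oc commands with the Python standard library; it checks whether a public zone is still present before issuing the merge patch. It assumes oc is on the PATH and that you are already logged in with sufficient privileges.

import json
import subprocess

def public_zone_present() -> bool:
    out = subprocess.run(
        ["oc", "get", "dnses.config.openshift.io/cluster", "-o", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out).get("spec", {}).get("publicZone") is not None

if public_zone_present():
    subprocess.run(
        ["oc", "patch", "dnses.config.openshift.io/cluster", "--type=merge",
         "--patch", '{"spec": {"publicZone": null}}'],
        check=True,
    )
    print("public zone removed; new Ingress objects will receive private records only")
else:
    print("cluster DNS is already private")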
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/post-installation_configuration/configuring-private-cluster
Chapter 10. Understanding and creating service accounts
Chapter 10. Understanding and creating service accounts 10.1. Service accounts overview A service account is an OpenShift Container Platform account that allows a component to directly access the API. Service accounts are API objects that exist within each project. Service accounts provide a flexible way to control API access without sharing a regular user's credentials. When you use the OpenShift Container Platform CLI or web console, your API token authenticates you to the API. You can associate a component with a service account so that they can access the API without using a regular user's credentials. For example, service accounts can allow: Replication controllers to make API calls to create or delete pods. Applications inside containers to make API calls for discovery purposes. External applications to make API calls for monitoring or integration purposes. Each service account's user name is derived from its project and name: system:serviceaccount:<project>:<name> Every service account is also a member of two groups: Group Description system:serviceaccounts Includes all service accounts in the system. system:serviceaccounts:<project> Includes all service accounts in the specified project. Each service account automatically contains two secrets: An API token Credentials for the OpenShift Container Registry The generated API token and registry credentials do not expire, but you can revoke them by deleting the secret. When you delete the secret, a new one is automatically generated to take its place. 10.2. Creating service accounts You can create a service account in a project and grant it permissions by binding it to a role. Procedure Optional: To view the service accounts in the current project: USD oc get sa Example output NAME SECRETS AGE builder 2 2d default 2 2d deployer 2 2d To create a new service account in the current project: USD oc create sa <service_account_name> 1 1 To create a service account in a different project, specify -n <project_name> . Example output serviceaccount "robot" created Tip You can alternatively apply the following YAML to create the service account: apiVersion: v1 kind: ServiceAccount metadata: name: <service_account_name> namespace: <current_project> Optional: View the secrets for the service account: USD oc describe sa robot Example output Name: robot Namespace: project1 Labels: <none> Annotations: <none> Image pull secrets: robot-dockercfg-qzbhb Mountable secrets: robot-dockercfg-qzbhb Tokens: robot-token-f4khf Events: <none> 10.3. Examples of granting roles to service accounts You can grant roles to service accounts in the same way that you grant roles to a regular user account. You can modify the service accounts for the current project. For example, to add the view role to the robot service account in the top-secret project: USD oc policy add-role-to-user view system:serviceaccount:top-secret:robot Tip You can alternatively apply the following YAML to add the role: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: view namespace: top-secret roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: view subjects: - kind: ServiceAccount name: robot namespace: top-secret You can also grant access to a specific service account in a project. 
For example, from the project to which the service account belongs, use the -z flag and specify the <service_account_name> USD oc policy add-role-to-user <role_name> -z <service_account_name> Important If you want to grant access to a specific service account in a project, use the -z flag. Using this flag helps prevent typos and ensures that access is granted to only the specified service account. Tip You can alternatively apply the following YAML to add the role: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: <rolebinding_name> namespace: <current_project_name> roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: <role_name> subjects: - kind: ServiceAccount name: <service_account_name> namespace: <current_project_name> To modify a different namespace, you can use the -n option to indicate the project namespace it applies to, as shown in the following examples. For example, to allow all service accounts in all projects to view resources in the my-project project: USD oc policy add-role-to-group view system:serviceaccounts -n my-project Tip You can alternatively apply the following YAML to add the role: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: view namespace: my-project roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: view subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:serviceaccounts To allow all service accounts in the managers project to edit resources in the my-project project: USD oc policy add-role-to-group edit system:serviceaccounts:managers -n my-project Tip You can alternatively apply the following YAML to add the role: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: edit namespace: my-project roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: edit subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:serviceaccounts:managers
[ "system:serviceaccount:<project>:<name>", "oc get sa", "NAME SECRETS AGE builder 2 2d default 2 2d deployer 2 2d", "oc create sa <service_account_name> 1", "serviceaccount \"robot\" created", "apiVersion: v1 kind: ServiceAccount metadata: name: <service_account_name> namespace: <current_project>", "oc describe sa robot", "Name: robot Namespace: project1 Labels: <none> Annotations: <none> Image pull secrets: robot-dockercfg-qzbhb Mountable secrets: robot-dockercfg-qzbhb Tokens: robot-token-f4khf Events: <none>", "oc policy add-role-to-user view system:serviceaccount:top-secret:robot", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: view namespace: top-secret roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: view subjects: - kind: ServiceAccount name: robot namespace: top-secret", "oc policy add-role-to-user <role_name> -z <service_account_name>", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: <rolebinding_name> namespace: <current_project_name> roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: <role_name> subjects: - kind: ServiceAccount name: <service_account_name> namespace: <current_project_name>", "oc policy add-role-to-group view system:serviceaccounts -n my-project", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: view namespace: my-project roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: view subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:serviceaccounts", "oc policy add-role-to-group edit system:serviceaccounts:managers -n my-project", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: edit namespace: my-project roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: edit subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:serviceaccounts:managers" ]
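The same objects can be created through the API instead of the CLI. The sketch below uses the kubernetes Python client with plain dictionary bodies that mirror the YAML shown above; it assumes a kubeconfig whose user can create service accounts and role bindings in the target project.

from kubernetes import client, config

config.load_kube_config()
project = "top-secret"

# Equivalent of: oc create sa robot -n top-secret
client.CoreV1Api().create_namespaced_service_account(
    namespace=project,
    body={"apiVersion": "v1", "kind": "ServiceAccount", "metadata": {"name": "robot"}},
)

# Equivalent of: oc policy add-role-to-user view system:serviceaccount:top-secret:robot
client.RbacAuthorizationV1Api().create_namespaced_role_binding(
    namespace=project,
    body={
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "RoleBinding",
        "metadata": {"name": "view", "namespace": project},
        "roleRef": {
            "apiGroup": "rbac.authorization.k8s.io",
            "kind": "ClusterRole",
            "name": "view",
        },
        "subjects": [
            {"kind": "ServiceAccount", "name": "robot", "namespace": project}
        ],
    },
)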
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/authentication_and_authorization/understanding-and-creating-service-accounts
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. To propose improvements, open a Jira issue and describe your suggested changes. Provide as much detail as possible to enable us to address your request quickly. Prerequisite You have a Red Hat Customer Portal account. This account enables you to log in to the Red Hat Jira Software instance. If you do not have an account, you will be prompted to create one. Procedure Click the following: Create issue . In the Summary text box, enter a brief description of the issue. In the Description text box, provide the following information: The URL of the page where you found the issue. A detailed description of the issue. You can leave the information in any other fields at their default values. Add a reporter name. Click Create to submit the Jira issue to the documentation team. Thank you for taking the time to provide feedback.
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/kafka_configuration_properties/proc-providing-feedback-on-redhat-documentation
Chapter 3. Running distributed workloads
Chapter 3. Running distributed workloads In OpenShift AI, you can run a distributed workload from a notebook or from a pipeline. You can run distributed workloads in a disconnected environment if you can access all of the required software from that environment. For example, you must be able to access a Ray cluster image, and the data sets and Python dependencies used by the workload, from the disconnected environment. 3.1. Running distributed data science workloads from notebooks To run a distributed workload from a notebook, you must configure a Ray cluster. You must also provide environment-specific information such as cluster authentication details. The examples in this section refer to the JupyterLab integrated development environment (IDE). 3.1.1. Downloading the demo notebooks from the CodeFlare SDK The demo notebooks from the CodeFlare SDK provide guidelines on how to use the CodeFlare stack in your own notebooks. Download the demo notebooks so that you can learn how to run the notebooks locally. Prerequisites You can access a data science cluster that is configured to run distributed workloads as described in Managing distributed workloads . You can access a data science project that contains a workbench, and the workbench is running a default notebook image that contains the CodeFlare SDK, for example, the Standard Data Science notebook. For information about projects and workbenches, see Working on data science projects . You have Admin access for the data science project. If you created the project, you automatically have Admin access. If you did not create the project, your cluster administrator must give you Admin access. You have logged in to Red Hat OpenShift AI, started your workbench, and logged in to JupyterLab. Procedure In the JupyterLab interface, click File > New > Notebook . Specify your preferred Python version, and then click Select . A new notebook is created in an .ipynb file. Add the following code to a cell in the new notebook: Code to download the demo notebooks from codeflare_sdk import copy_demo_nbs copy_demo_nbs() Select the cell, and click Run > Run selected cell . After a few seconds, the copy_demo_nbs() function copies the demo notebooks that are packaged with the currently installed version of the CodeFlare SDK, and clones them into the demo-notebooks folder. In the left navigation pane, right-click the new notebook and click Delete . Click Delete to confirm. Verification Locate the downloaded demo notebooks in the JupyterLab interface, as follows: In the left navigation pane, double-click demo-notebooks . Double-click additional-demos and verify that the folder contains several demo notebooks. Click demo-notebooks . Double-click guided-demos and verify that the folder contains several demo notebooks. You can run these demo notebooks as described in Running the demo notebooks from the CodeFlare SDK . 3.1.2. Running the demo notebooks from the CodeFlare SDK To run the demo notebooks from the CodeFlare SDK, you must provide environment-specific information. In the examples in this procedure, you edit the demo notebooks in JupyterLab to provide the required information, and then run the notebooks. Prerequisites You can access a data science cluster that is configured to run distributed workloads as described in Managing distributed workloads . 
You can access the following software from your data science cluster: A Ray cluster image that is compatible with your hardware architecture The data sets and models to be used by the workload The Python dependencies for the workload, either in a Ray image or in your own Python Package Index (PyPI) server You can access a data science project that contains a workbench, and the workbench is running a default notebook image that contains the CodeFlare SDK, for example, the Standard Data Science notebook. For information about projects and workbenches, see Working on data science projects . You have Admin access for the data science project. If you created the project, you automatically have Admin access. If you did not create the project, your cluster administrator must give you Admin access. You have logged in to Red Hat OpenShift AI, started your workbench, and logged in to JupyterLab. You have downloaded the demo notebooks provided by the CodeFlare SDK, as described in Downloading the demo notebooks from the CodeFlare SDK . Procedure Check whether your cluster administrator has defined a default local queue for the Ray cluster. You can use the codeflare_sdk.list_local_queues() function to view all local queues in your current namespace, and the resource flavors associated with each local queue. Alternatively, you can use the OpenShift web console as follows: In the OpenShift web console, select your project from the Project list. Click Search , and from the Resources list, select LocalQueue to show the list of local queues for your project. If no local queue is listed, contact your cluster administrator. Review the details of each local queue: Click the local queue name. Click the YAML tab, and review the metadata.annotations section. If the kueue.x-k8s.io/default-queue annotation is set to 'true' , the queue is configured as the default local queue. Note If your cluster administrator does not define a default local queue, you must specify a local queue in each notebook. In the JupyterLab interface, open the demo-notebooks > guided-demos folder. Open all of the notebooks by double-clicking each notebook file. Notebook files have the .ipynb file name extension. In each notebook, ensure that the import section imports the required components from the CodeFlare SDK, as follows: Example import section from codeflare_sdk import Cluster, ClusterConfiguration, TokenAuthentication In each notebook, update the TokenAuthentication section to provide the token and server details to authenticate to the OpenShift cluster by using the CodeFlare SDK. You can find your token and server details as follows: In the OpenShift AI top navigation bar, click the application launcher icon ( ) and then click OpenShift Console to open the OpenShift web console. In the upper-right corner of the OpenShift web console, click your user name and select Copy login command . After you have logged in, click Display Token . In the Log in with this token section, find the required values as follows: The token value is the text after the --token= prefix. The server value is the text after the --server= prefix. Note The token and server values are security credentials, treat them with care. Do not save the token and server details in a notebook. Do not store the token and server details in Git. The token expires after 24 hours. 
Optional: If you want to use custom certificates, update the TokenAuthentication section to add the ca_cert_path parameter to specify the location of the custom certificates, as shown in the following example: Example authentication section auth = TokenAuthentication( token = "XXXXX", server = "XXXXX", skip_tls=False, ca_cert_path="/path/to/cert" ) auth.login() Alternatively, you can set the CF_SDK_CA_CERT_PATH environment variable to specify the location of the custom certificates. In each notebook, update the cluster configuration section as follows: If the namespace value is specified, replace the example value with the name of your project. If you omit this line, the Ray cluster is created in the current project. If the image value is specified, replace the example value with a link to a suitable Ray cluster image. The Python version in the Ray cluster image must be the same as the Python version in the workbench. If you omit this line, one of the following Ray cluster images is used by default, based on the Python version detected in the workbench: Python 3.9: quay.io/modh/ray:2.35.0-py39-cu121 Python 3.11: quay.io/modh/ray:2.35.0-py311-cu121 The default Ray images are compatible with NVIDIA GPUs that are supported by the specified CUDA version. The default images are AMD64 images, which might not work on other architectures. Additional ROCm-compatible Ray cluster images are compatible with AMD accelerators that are supported by the specified ROCm version. These images are AMD64 images, which might not work on other architectures. For information about the latest available training images and their preinstalled packages, including the CUDA and ROCm versions, see Red Hat OpenShift AI: Supported Configurations . If your cluster administrator has not configured a default local queue, specify the local queue for the Ray cluster, as shown in the following example: Example local queue assignment local_queue=" your_local_queue_name " Optional: Assign a dictionary of labels parameters to the Ray cluster for identification and management purposes, as shown in the following example: Example labels assignment labels = {"exampleLabel1": "exampleLabel1Value", "exampleLabel2": "exampleLabel2Value"} In the 2_basic_interactive.ipynb notebook, ensure that the following Ray cluster authentication code is included after the Ray cluster creation section: Ray cluster authentication code from codeflare_sdk import generate_cert generate_cert.generate_tls_cert(cluster.config.name, cluster.config.namespace) generate_cert.export_env(cluster.config.name, cluster.config.namespace) Note Mutual Transport Layer Security (mTLS) is enabled by default in the CodeFlare component in OpenShift AI. You must include the Ray cluster authentication code to enable the Ray client that runs within a notebook to connect to a secure Ray cluster that has mTLS enabled. Run the notebooks in the order indicated by the file-name prefix ( 0_ , 1_ , and so on). In each notebook, run each cell in turn, and review the cell output. If an error is shown, review the output to find information about the problem and the required corrective action. For example, replace any deprecated parameters as instructed. See also Troubleshooting common problems with distributed workloads for users . For more information about the interactive browser controls that you can use to simplify Ray cluster tasks when working within a Jupyter notebook, see Managing Ray clusters from within a Jupyter notebook . 
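Pulling the pieces of this procedure together, a single notebook cell along the following lines authenticates, creates a small Ray cluster, waits for it to become ready, and tears it down again. It is only a sketch: the ClusterConfiguration keyword arguments vary between CodeFlare SDK releases, so confirm them against the version installed in your workbench, and replace the token, server, queue, image, and project placeholders with your own values.

from codeflare_sdk import Cluster, ClusterConfiguration, TokenAuthentication

auth = TokenAuthentication(token="XXXXX", server="XXXXX", skip_tls=False)
auth.login()

# Keyword names are indicative; check them against your installed SDK version.
cluster = Cluster(ClusterConfiguration(
    name="demo-ray",
    namespace="my-project",                       # omit to use the current project
    num_workers=2,
    image="quay.io/modh/ray:2.35.0-py311-cu121",  # match the workbench Python version
    local_queue="your_local_queue_name",          # only needed without a default local queue
    labels={"exampleLabel1": "exampleLabel1Value"},
))

cluster.up()          # or use the Cluster Up browser control
cluster.wait_ready()
cluster.details()
cluster.down()        # delete the Ray cluster when you are finished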
Verification The notebooks run to completion without errors. In the notebooks, the output from the cluster.status() function or cluster.details() function indicates that the Ray cluster is Active . 3.1.3. Managing Ray clusters from within a Jupyter notebook You can use interactive browser controls to simplify Ray cluster tasks when working within a Jupyter notebook. The interactive browser controls provide an alternative to the equivalent commands, but do not replace them. You can continue to manage the Ray clusters by running commands within the notebook, for ease of use in scripts and pipelines. Several different interactive browser controls are available: When you run a cell that provides the cluster configuration, the notebook automatically shows the controls for starting or deleting the cluster. You can run the view_clusters() command to add controls that provide the following functionality: View a list of the Ray clusters that you can access. View cluster information, such as cluster status and allocated resources, for the selected Ray cluster. You can view this information from within the notebook, without switching to the OpenShift console or the Ray dashboard. Open the Ray dashboard directly from the notebook, to view the submitted jobs. Refresh the Ray cluster list and the cluster information for the selected cluster. You can add these controls to existing notebooks, or manage the Ray clusters from a separate notebook. The 3_widget_example.ipynb demo notebook shows all of the available interactive browser controls. In the example in this procedure, you create a new notebook to manage the Ray clusters, similar to the example provided in the 3_widget_example.ipynb demo notebook. Prerequisites You can access a data science cluster that is configured to run distributed workloads as described in Managing distributed workloads . You can access the following software from your data science cluster: A Ray cluster image that is compatible with your hardware architecture The data sets and models to be used by the workload The Python dependencies for the workload, either in a Ray image or in your own Python Package Index (PyPI) server You can access a data science project that contains a workbench, and the workbench is running a default notebook image that contains the CodeFlare SDK, for example, the Standard Data Science notebook. For information about projects and workbenches, see Working on data science projects . You have Admin access for the data science project. If you created the project, you automatically have Admin access. If you did not create the project, your cluster administrator must give you Admin access. You have logged in to Red Hat OpenShift AI, started your workbench, and logged in to JupyterLab. You have downloaded the demo notebooks provided by the CodeFlare SDK, as described in Downloading the demo notebooks from the CodeFlare SDK . Procedure Run all of the demo notebooks in the order indicated by the file-name prefix ( 0_ , 1_ , and so on), as described in Running the demo notebooks from the CodeFlare SDK . In each demo notebook, when you run the cluster configuration step, the following interactive controls are automatically shown in the notebook: Cluster Up : You can click this button to start the Ray cluster. This button is equivalent to the cluster.up() command. When you click this button, a message indicates whether the cluster was successfully created. Cluster Down : You can click this button to delete the Ray cluster. 
This button is equivalent to the cluster.down() command. The cluster is deleted immediately; you are not prompted to confirm the deletion. When you click this button, a message indicates whether the cluster was successfully deleted. Wait for Cluster : You can select this option to specify that the notebook should wait for the Ray cluster dashboard to be ready before proceeding to the step. This option is equivalent to the cluster.wait_ready() command. In the JupyterLab interface, create a new notebook to manage the Ray clusters, as follows: Click File > New > Notebook . Specify your preferred Python version, and then click Select . A new notebook is created in an .ipynb file. Add the following code to a cell in the new notebook: Code to import the required packages from codeflare_sdk import TokenAuthentication, view_clusters The view_clusters package provides the interactive browser controls for listing the clusters, showing the cluster details, opening the Ray dashboard, and refreshing the cluster data. Add a new cell to the notebook, and add the following code to the new cell: Code to authenticate auth = TokenAuthentication( token = "XXXXX", server = "XXXXX", skip_tls=False ) auth.login() For information about how to find the token and server values, see Running the demo notebooks from the CodeFlare SDK . Add a new cell to the notebook, and add the following code to the new cell: Code to view clusters in the current project view_clusters() When you run the view_clusters() command with no arguments specified, you generate a list of all of the Ray clusters in the current project, and display information similar to the cluster.details() function. If you have access to another project, you can list the Ray clusters in that project by specifying the project name as shown in the following example: Code to view clusters in another project view_clusters("my_second_project") Click File > Save Notebook As , enter demo-notebooks/guided-demos/manage_ray_clusters.ipynb , and click Save . In the demo-notebooks/guided-demos/manage_ray_clusters.ipynb notebook, select each cell in turn, and click Run > Run selected cell . When you run the cell with the view_clusters() function, the output depends on whether any Ray clusters exist. If no Ray clusters exist, the following text is shown, where _[project-name]_ is the name of the target project: No clusters found in the [project-name] namespace. Otherwise, the notebook shows the following information about the existing Ray clusters: Select an existing cluster Under this heading, a toggle button is shown for each existing cluster. Click a cluster name to select the cluster. The cluster details section is updated to show details about the selected cluster; for example, cluster name, OpenShift AI project name, cluster resource information, and cluster status. Delete cluster Click this button to delete the selected cluster. This button is equivalent to the Cluster Down button. The cluster is deleted immediately; you are not prompted to confirm the deletion. A message indicates whether the cluster was successfully deleted, and the corresponding button is no longer shown under the Select an existing cluster heading. View Jobs Click this button to open the Jobs tab in the Ray dashboard for the selected cluster, and view details of the submitted jobs. The corresponding URL is shown in the notebook. Open Ray Dashboard Click this button to open the Overview tab in the Ray dashboard for the selected cluster. The corresponding URL is shown in the notebook. 
Refresh Data Click this button to refresh the list of Ray clusters, and the cluster details for the selected cluster, on demand. The cluster details are automatically refreshed when you select a cluster and when you delete the selected cluster. Verification The demo notebooks run to completion without errors. In the manage_ray_clusters.ipynb notebook, the output from the view_clusters() function is correct. 3.2. Running distributed data science workloads from data science pipelines To run a distributed workload from a pipeline, you must first update the pipeline to include a link to your Ray cluster image. Prerequisites You can access a data science cluster that is configured to run distributed workloads as described in Managing distributed workloads . You can access the following software from your data science cluster: A Ray cluster image that is compatible with your hardware architecture The data sets and models to be used by the workload The Python dependencies for the workload, either in a Ray image or in your own Python Package Index (PyPI) server You can access a data science project that contains a workbench, and the workbench is running a default notebook image that contains the CodeFlare SDK, for example, the Standard Data Science notebook. For information about projects and workbenches, see Working on data science projects . You have Admin access for the data science project. If you created the project, you automatically have Admin access. If you did not create the project, your cluster administrator must give you Admin access. You have access to S3-compatible object storage. You have logged in to Red Hat OpenShift AI. Procedure Create a connection to connect the object storage to your data science project, as described in Adding a connection to your data science project . Configure a pipeline server to use the connection, as described in Configuring a pipeline server . Create the data science pipeline as follows: Install the kfp Python package, which is required for all pipelines: USD pip install kfp Install any other dependencies that are required for your pipeline. Build your data science pipeline in Python code. 
For example, if you use NVIDIA GPUs, create a file named compile_example.py with the following content: from kfp import dsl @dsl.component( base_image="registry.redhat.io/ubi8/python-39:latest", packages_to_install=['codeflare-sdk'] ) def ray_fn(): import ray 1 from codeflare_sdk import Cluster, ClusterConfiguration, generate_cert 2 # If you do not use NVIDIA GPUs, substitute "nvidia.com/gpu" with the correct value for your accelerator cluster = Cluster( 3 ClusterConfiguration( namespace="my_project", 4 name="raytest", num_workers=1, head_cpus="500m", min_memory=1, max_memory=1, worker_extended_resource_requests={"nvidia.com/gpu": 1}, 5 image="quay.io/modh/ray:2.35.0-py39-cu121", 6 local_queue="local_queue_name", 7 ) ) print(cluster.status()) cluster.up() 8 cluster.wait_ready() 9 print(cluster.status()) print(cluster.details()) ray_dashboard_uri = cluster.cluster_dashboard_uri() ray_cluster_uri = cluster.cluster_uri() print(ray_dashboard_uri, ray_cluster_uri) # Enable Ray client to connect to secure Ray cluster that has mTLS enabled generate_cert.generate_tls_cert(cluster.config.name, cluster.config.namespace) 10 generate_cert.export_env(cluster.config.name, cluster.config.namespace) ray.init(address=ray_cluster_uri) print("Ray cluster is up and running: ", ray.is_initialized()) @ray.remote def train_fn(): 11 # complex training function return 100 result = ray.get(train_fn.remote()) assert 100 == result ray.shutdown() cluster.down() 12 auth.logout() return result @dsl.pipeline( 13 name="Ray Simple Example", description="Ray Simple Example", ) def ray_integration(): ray_fn() if __name__ == '__main__': 14 from kfp.compiler import Compiler Compiler().compile(ray_integration, 'compiled-example.yaml') 1 Imports Ray. 2 Imports packages from the CodeFlare SDK to define the cluster functions. 3 Specifies the Ray cluster configuration: replace these example values with the values for your Ray cluster. 4 Optional: Specifies the project where the Ray cluster is created. Replace the example value with the name of your project. If you omit this line, the Ray cluster is created in the current project. 5 Optional: Specifies the requested accelerators for the Ray cluster (in this example, 1 NVIDIA GPU). If no accelerators are required, set the value to 0 or omit the line. Note: To specify the requested accelerators for the Ray cluster, use the worker_extended_resource_requests parameter instead of the deprecated num_gpus parameter. For more details, see the CodeFlare SDK documentation . 6 Specifies the location of the Ray cluster image. The Python version in the Ray cluster image must be the same as the Python version in the workbench. If you omit this line, one of the default CUDA-compatible Ray cluster images is used, based on the Python version detected in the workbench. The default Ray images are AMD64 images, which might not work on other architectures. If you are running this code in a disconnected environment, replace the default value with the location for your environment. For information about the latest available training images and their preinstalled packages, see Red Hat OpenShift AI: Supported Configurations . 7 Specifies the local queue to which the Ray cluster will be submitted. If a default local queue is configured, you can omit this line. 8 Creates a Ray cluster by using the specified image and configuration. 9 Waits until the Ray cluster is ready before proceeding. 10 Enables the Ray client to connect to a secure Ray cluster that has mutual Transport Layer Security (mTLS) enabled. 
mTLS is enabled by default in the CodeFlare component in OpenShift AI. 11 Replace the example details in this section with the details for your workload. 12 Removes the Ray cluster when your workload is finished. 13 Replace the example name and description with the values for your workload. 14 Compiles the Python code and saves the output in a YAML file. Compile the Python file (in this example, the compile_example.py file): USD python compile_example.py This command creates a YAML file (in this example, compiled-example.yaml ), which you can import in the step. Import your data science pipeline, as described in Importing a data science pipeline . Schedule the pipeline run, as described in Scheduling a pipeline run . When the pipeline run is complete, confirm that it is included in the list of triggered pipeline runs, as described in Viewing the details of a pipeline run . Verification The YAML file is created and the pipeline run completes without errors. You can view the run details, as described in Viewing the details of a pipeline run . Additional resources Working with data science pipelines Ray Clusters documentation 3.3. Running distributed data science workloads in a disconnected environment To run a distributed data science workload in a disconnected environment, you must be able to access a Ray cluster image, and the data sets and Python dependencies used by the workload, from the disconnected environment. Prerequisites You have logged in to OpenShift with the cluster-admin role. You have access to the disconnected data science cluster. You have installed Red Hat OpenShift AI and created a mirror image as described in Installing and uninstalling OpenShift AI Self-Managed in a disconnected environment . You can access the following software from the disconnected cluster: A Ray cluster image The data sets and models to be used by the workload The Python dependencies for the workload, either in a Ray image or in your own Python Package Index (PyPI) server that is available from the disconnected cluster You have logged in to Red Hat OpenShift AI. You have created a data science project that contains a workbench, and the workbench is running a default notebook image that contains the CodeFlare SDK, for example, the Standard Data Science notebook. For information about how to create a project, see Creating a data science project . You have Admin access for the data science project. If you created the project, you automatically have Admin access. If you did not create the project, your cluster administrator must give you Admin access. Procedure Configure the disconnected data science cluster to run distributed workloads as described in Managing distributed workloads . In the ClusterConfiguration section of the notebook or pipeline, ensure that the image value specifies a Ray cluster image that you can access from the disconnected environment: Notebooks use the Ray cluster image to create a Ray cluster when running the notebook. Pipelines use the Ray cluster image to create a Ray cluster during the pipeline run. If any of the Python packages required by the workload are not available in the Ray cluster, configure the Ray cluster to download the Python packages from a private PyPI server. For example, set the PIP_INDEX_URL and PIP_TRUSTED_HOST environment variables for the Ray cluster, to specify the location of the Python dependencies, as shown in the following example: where PIP_INDEX_URL specifies the base URL of your private PyPI server (the default value is https://pypi.org ). 
PIP_TRUSTED_HOST configures Python to mark the specified host as trusted, regardless of whether that host has a valid SSL certificate or is using a secure channel. Run the distributed data science workload, as described in Running distributed data science workloads from notebooks or Running distributed data science workloads from data science pipelines . Verification The notebook or pipeline run completes without errors: For notebooks, the output from the cluster.status() function or cluster.details() function indicates that the Ray cluster is Active . For pipeline runs, you can view the run details as described in Viewing the details of a pipeline run . Additional resources Installing and uninstalling Red Hat OpenShift AI in a disconnected environment Ray Clusters documentation
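For example, assuming that your version of the CodeFlare SDK supports the envs parameter of ClusterConfiguration for setting environment variables on the Ray cluster containers (check the CodeFlare SDK documentation for your version), the private PyPI settings described above might be wired into the cluster configuration as shown in the following sketch. The cluster name and image value are placeholders for your environment:

from codeflare_sdk import Cluster, ClusterConfiguration

# Assumption: the `envs` dictionary is applied as environment variables on the
# Ray cluster containers; verify this parameter in your CodeFlare SDK version.
cluster = Cluster(ClusterConfiguration(
    name="disconnected-demo",
    image="<mirrored-ray-cluster-image>",   # a Ray image reachable from the disconnected cluster
    num_workers=1,
    envs={
        "PIP_INDEX_URL": "https://pypi-notebook.apps.mylocation.com/simple",
        "PIP_TRUSTED_HOST": "pypi-notebook.apps.mylocation.com",
    },
))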
[ "from codeflare_sdk import copy_demo_nbs copy_demo_nbs()", "from codeflare_sdk import Cluster, ClusterConfiguration, TokenAuthentication", "auth = TokenAuthentication( token = \"XXXXX\", server = \"XXXXX\", skip_tls=False, ca_cert_path=\"/path/to/cert\" ) auth.login()", "local_queue=\" your_local_queue_name \"", "labels = {\"exampleLabel1\": \"exampleLabel1Value\", \"exampleLabel2\": \"exampleLabel2Value\"}", "from codeflare_sdk import generate_cert generate_cert.generate_tls_cert(cluster.config.name, cluster.config.namespace) generate_cert.export_env(cluster.config.name, cluster.config.namespace)", "from codeflare_sdk import TokenAuthentication, view_clusters", "auth = TokenAuthentication( token = \"XXXXX\", server = \"XXXXX\", skip_tls=False ) auth.login()", "view_clusters()", "view_clusters(\"my_second_project\")", "No clusters found in the [project-name] namespace.", "pip install kfp", "from kfp import dsl @dsl.component( base_image=\"registry.redhat.io/ubi8/python-39:latest\", packages_to_install=['codeflare-sdk'] ) def ray_fn(): import ray 1 from codeflare_sdk import Cluster, ClusterConfiguration, generate_cert 2 # If you do not use NVIDIA GPUs, substitute \"nvidia.com/gpu\" with the correct value for your accelerator cluster = Cluster( 3 ClusterConfiguration( namespace=\"my_project\", 4 name=\"raytest\", num_workers=1, head_cpus=\"500m\", min_memory=1, max_memory=1, worker_extended_resource_requests={\"nvidia.com/gpu\": 1}, 5 image=\"quay.io/modh/ray:2.35.0-py39-cu121\", 6 local_queue=\"local_queue_name\", 7 ) ) print(cluster.status()) cluster.up() 8 cluster.wait_ready() 9 print(cluster.status()) print(cluster.details()) ray_dashboard_uri = cluster.cluster_dashboard_uri() ray_cluster_uri = cluster.cluster_uri() print(ray_dashboard_uri, ray_cluster_uri) # Enable Ray client to connect to secure Ray cluster that has mTLS enabled generate_cert.generate_tls_cert(cluster.config.name, cluster.config.namespace) 10 generate_cert.export_env(cluster.config.name, cluster.config.namespace) ray.init(address=ray_cluster_uri) print(\"Ray cluster is up and running: \", ray.is_initialized()) @ray.remote def train_fn(): 11 # complex training function return 100 result = ray.get(train_fn.remote()) assert 100 == result ray.shutdown() cluster.down() 12 auth.logout() return result @dsl.pipeline( 13 name=\"Ray Simple Example\", description=\"Ray Simple Example\", ) def ray_integration(): ray_fn() if __name__ == '__main__': 14 from kfp.compiler import Compiler Compiler().compile(ray_integration, 'compiled-example.yaml')", "python compile_example.py", "PIP_INDEX_URL: https://pypi-notebook.apps.mylocation.com/simple PIP_TRUSTED_HOST: pypi-notebook.apps.mylocation.com" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/working_with_distributed_workloads/running-distributed-workloads_distributed-workloads
Chapter 30. Managing DNS forwarding in IdM
Chapter 30. Managing DNS forwarding in IdM Follow these procedures to configure DNS global forwarders and DNS forward zones in the Identity Management (IdM) Web UI, the IdM CLI, and using Ansible: The two roles of an IdM DNS server DNS forward policies in IdM Adding a global forwarder in the IdM Web UI Adding a global forwarder in the CLI Adding a DNS Forward Zone in the IdM Web UI Adding a DNS Forward Zone in the CLI Establishing a DNS Global Forwarder in IdM using Ansible Ensuring the presence of a DNS global forwarder in IdM using Ansible Ensuring the absence of a DNS global forwarder in IdM using Ansible Ensuring DNS Global Forwarders are disabled in IdM using Ansible Ensuring the presence of a DNS Forward Zone in IdM using Ansible Ensuring a DNS Forward Zone has multiple forwarders in IdM using Ansible Ensuring a DNS Forward Zone is disabled in IdM using Ansible Ensuring the absence of a DNS Forward Zone in IdM using Ansible 30.1. The two roles of an IdM DNS server DNS forwarding affects how a DNS service answers DNS queries. By default, the Berkeley Internet Name Domain (BIND) service integrated with IdM acts as both an authoritative and a recursive DNS server: Authoritative DNS server When a DNS client queries a name belonging to a DNS zone for which the IdM server is authoritative, BIND replies with data contained in the configured zone. Authoritative data always takes precedence over any other data. Recursive DNS server When a DNS client queries a name for which the IdM server is not authoritative, BIND attempts to resolve the query using other DNS servers. If forwarders are not defined, BIND asks the root servers on the Internet and uses a recursive resolution algorithm to answer the DNS query. In some cases, it is not desirable to let BIND contact other DNS servers directly and perform the recursion based on data available on the Internet. You can configure BIND to use another DNS server, a forwarder , to resolve the query. When you configure BIND to use a forwarder, queries and answers are forwarded back and forth between the IdM server and the forwarder, and the IdM server acts as the DNS cache for non-authoritative data. 30.2. DNS forward policies in IdM IdM supports the first and only standard BIND forward policies, as well as the none IdM-specific forward policy. Forward first (default) The IdM BIND service forwards DNS queries to the configured forwarder. If a query fails because of a server error or timeout, BIND falls back to the recursive resolution using servers on the Internet. The forward first policy is the default policy, and it is suitable for optimizing DNS traffic. Forward only The IdM BIND service forwards DNS queries to the configured forwarder. If a query fails because of a server error or timeout, BIND returns an error to the client. The forward only policy is recommended for environments with split DNS configuration. None (forwarding disabled) DNS queries are not forwarded with the none forwarding policy. Disabling forwarding is only useful as a zone-specific override for global forwarding configuration. This option is the IdM equivalent of specifying an empty list of forwarders in BIND configuration. Note You cannot use forwarding to combine data in IdM with data from other DNS servers. You can only forward queries for specific subzones of the primary zone in IdM DNS. By default, the BIND service does not forward queries to another server if the queried DNS name belongs to a zone for which the IdM server is authoritative. 
In such a situation, if the queried DNS name cannot be found in the IdM database, the NXDOMAIN answer is returned. Forwarding is not used. Example 30.1. Example Scenario The IdM server is authoritative for the test.example. DNS zone. BIND is configured to forward queries to the DNS server with the 192.0.2.254 IP address. When a client sends a query for the nonexistent.test.example. DNS name, BIND detects that the IdM server is authoritative for the test.example. zone and does not forward the query to the 192.0.2.254. server. As a result, the DNS client receives the NXDomain error message, informing the user that the queried domain does not exist. 30.3. Adding a global forwarder in the IdM Web UI Follow this procedure to add a global DNS forwarder in the Identity Management (IdM) Web UI. Prerequisites You are logged in to the IdM WebUI as IdM administrator. You know the Internet Protocol (IP) address of the DNS server to forward queries to. Procedure In the IdM Web UI, select Network Services DNS Global Configuration DNS . In the DNS Global Configuration section, click Add . Specify the IP address of the DNS server that will receive forwarded DNS queries. Select the Forward policy . Click Save at the top of the window. Verification Select Network Services DNS Global Configuration DNS . Verify that the global forwarder, with the forward policy you specified, is present and enabled in the IdM Web UI. 30.4. Adding a global forwarder in the CLI Follow this procedure to add a global DNS forwarder by using the command line (CLI). Prerequisites You are logged in as IdM administrator. You know the Internet Protocol (IP) address of the DNS server to forward queries to. Procedure Use the ipa dnsconfig-mod command to add a new global forwarder. Specify the IP address of the DNS forwarder with the --forwarder option. Verification Use the dnsconfig-show command to display global forwarders. 30.5. Adding a DNS Forward Zone in the IdM Web UI Follow this procedure to add a DNS forward zone in the Identity Management (IdM) Web UI. Important Do not use forward zones unless absolutely required. Forward zones are not a standard solution, and using them can lead to unexpected and problematic behavior. If you must use forward zones, limit their use to overriding a global forwarding configuration. When creating a new DNS zone, Red Hat recommends to always use standard DNS delegation using nameserver (NS) records and to avoid forward zones. In most cases, using a global forwarder is sufficient, and forward zones are not necessary. Prerequisites You are logged in to the IdM WebUI as IdM administrator. You know the Internet Protocol (IP) address of the DNS server to forward queries to. Procedure In the IdM Web UI, select Network Services DNS Forward Zones DNS . In the DNS Forward Zones section, click Add . In the Add DNS forward zone window, specify the forward zone name. Click the Add button and specify the IP address of a DNS server to receive the forwarding request. You can specify multiple forwarders per forward zone. Select the Forward policy . Click Add at the bottom of the window to add the new forward zone. Verification In the IdM Web UI, select Network Services DNS Forward Zones DNS . Verify that the forward zone you created, with the forwarders and forward policy you specified, is present and enabled in the IdM Web UI. 30.6. Adding a DNS Forward Zone in the CLI Follow this procedure to add a DNS forward zone by using the command line (CLI). Important Do not use forward zones unless absolutely required. 
Forward zones are not a standard solution, and using them can lead to unexpected and problematic behavior. If you must use forward zones, limit their use to overriding a global forwarding configuration. When creating a new DNS zone, Red Hat recommends to always use standard DNS delegation using nameserver (NS) records and to avoid forward zones. In most cases, using a global forwarder is sufficient, and forward zones are not necessary. Prerequisites You are logged in as IdM administrator. You know the Internet Protocol (IP) address of the DNS server to forward queries to. Procedure Use the dnsforwardzone-add command to add a new forward zone. Specify at least one forwarder with the --forwarder option if the forward policy is not none , and specify the forward policy with the --forward-policy option. Verification Use the dnsforwardzone-show command to display the DNS forward zone you just created. 30.7. Establishing a DNS Global Forwarder in IdM using Ansible Follow this procedure to use an Ansible playbook to establish a DNS Global Forwarder in IdM. In the example procedure below, the IdM administrator creates a DNS global forwarder to a DNS server with an Internet Protocol (IP) v4 address of 8.8.6.6 and IPv6 address of 2001:4860:4860::8800 on port 53 . Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You know the IdM administrator password. Procedure Navigate to the /usr/share/doc/ansible-freeipa/playbooks/dnsconfig directory: Open your inventory file and make sure that the IdM server that you want to configure is listed in the [ipaserver] section. For example, to instruct Ansible to configure server.idm.example.com , enter: Make a copy of the set-configuration.yml Ansible playbook file. For example: Open the establish-global-forwarder.yml file for editing. Adapt the file by setting the following variables: Change the name variable for the playbook to Playbook to establish a global forwarder in IdM DNS . In the tasks section, change the name of the task to Create a DNS global forwarder to 8.8.6.6 and 2001:4860:4860::8800 . In the forwarders section of the ipadnsconfig portion: Change the first ip_address value to the IPv4 address of the global forwarder: 8.8.6.6 . Change the second ip_address value to the IPv6 address of the global forwarder: 2001:4860:4860::8800 . Verify the port value is set to 53 . Change the forward_policy to first . This the modified Ansible playbook file for the current example: Save the file. Run the playbook: Additional resources The README-dnsconfig.md file in the /usr/share/doc/ansible-freeipa/ directory 30.8. Ensuring the presence of a DNS global forwarder in IdM using Ansible Follow this procedure to use an Ansible playbook to ensure the presence of a DNS global forwarder in IdM. In the example procedure below, the IdM administrator ensures the presence of a DNS global forwarder to a DNS server with an Internet Protocol (IP) v4 address of 7.7.9.9 and IP v6 address of 2001:db8::1:0 on port 53 . 
Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You know the IdM administrator password. Procedure Navigate to the /usr/share/doc/ansible-freeipa/playbooks/dnsconfig directory: Open your inventory file and make sure that the IdM server that you want to configure is listed in the [ipaserver] section. For example, to instruct Ansible to configure server.idm.example.com , enter: Make a copy of the forwarders-absent.yml Ansible playbook file. For example: Open the ensure-presence-of-a-global-forwarder.yml file for editing. Adapt the file by setting the following variables: Change the name variable for the playbook to Playbook to ensure the presence of a global forwarder in IdM DNS . In the tasks section, change the name of the task to Ensure the presence of a DNS global forwarder to 7.7.9.9 and 2001:db8::1:0 on port 53 . In the forwarders section of the ipadnsconfig portion: Change the first ip_address value to the IPv4 address of the global forwarder: 7.7.9.9 . Change the second ip_address value to the IPv6 address of the global forwarder: 2001:db8::1:0 . Verify the port value is set to 53 . Change the state to present . This the modified Ansible playbook file for the current example: Save the file. Run the playbook: Additional resources The README-dnsconfig.md file in the /usr/share/doc/ansible-freeipa/ directory 30.9. Ensuring the absence of a DNS global forwarder in IdM using Ansible Follow this procedure to use an Ansible playbook to ensure the absence of a DNS global forwarder in IdM. In the example procedure below, the IdM administrator ensures the absence of a DNS global forwarder with an Internet Protocol (IP) v4 address of 8.8.6.6 and IP v6 address of 2001:4860:4860::8800 on port 53 . Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You know the IdM administrator password. Procedure Navigate to the /usr/share/doc/ansible-freeipa/playbooks/dnsconfig directory: Open your inventory file and make sure that the IdM server that you want to configure is listed in the [ipaserver] section. For example, to instruct Ansible to configure server.idm.example.com , enter: Make a copy of the forwarders-absent.yml Ansible playbook file. For example: Open the ensure-absence-of-a-global-forwarder.yml file for editing. Adapt the file by setting the following variables: Change the name variable for the playbook to Playbook to ensure the absence of a global forwarder in IdM DNS . 
In the tasks section, change the name of the task to Ensure the absence of a DNS global forwarder to 8.8.6.6 and 2001:4860:4860::8800 on port 53 . In the forwarders section of the ipadnsconfig portion: Change the first ip_address value to the IPv4 address of the global forwarder: 8.8.6.6 . Change the second ip_address value to the IPv6 address of the global forwarder: 2001:4860:4860::8800 . Verify the port value is set to 53 . Set the action variable to member . Verify the state is set to absent . This the modified Ansible playbook file for the current example: Important If you only use the state: absent option in your playbook without also using action: member , the playbook fails. Save the file. Run the playbook: Additional resources The README-dnsconfig.md file in the /usr/share/doc/ansible-freeipa/ directory The action: member option in ipadnsconfig ansible-freeipa modules 30.10. Ensuring DNS Global Forwarders are disabled in IdM using Ansible Follow this procedure to use an Ansible playbook to ensure DNS Global Forwarders are disabled in IdM. In the example procedure below, the IdM administrator ensures that the forwarding policy for the global forwarder is set to none , which effectively disables the global forwarder. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You know the IdM administrator password. Procedure Navigate to the /usr/share/doc/ansible-freeipa/playbooks/dnsconfig directory: Open your inventory file and make sure that the IdM server that you want to configure is listed in the [ipaserver] section. For example, to instruct Ansible to configure server.idm.example.com , enter: Verify the contents of the disable-global-forwarders.yml Ansible playbook file which is already configured to disable all DNS global forwarders. For example: Run the playbook: Additional resources The README-dnsconfig.md file in the /usr/share/doc/ansible-freeipa/ directory 30.11. Ensuring the presence of a DNS Forward Zone in IdM using Ansible Follow this procedure to use an Ansible playbook to ensure the presence of a DNS Forward Zone in IdM. In the example procedure below, the IdM administrator ensures the presence of a DNS forward zone for example.com to a DNS server with an Internet Protocol (IP) address of 8.8.8.8 . Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You know the IdM administrator password. 
Procedure Navigate to the /usr/share/doc/ansible-freeipa/playbooks/dnsconfig directory: Open your inventory file and make sure that the IdM server that you want to configure is listed in the [ipaserver] section. For example, to instruct Ansible to configure server.idm.example.com , enter: Make a copy of the forwarders-absent.yml Ansible playbook file. For example: Open the ensure-presence-forwardzone.yml file for editing. Adapt the file by setting the following variables: Change the name variable for the playbook to Playbook to ensure the presence of a dnsforwardzone in IdM DNS . In the tasks section, change the name of the task to Ensure presence of a dnsforwardzone for example.com to 8.8.8.8 . In the tasks section, change the ipadnsconfig heading to ipadnsforwardzone . In the ipadnsforwardzone section: Add the ipaadmin_password variable and set it to your IdM administrator password. Add the name variable and set it to example.com . In the forwarders section: Remove the ip_address and port lines. Add the IP address of the DNS server to receive forwarded requests by specifying it after a dash: Add the forwardpolicy variable and set it to first . Add the skip_overlap_check variable and set it to true . Change the state variable to present . This the modified Ansible playbook file for the current example: Save the file. Run the playbook: Additional resources See the README-dnsforwardzone.md file in the /usr/share/doc/ansible-freeipa/ directory. 30.12. Ensuring a DNS Forward Zone has multiple forwarders in IdM using Ansible Follow this procedure to use an Ansible playbook to ensure a DNS Forward Zone in IdM has multiple forwarders. In the example procedure below, the IdM administrator ensures the DNS forward zone for example.com is forwarding to 8.8.8.8 and 4.4.4.4 . Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You know the IdM administrator password. Procedure Navigate to the /usr/share/doc/ansible-freeipa/playbooks/dnsconfig directory: Open your inventory file and make sure that the IdM server that you want to configure is listed in the [ipaserver] section. For example, to instruct Ansible to configure server.idm.example.com , enter: Make a copy of the forwarders-absent.yml Ansible playbook file. For example: Open the ensure-presence-multiple-forwarders.yml file for editing. Adapt the file by setting the following variables: Change the name variable for the playbook to Playbook to ensure the presence of multiple forwarders in a dnsforwardzone in IdM DNS . In the tasks section, change the name of the task to Ensure presence of 8.8.8.8 and 4.4.4.4 forwarders in dnsforwardzone for example.com . In the tasks section, change the ipadnsconfig heading to ipadnsforwardzone . In the ipadnsforwardzone section: Add the ipaadmin_password variable and set it to your IdM administrator password. Add the name variable and set it to example.com . In the forwarders section: Remove the ip_address and port lines. 
Add the IP address of the DNS servers you want to ensure are present, preceded by a dash: Change the state variable to present. This the modified Ansible playbook file for the current example: Save the file. Run the playbook: Additional resources See the README-dnsforwardzone.md file in the /usr/share/doc/ansible-freeipa/ directory. 30.13. Ensuring a DNS Forward Zone is disabled in IdM using Ansible Follow this procedure to use an Ansible playbook to ensure a DNS Forward Zone is disabled in IdM. In the example procedure below, the IdM administrator ensures the DNS forward zone for example.com is disabled. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You know the IdM administrator password. Procedure Navigate to the /usr/share/doc/ansible-freeipa/playbooks/dnsconfig directory: Open your inventory file and make sure that the IdM server that you want to configure is listed in the [ipaserver] section. For example, to instruct Ansible to configure server.idm.example.com , enter: Make a copy of the forwarders-absent.yml Ansible playbook file. For example: Open the ensure-disabled-forwardzone.yml file for editing. Adapt the file by setting the following variables: Change the name variable for the playbook to Playbook to ensure a dnsforwardzone is disabled in IdM DNS . In the tasks section, change the name of the task to Ensure a dnsforwardzone for example.com is disabled . In the tasks section, change the ipadnsconfig heading to ipadnsforwardzone . In the ipadnsforwardzone section: Add the ipaadmin_password variable and set it to your IdM administrator password. Add the name variable and set it to example.com . Remove the entire forwarders section. Change the state variable to disabled . This the modified Ansible playbook file for the current example: Save the file. Run the playbook: Additional resources See the README-dnsforwardzone.md file in the /usr/share/doc/ansible-freeipa/ directory. 30.14. Ensuring the absence of a DNS Forward Zone in IdM using Ansible Follow this procedure to use an Ansible playbook to ensure the absence of a DNS Forward Zone in IdM. In the example procedure below, the IdM administrator ensures the absence of a DNS forward zone for example.com . Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You know the IdM administrator password. 
Procedure Navigate to the /usr/share/doc/ansible-freeipa/playbooks/dnsconfig directory: Open your inventory file and make sure that the IdM server that you want to configure is listed in the [ipaserver] section. For example, to instruct Ansible to configure server.idm.example.com , enter: Make a copy of the forwarders-absent.yml Ansible playbook file. For example: Open the ensure-absence-forwardzone.yml file for editing. Adapt the file by setting the following variables: Change the name variable for the playbook to Playbook to ensure the absence of a dnsforwardzone in IdM DNS . In the tasks section, change the name of the task to Ensure the absence of a dnsforwardzone for example.com . In the tasks section, change the ipadnsconfig heading to ipadnsforwardzone . In the ipadnsforwardzone section: Add the ipaadmin_password variable and set it to your IdM administrator password. Add the name variable and set it to example.com . Remove the entire forwarders section. Leave the state variable as absent . This the modified Ansible playbook file for the current example: Save the file. Run the playbook: Additional resources See the README-dnsforwardzone.md file in the /usr/share/doc/ansible-freeipa/ directory.
[ "[user@server ~]USD ipa dnsconfig-mod --forwarder= 10.10.0.1 Server will check DNS forwarder(s). This may take some time, please wait Global forwarders: 10.10.0.1 IPA DNS servers: server.example.com", "[user@server ~]USD ipa dnsconfig-show Global forwarders: 10.10.0.1 IPA DNS servers: server.example.com", "[user@server ~]USD ipa dnsforwardzone-add forward.example.com. --forwarder= 10.10.0.14 --forwarder= 10.10.1.15 --forward-policy=first Zone name: forward.example.com. Zone forwarders: 10.10.0.14, 10.10.1.15 Forward policy: first", "[user@server ~]USD ipa dnsforwardzone-show forward.example.com. Zone name: forward.example.com. Zone forwarders: 10.10.0.14, 10.10.1.15 Forward policy: first", "cd /usr/share/doc/ansible-freeipa/playbooks/dnsconfig", "[ipaserver] server.idm.example.com", "cp set-configuration.yml establish-global-forwarder.yml", "--- - name: Playbook to establish a global forwarder in IdM DNS hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Create a DNS global forwarder to 8.8.6.6 and 2001:4860:4860::8800 ipadnsconfig: forwarders: - ip_address: 8.8.6.6 - ip_address: 2001:4860:4860::8800 port: 53 forward_policy: first allow_sync_ptr: true", "ansible-playbook --vault-password-file=password_file -v -i inventory.file establish-global-forwarder.yml", "cd /usr/share/doc/ansible-freeipa/playbooks/dnsconfig", "[ipaserver] server.idm.example.com", "cp forwarders-absent.yml ensure-presence-of-a-global-forwarder.yml", "--- - name: Playbook to ensure the presence of a global forwarder in IdM DNS hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure the presence of a DNS global forwarder to 7.7.9.9 and 2001:db8::1:0 on port 53 ipadnsconfig: forwarders: - ip_address: 7.7.9.9 - ip_address: 2001:db8::1:0 port: 53 state: present", "ansible-playbook --vault-password-file=password_file -v -i inventory.file ensure-presence-of-a-global-forwarder.yml", "cd /usr/share/doc/ansible-freeipa/playbooks/dnsconfig", "[ipaserver] server.idm.example.com", "cp forwarders-absent.yml ensure-absence-of-a-global-forwarder.yml", "--- - name: Playbook to ensure the absence of a global forwarder in IdM DNS hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure the absence of a DNS global forwarder to 8.8.6.6 and 2001:4860:4860::8800 on port 53 ipadnsconfig: forwarders: - ip_address: 8.8.6.6 - ip_address: 2001:4860:4860::8800 port: 53 action: member state: absent", "ansible-playbook --vault-password-file=password_file -v -i inventory.file ensure-absence-of-a-global-forwarder.yml", "cd /usr/share/doc/ansible-freeipa/playbooks/dnsconfig", "[ipaserver] server.idm.example.com", "cat disable-global-forwarders.yml --- - name: Playbook to disable global DNS forwarders hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Disable global forwarders. 
ipadnsconfig: forward_policy: none", "ansible-playbook --vault-password-file=password_file -v -i inventory.file disable-global-forwarders.yml", "cd /usr/share/doc/ansible-freeipa/playbooks/dnsconfig", "[ipaserver] server.idm.example.com", "cp forwarders-absent.yml ensure-presence-forwardzone.yml", "- 8.8.8.8", "--- - name: Playbook to ensure the presence of a dnsforwardzone in IdM DNS hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure the presence of a dnsforwardzone for example.com to 8.8.8.8 ipadnsforwardzone: ipaadmin_password: \"{{ ipaadmin_password }}\" name: example.com forwarders: - 8.8.8.8 forwardpolicy: first skip_overlap_check: true state: present", "ansible-playbook --vault-password-file=password_file -v -i inventory.file ensure-presence-forwardzone.yml", "cd /usr/share/doc/ansible-freeipa/playbooks/dnsconfig", "[ipaserver] server.idm.example.com", "cp forwarders-absent.yml ensure-presence-multiple-forwarders.yml", "- 8.8.8.8 - 4.4.4.4", "--- - name: name: Playbook to ensure the presence of multiple forwarders in a dnsforwardzone in IdM DNS hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure presence of 8.8.8.8 and 4.4.4.4 forwarders in dnsforwardzone for example.com ipadnsforwardzone: ipaadmin_password: \"{{ ipaadmin_password }}\" name: example.com forwarders: - 8.8.8.8 - 4.4.4.4 state: present", "ansible-playbook --vault-password-file=password_file -v -i inventory.file ensure-presence-multiple-forwarders.yml", "cd /usr/share/doc/ansible-freeipa/playbooks/dnsconfig", "[ipaserver] server.idm.example.com", "cp forwarders-absent.yml ensure-disabled-forwardzone.yml", "--- - name: Playbook to ensure a dnsforwardzone is disabled in IdM DNS hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure a dnsforwardzone for example.com is disabled ipadnsforwardzone: ipaadmin_password: \"{{ ipaadmin_password }}\" name: example.com state: disabled", "ansible-playbook --vault-password-file=password_file -v -i inventory.file ensure-disabled-forwardzone.yml", "cd /usr/share/doc/ansible-freeipa/playbooks/dnsconfig", "[ipaserver] server.idm.example.com", "cp forwarders-absent.yml ensure-absence-forwardzone.yml", "--- - name: Playbook to ensure the absence of a dnsforwardzone in IdM DNS hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure the absence of a dnsforwardzone for example.com ipadnsforwardzone: ipaadmin_password: \"{{ ipaadmin_password }}\" name: example.com state: absent", "ansible-playbook --vault-password-file=password_file -v -i inventory.file ensure-absence-forwardzone.yml" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/using_ansible_to_install_and_manage_identity_management/managing-dns-forwarding-in-idm_using-ansible-to-install-and-manage-idm
Chapter 62. Apache CXF Message Processing Phases
Chapter 62. Apache CXF Message Processing Phases Inbound phases Table 62.1, "Inbound message processing phases" lists the phases available in inbound interceptor chains. Table 62.1. Inbound message processing phases Phase Description RECEIVE Performs transport-specific processing, such as determining MIME boundaries for binary attachments. PRE_STREAM Processes the raw data stream received by the transport. USER_STREAM POST_STREAM READ Determines if a request is a SOAP or XML message and adds the proper interceptors. SOAP message headers are also processed in this phase. PRE_PROTOCOL Performs protocol-level processing. This includes processing of WS-* headers and processing of the SOAP message properties. USER_PROTOCOL POST_PROTOCOL UNMARSHAL Unmarshals the message data into the objects used by the application-level code. PRE_LOGICAL Processes the unmarshalled message data. USER_LOGICAL POST_LOGICAL PRE_INVOKE INVOKE Passes the message to the application code. On the server side, the service implementation is invoked in this phase. On the client side, the response is handed back to the application. POST_INVOKE Invokes the outbound interceptor chain. Outbound phases Table 62.2, "Outbound message processing phases" lists the phases available in outbound interceptor chains. Table 62.2. Outbound message processing phases Phase Description SETUP Performs any setup that is required by later phases in the chain. PRE_LOGICAL Performs processing on the unmarshalled data passed from the application level. USER_LOGICAL POST_LOGICAL PREPARE_SEND Opens the connection for writing the message on the wire. PRE_STREAM Performs processing required to prepare the message for entry into a data stream. PRE_PROTOCOL Begins processing protocol-specific information. WRITE Writes the protocol message. PRE_MARSHAL Marshals the message. MARSHAL POST_MARSHAL USER_PROTOCOL Processes the protocol message. POST_PROTOCOL USER_STREAM Processes the byte-level message. POST_STREAM SEND Sends the message and closes the transport stream. Important Outbound interceptor chains have a mirror set of ending phases whose names are appended with _ENDING . The ending phases are used by interceptors that require some terminal action to occur before data is written on the wire.
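The phase names in these tables are exposed as constants on the org.apache.cxf.phase.Phase class. The following minimal sketch, with an illustrative class name and log message, shows a custom interceptor that participates in the PRE_PROTOCOL phase of a chain:

import org.apache.cxf.interceptor.Fault;
import org.apache.cxf.message.Message;
import org.apache.cxf.phase.AbstractPhaseInterceptor;
import org.apache.cxf.phase.Phase;

// Illustrative interceptor that runs in the PRE_PROTOCOL phase, where
// protocol headers (for example, SOAP headers) are processed.
public class HeaderCheckInterceptor extends AbstractPhaseInterceptor<Message> {

    public HeaderCheckInterceptor() {
        super(Phase.PRE_PROTOCOL);
        // Optionally order this interceptor relative to another one in the same phase:
        // addAfter(OtherInterceptor.class.getName());
    }

    @Override
    public void handleMessage(Message message) throws Fault {
        // Inspect or modify the message before it reaches the UNMARSHAL phase.
        System.out.println("PRE_PROTOCOL processing for: " + message.get(Message.REQUEST_URI));
    }
}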
null
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/CXFInterceptPhasesAppx
Preface
Preface Debezium is a set of distributed services that capture row-level changes in your databases so that your applications can see and respond to those changes. Debezium records all row-level changes committed to each database table. Each application reads the transaction logs of interest to view all operations in the order in which they occurred. This guide provides details about using the following Debezium topics: Chapter 1, High level overview of Debezium Chapter 2, Required custom resource upgrades Chapter 3, Debezium connector for Db2 Chapter 4, Debezium connector for JDBC (Developer Preview) Chapter 5, Debezium connector for MongoDB Chapter 6, Debezium connector for MySQL Chapter 7, Debezium Connector for Oracle Chapter 8, Debezium connector for PostgreSQL Chapter 9, Debezium connector for SQL Server Chapter 10, Monitoring Debezium Chapter 11, Debezium logging Chapter 12, Configuring Debezium connectors for your application Chapter 13, Applying transformations to modify messages exchanged with Apache Kafka Chapter 14, Developing Debezium custom data type converters Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. To propose improvements, open a Jira issue and describe your suggested changes. Provide as much detail as possible to enable us to address your request quickly. Prerequisite You have a Red Hat Customer Portal account. This account enables you to log in to the Red Hat Jira Software instance. If you do not have an account, you will be prompted to create one. Procedure Click the following link: Create issue . In the Summary text box, enter a brief description of the issue. In the Description text box, provide the following information: The URL of the page where you found the issue. A detailed description of the issue. You can leave the information in any other fields at their default values. Click Create to submit the Jira issue to the documentation team. Thank you for taking the time to provide feedback.
null
https://docs.redhat.com/en/documentation/red_hat_integration/2023.q4/html/debezium_user_guide/pr01
Monitoring APIs
Monitoring APIs OpenShift Container Platform 4.17 Reference guide for monitoring APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/monitoring_apis/index
Chapter 7. Assessing and analyzing applications with MTA
Chapter 7. Assessing and analyzing applications with MTA You can use the Migration Toolkit for Applications (MTA) user interface to assess and analyze applications: When adding to or editing the Application Inventory , MTA automatically spawns programming language and technology discovery tasks. The tasks apply appropriate tags to the application, reducing the time you spend tagging the application manually. When assessing applications, MTA estimates the risks and costs involved in preparing applications for containerization, including time, personnel, and other factors. You can use the results of an assessment for discussions between stakeholders to determine whether applications are suitable for containerization. When analyzing applications, MTA uses rules to determine which specific lines in an application must be modified before the application can be migrated or modernized. 7.1. The Assessment module features The Migration Toolkit for Applications (MTA) Assessment module offers the following features for assessing and analyzing applications: Assessment hub The Assessment hub integrates with the Application inventory . Enhanced assessment questionnaire capabilities In MTA 7.0, you can import and export assessment questionnaires. You can also design custom questionnaires with a downloadable template by using the YAML syntax, which includes the following features: Conditional questions: You can include or exclude questions based on the application or archetype if a certain tag is present on this application or archetype. Application auto-tagging based on answers: You can define tags to be applied to applications or archetypes if a certain answer was provided. Automated answers from tags in applications or archetypes. For more information, see The custom assessment questionnaire . Note You can customize and save the default questionnaire. For more information, see The default assessment questionnaire . Multiple assessment questionnaires The Assessment module supports multiple questionnaires, relevant to one or more applications. Archetypes You can group applications with similar characteristics into archetypes. This allows you to assess multiple applications at once. Each archetype has a shared taxonomy of tags, stakeholders, and stakeholder groups. All applications inherit assessment and review from their assigned archetypes. For more information, see Working with archetypes . 7.2. MTA assessment questionnaires The Migration Toolkit for Applications (MTA) uses an assessment questionnaire, either default or custom , to assess the risks involved in containerizing an application. The assessment report provides information about applications and risks associated with migration. The report also generates an adoption plan informed by the prioritization, business criticality, and dependencies of the applications submitted for assessment. 7.2.1. The default assessment questionnaire Legacy Pathfinder is the default Migration Toolkit for Applications (MTA) questionnaire. Pathfinder is a questionnaire-based tool that you can use to evaluate the suitability of applications for modernization in containers on an enterprise Kubernetes platform. Through interaction with the default questionnaire and the review process, the system is enriched with application knowledge exposed through the collection of assessment reports. You can export the default questionnaire to a YAML file: Example 7.1. 
The Legacy Pathfinder YAML file name: Legacy Pathfinder description: '' sections: - order: 1 name: Application details questions: - order: 1 text: >- Does the application development team understand and actively develop the application? explanation: >- How much knowledge does the team have about the application's development or usage? answers: - order: 2 text: >- Maintenance mode, no SME knowledge or adequate documentation available risk: red rationale: '' mitigation: '' - order: 0 text: unknown risk: unknown rationale: '' mitigation: '' - order: 1 text: >- Little knowledge, no development (example: third-party or commercial off-the-shelf application) risk: red rationale: '' mitigation: '' - order: 3 text: Maintenance mode, SME knowledge is available risk: yellow rationale: '' mitigation: '' - order: 4 text: Actively developed, SME knowledge is available risk: green rationale: '' mitigation: '' - order: 5 text: greenfield application risk: green rationale: '' mitigation: '' - order: 2 text: How is the application supported in production? explanation: >- Does the team have sufficient knowledge to support the application in production? answers: - order: 3 text: >- Multiple teams provide support using an established escalation model risk: yellow rationale: '' mitigation: '' - order: 0 text: unknown risk: unknown rationale: '' mitigation: '' - order: 1 text: >- External support provider with a ticket-driven escalation process; no inhouse support resources risk: red rationale: '' mitigation: '' - order: 2 text: >- Separate internal support team, separate from the development team, with little interaction between the teams risk: red rationale: '' mitigation: '' - order: 4 text: >- SRE (Site Reliability Engineering) approach with a knowledgeable and experienced operations team risk: green rationale: '' mitigation: '' - order: 5 text: >- DevOps approach with the same team building the application and supporting it in production risk: green rationale: '' mitigation: '' - order: 3 text: >- How much time passes from when code is committed until the application is deployed to production? explanation: What is the development latency? answers: - order: 3 text: 2-6 months risk: yellow rationale: '' mitigation: '' - order: 0 text: unknown risk: unknown rationale: '' mitigation: '' - order: 1 text: Not tracked risk: red rationale: '' mitigation: '' - order: 2 text: More than 6 months risk: red rationale: '' mitigation: '' - order: 4 text: 8-30 days risk: green rationale: '' mitigation: '' - order: 5 text: 1-7 days risk: green rationale: '' mitigation: '' - order: 6 text: Less than 1 day risk: green rationale: '' mitigation: '' - order: 4 text: How often is the application deployed to production? explanation: Deployment frequency answers: - order: 3 text: Between once a month and once every 6 months risk: yellow rationale: '' mitigation: '' - order: 0 text: unknown risk: unknown rationale: '' mitigation: '' - order: 1 text: Not tracked risk: red rationale: '' mitigation: '' - order: 2 text: Less than once every 6 months risk: red rationale: '' mitigation: '' - order: 4 text: Weekly risk: green rationale: '' mitigation: '' - order: 5 text: Daily risk: green rationale: '' mitigation: '' - order: 6 text: Several times a day risk: green rationale: '' mitigation: '' - order: 5 text: >- What is the application's mean time to recover (MTTR) from failure in a production environment? 
explanation: Average time for the application to recover from failure answers: - order: 5 text: Less than 1 hour risk: green rationale: '' mitigation: '' - order: 0 text: unknown risk: unknown rationale: '' mitigation: '' - order: 1 text: Not tracked risk: red rationale: '' mitigation: '' - order: 3 text: 1-7 days risk: yellow rationale: '' mitigation: '' - order: 2 text: 1 month or more risk: red rationale: '' mitigation: '' - order: 4 text: 1-24 hours risk: green rationale: '' mitigation: '' - order: 6 text: Does the application have legal and/or licensing requirements? explanation: >- Legal and licensing requirements must be assessed to determine their possible impact (cost, fault reporting) on the container platform hosting the application. Examples of legal requirements: isolated clusters, certifications, compliance with the Payment Card Industry Data Security Standard or the Health Insurance Portability and Accountability Act. Examples of licensing requirements: per server, per CPU. answers: - order: 1 text: Multiple legal and licensing requirements risk: red rationale: '' mitigation: '' - order: 0 text: unknown risk: unknown rationale: '' mitigation: '' - order: 2 text: 'Licensing requirements (examples: per server, per CPU)' risk: red rationale: '' mitigation: '' - order: 3 text: >- Legal requirements (examples: cluster isolation, hardware, PCI or HIPAA compliance) risk: yellow rationale: '' mitigation: '' - order: 4 text: None risk: green rationale: '' mitigation: '' - order: 7 text: Which model best describes the application architecture? explanation: Describe the application architecture in simple terms. answers: - order: 3 text: >- Complex monolith, strict runtime dependency startup order, non-resilient architecture risk: yellow rationale: '' mitigation: '' - order: 0 text: unknown risk: unknown rationale: '' mitigation: '' - order: 5 text: Independently deployable components risk: green rationale: '' mitigation: '' - order: 1 text: >- Massive monolith (high memory and CPU usage), singleton deployment, vertical scale only risk: yellow rationale: '' mitigation: '' - order: 2 text: >- Massive monolith (high memory and CPU usage), non-singleton deployment, complex to scale horizontally risk: yellow rationale: '' mitigation: '' - order: 4 text: 'Resilient monolith (examples: retries, circuit breakers)' risk: green rationale: '' mitigation: '' - order: 2 name: Application dependencies questions: - order: 1 text: Does the application require specific hardware? explanation: >- OpenShift Container Platform runs only on x86, IBM Power, or IBM Z systems answers: - order: 3 text: 'Requires specific computer hardware (examples: GPUs, RAM, HDDs)' risk: yellow rationale: '' mitigation: '' - order: 0 text: unknown risk: unknown rationale: '' mitigation: '' - order: 1 text: Requires CPU that is not supported by red Hat risk: red rationale: '' mitigation: '' - order: 2 text: 'Requires custom or legacy hardware (example: USB device)' risk: red rationale: '' mitigation: '' - order: 4 text: Requires CPU that is supported by red Hat risk: green rationale: '' mitigation: '' - order: 2 text: What operating system does the application require? explanation: >- Only Linux and certain Microsoft Windows versions are supported in containers. Check the latest versions and requirements. 
answers: - order: 4 text: Microsoft Windows risk: yellow rationale: '' mitigation: '' - order: 0 text: unknown risk: unknown rationale: '' mitigation: '' - order: 1 text: >- Operating system that is not compatible with OpenShift Container Platform (examples: OS X, AIX, Unix, Solaris) risk: red rationale: '' mitigation: '' - order: 2 text: Linux with custom kernel drivers or a specific kernel version risk: red rationale: '' mitigation: '' - order: 3 text: 'Linux with custom capabilities (examples: seccomp, root access)' risk: yellow rationale: '' mitigation: '' - order: 5 text: Standard Linux distribution risk: green rationale: '' mitigation: '' - order: 3 text: >- Does the vendor provide support for a third-party component running in a container? explanation: Will the vendor support a component if you run it in a container? answers: - order: 2 text: No vendor support for containers risk: red rationale: '' mitigation: '' - order: 0 text: unknown risk: unknown rationale: '' mitigation: '' - order: 1 text: Not recommended to run the component in a container risk: red rationale: '' mitigation: '' - order: 3 text: >- Vendor supports containers but with limitations (examples: functionality is restricted, component has not been tested) risk: yellow rationale: '' mitigation: '' - order: 4 text: >- Vendor supports their application running in containers but you must build your own images risk: yellow rationale: '' mitigation: '' - order: 5 text: Vendor fully supports containers, provides certified images risk: green rationale: '' mitigation: '' - order: 6 text: No third-party components required risk: green rationale: '' mitigation: '' - order: 4 text: Incoming/northbound dependencies explanation: Systems or applications that call the application answers: - order: 3 text: >- Many dependencies exist, can be changed because the systems are internally managed risk: green rationale: '' mitigation: '' - order: 0 text: unknown risk: unknown rationale: '' mitigation: '' - order: 4 text: Internal dependencies only risk: green rationale: '' mitigation: '' - order: 1 text: >- Dependencies are difficult or expensive to change because they are legacy or third-party risk: red rationale: '' mitigation: '' - order: 2 text: >- Many dependencies exist, can be changed but the process is expensive and time-consuming risk: yellow rationale: '' mitigation: '' - order: 5 text: No incoming/northbound dependencies risk: green rationale: '' mitigation: '' - order: 5 text: Outgoing/southbound dependencies explanation: Systems or applications that the application calls answers: - order: 3 text: Application not ready until dependencies are verified available risk: yellow rationale: '' mitigation: '' - order: 0 text: unknown risk: unknown rationale: '' mitigation: '' - order: 1 text: >- Dependency availability only verified when application is processing traffic risk: red rationale: '' mitigation: '' - order: 2 text: Dependencies require a complex and strict startup order risk: yellow rationale: '' mitigation: '' - order: 4 text: Limited processing available if dependencies are unavailable risk: green rationale: '' mitigation: '' - order: 5 text: No outgoing/southbound dependencies risk: green rationale: '' mitigation: '' - order: 3 name: Application architecture questions: - order: 1 text: >- How resilient is the application? How well does it recover from outages and restarts? explanation: >- If the application or one of its dependencies fails, how does the application recover from failure? Is manual intervention required? 
answers: - order: 0 text: unknown risk: unknown rationale: '' mitigation: '' - order: 1 text: >- Application cannot be restarted cleanly after failure, requires manual intervention risk: red rationale: '' mitigation: '' - order: 2 text: >- Application fails when a soutbound dependency is unavailable and does not recover automatically risk: red rationale: '' mitigation: '' - order: 3 text: >- Application functionality is limited when a dependency is unavailable but recovers when the dependency is available risk: yellow rationale: '' mitigation: '' - order: 4 text: >- Application employs resilient architecture patterns (examples: circuit breakers, retry mechanisms) risk: green rationale: '' mitigation: '' - order: 5 text: >- Application containers are randomly terminated to test resiliency; chaos engineering principles are followed risk: green rationale: '' mitigation: '' - order: 2 text: How does the external world communicate with the application? explanation: >- What protocols do external clients use to communicate with the application? answers: - order: 0 text: unknown risk: unknown rationale: '' mitigation: '' - order: 1 text: 'Non-TCP/IP protocols (examples: serial, IPX, AppleTalk)' risk: red rationale: '' mitigation: '' - order: 2 text: TCP/IP, with host name or IP address encapsulated in the payload risk: red rationale: '' mitigation: '' - order: 3 text: 'TCP/UDP without host addressing (example: SSH)' risk: yellow rationale: '' mitigation: '' - order: 4 text: TCP/UDP encapsulated, using TLS with SNI header risk: green rationale: '' mitigation: '' - order: 5 text: HTTP/HTTPS risk: green rationale: '' mitigation: '' - order: 3 text: How does the application manage its internal state? explanation: >- If the application must manage or retain an internal state, how is this done? answers: - order: 0 text: unknown risk: unknown rationale: '' mitigation: '' - order: 3 text: State maintained in non-shared, non-ephemeral storage risk: yellow rationale: '' mitigation: '' - order: 1 text: Application components use shared memory within a pod risk: yellow rationale: '' mitigation: '' - order: 2 text: >- State is managed externally by another product (examples: Zookeeper or red Hat Data Grid) risk: yellow rationale: '' mitigation: '' - order: 4 text: Disk shared between application instances risk: green rationale: '' mitigation: '' - order: 5 text: Stateless or ephemeral container storage risk: green rationale: '' mitigation: '' - order: 4 text: How does the application handle service discovery? explanation: How does the application discover services? answers: - order: 0 text: unknown risk: unknown rationale: '' mitigation: '' - order: 1 text: >- Uses technologies that are not compatible with Kubernetes (examples: hardcoded IP addresses, custom cluster manager) risk: red rationale: '' mitigation: '' - order: 2 text: >- Requires an application or cluster restart to discover new service instances risk: red rationale: '' mitigation: '' - order: 3 text: >- Uses technologies that are compatible with Kubernetes but require specific libraries or services (examples: HashiCorp Consul, Netflix Eureka) risk: yellow rationale: '' mitigation: '' - order: 4 text: Uses Kubernetes DNS name resolution risk: green rationale: '' mitigation: '' - order: 5 text: Does not require service discovery risk: green rationale: '' mitigation: '' - order: 5 text: How is the application clustering managed? explanation: >- Does the application require clusters? If so, how is clustering managed? 
answers: - order: 0 text: unknown risk: unknown rationale: '' mitigation: '' - order: 1 text: 'Manually configured clustering (example: static clusters)' risk: red rationale: '' mitigation: '' - order: 2 text: Managed by an external off-PaaS cluster manager risk: red rationale: '' mitigation: '' - order: 3 text: >- Managed by an application runtime that is compatible with Kubernetes risk: green rationale: '' mitigation: '' - order: 4 text: No cluster management required risk: green rationale: '' mitigation: '' - order: 4 name: Application observability questions: - order: 1 text: How does the application use logging and how are the logs accessed? explanation: How the application logs are accessed answers: - order: 0 text: unknown risk: unknown rationale: '' mitigation: '' - order: 1 text: Logs are unavailable or are internal with no way to export them risk: red rationale: '' mitigation: '' - order: 2 text: >- Logs are in a custom binary format, exposed with non-standard protocols risk: red rationale: '' mitigation: '' - order: 3 text: Logs are exposed using syslog risk: yellow rationale: '' mitigation: '' - order: 4 text: Logs are written to a file system, sometimes as multiple files risk: yellow rationale: '' mitigation: '' - order: 5 text: 'Logs are forwarded to an external logging system (example: Splunk)' risk: green rationale: '' mitigation: '' - order: 6 text: 'Logs are configurable (example: can be sent to stdout)' risk: green rationale: '' mitigation: '' - order: 2 text: Does the application provide metrics? explanation: >- Are application metrics available, if necessary (example: OpenShift Container Platform collects CPU and memory metrics)? answers: - order: 0 text: unknown risk: unknown rationale: '' mitigation: '' - order: 1 text: No metrics available risk: yellow rationale: '' mitigation: '' - order: 2 text: Metrics collected but not exposed externally risk: yellow rationale: '' mitigation: '' - order: 3 text: 'Metrics exposed using binary protocols (examples: SNMP, JMX)' risk: yellow rationale: '' mitigation: '' - order: 4 text: >- Metrics exposed using a third-party solution (examples: Dynatrace, AppDynamics) risk: green rationale: '' mitigation: '' - order: 5 text: >- Metrics collected and exposed with built-in Prometheus endpoint support risk: green rationale: '' mitigation: '' - order: 3 text: >- How easy is it to determine the application's health and readiness to handle traffic? explanation: >- How do we determine an application's health (liveness) and readiness to handle traffic? answers: - order: 0 text: unknown risk: unknown rationale: '' mitigation: '' - order: 1 text: No health or readiness query functionality available risk: red rationale: '' mitigation: '' - order: 3 text: Basic application health requires semi-complex scripting risk: yellow rationale: '' mitigation: '' - order: 4 text: Dedicated, independent liveness and readiness endpoints risk: green rationale: '' mitigation: '' - order: 2 text: Monitored and managed by a custom watchdog process risk: red rationale: '' mitigation: '' - order: 5 text: Health is verified by probes running synthetic transactions risk: green rationale: '' mitigation: '' - order: 4 text: What best describes the application's runtime characteristics? explanation: >- How would the profile of an application appear during runtime (examples: graphs showing CPU and memory usage, traffic patterns, latency)? What are the implications for a serverless application? 
answers: - order: 0 text: unknown risk: unknown rationale: '' mitigation: '' - order: 1 text: >- Deterministic and predictable real-time execution or control requirements risk: red rationale: '' mitigation: '' - order: 2 text: >- Sensitive to latency (examples: voice applications, high frequency trading applications) risk: yellow rationale: '' mitigation: '' - order: 3 text: Constant traffic with a broad range of CPU and memory usage risk: yellow rationale: '' mitigation: '' - order: 4 text: Intermittent traffic with predictable CPU and memory usage risk: green rationale: '' mitigation: '' - order: 5 text: Constant traffic with predictable CPU and memory usage risk: green rationale: '' mitigation: '' - order: 5 text: How long does it take the application to be ready to handle traffic? explanation: How long the application takes to boot answers: - order: 0 text: unknown risk: unknown rationale: '' mitigation: '' - order: 1 text: More than 5 minutes risk: red rationale: '' mitigation: '' - order: 2 text: 2-5 minutes risk: yellow rationale: '' mitigation: '' - order: 3 text: 1-2 minutes risk: yellow rationale: '' mitigation: '' - order: 4 text: 10-60 seconds risk: green rationale: '' mitigation: '' - order: 5 text: Less than 10 seconds risk: green rationale: '' mitigation: '' - order: 5 name: Application cross-cutting concerns questions: - order: 1 text: How is the application tested? explanation: >- Is the application is tested? Is it easy to test (example: automated testing)? Is it tested in production? answers: - order: 0 text: unknown risk: unknown rationale: '' mitigation: '' - order: 1 text: No testing or minimal manual testing only risk: red rationale: '' mitigation: '' - order: 2 text: Minimal automated testing, focused on the user interface risk: yellow rationale: '' mitigation: '' - order: 3 text: >- Some automated unit and regression testing, basic CI/CD pipeline testing; modern test practices are not followed risk: yellow rationale: '' mitigation: '' - order: 4 text: >- Highly repeatable automated testing (examples: unit, integration, smoke tests) before deploying to production; modern test practices are followed risk: green rationale: '' mitigation: '' - order: 5 text: >- Chaos engineering approach, constant testing in production (example: A/B testing + experimentation) risk: green rationale: '' mitigation: '' - order: 2 text: How is the application configured? explanation: >- How is the application configured? Is the configuration method appropriate for a container? External servers are runtime dependencies. 
answers: - order: 0 text: unknown risk: unknown rationale: '' mitigation: '' - order: 1 text: >- Configuration files compiled during installation and configured using a user interface risk: red rationale: '' mitigation: '' - order: 2 text: >- Configuration files are stored externally (example: in a database) and accessed using specific environment keys (examples: host name, IP address) risk: red rationale: '' mitigation: '' - order: 3 text: Multiple configuration files in multiple file system locations risk: yellow rationale: '' mitigation: '' - order: 4 text: >- Configuration files built into the application and enabled using system properties at runtime risk: yellow rationale: '' mitigation: '' - order: 5 text: >- Configuration retrieved from an external server (examples: Spring Cloud Config Server, HashiCorp Consul) risk: yellow rationale: '' mitigation: '' - order: 6 text: >- Configuration loaded from files in a single configurable location; environment variables used risk: green rationale: '' mitigation: '' - order: 4 text: How is the application deployed? explanation: >- How the application is deployed and whether the deployment process is suitable for a container platform answers: - order: 0 text: unknown risk: unknown rationale: '' mitigation: '' - order: 3 text: Simple automated deployment scripts risk: yellow rationale: '' mitigation: '' - order: 1 text: Manual deployment using a user interface risk: red rationale: '' mitigation: '' - order: 2 text: Manual deployment with some automation risk: red rationale: '' mitigation: '' - order: 4 text: >- Automated deployment with manual intervention or complex promotion through pipeline stages risk: yellow rationale: '' mitigation: '' - order: 5 text: >- Automated deployment with a full CI/CD pipeline, minimal intervention for promotion through pipeline stages risk: green rationale: '' mitigation: '' - order: 6 text: Fully automated (GitOps), blue-green, or canary deployment risk: green rationale: '' mitigation: '' - order: 5 text: Where is the application deployed? explanation: Where does the application run? answers: - order: 0 text: unknown risk: unknown rationale: '' mitigation: '' - order: 1 text: Bare metal server risk: green rationale: '' mitigation: '' - order: 2 text: 'Virtual machine (examples: red Hat Virtualization, VMware)' risk: green rationale: '' mitigation: '' - order: 3 text: 'Private cloud (example: red Hat OpenStack Platform)' risk: green rationale: '' mitigation: '' - order: 4 text: >- Public cloud provider (examples: Amazon Web Services, Microsoft Azure, Google Cloud Platform) risk: green rationale: '' mitigation: '' - order: 5 text: >- Platform as a service (examples: Heroku, Force.com, Google App Engine) risk: yellow rationale: '' mitigation: '' - order: 7 text: Other. Specify in the comments field risk: yellow rationale: '' mitigation: '' - order: 6 text: Hybrid cloud (public and private cloud providers) risk: green rationale: '' mitigation: '' - order: 6 text: How mature is the containerization process, if any? explanation: If the team has used containers in the past, how was it done? 
answers: - order: 0 text: unknown risk: unknown rationale: '' mitigation: '' - order: 1 text: Application runs in a container on a laptop or desktop risk: red rationale: '' mitigation: '' - order: 3 text: Some experience with containers but not yet fully defined risk: yellow rationale: '' mitigation: '' - order: 4 text: >- Proficient with containers and container platforms (examples: Swarm, Kubernetes) risk: green rationale: '' mitigation: '' - order: 5 text: Application containerization has not yet been attempted risk: green rationale: '' mitigation: '' - order: 3 text: How does the application acquire security keys or certificates? explanation: >- How does the application retrieve credentials, keys, or certificates? External systems are runtime dependencies. answers: - order: 0 text: unknown risk: unknown rationale: '' mitigation: '' - order: 1 text: Hardware security modules or encryption devices risk: red rationale: '' mitigation: '' - order: 2 text: >- Keys/certificates bound to IP addresses and generated at runtime for each application instance risk: red rationale: '' mitigation: '' - order: 3 text: Keys/certificates compiled into the application risk: yellow rationale: '' mitigation: '' - order: 4 text: Loaded from a shared disk risk: yellow rationale: '' mitigation: '' - order: 5 text: >- Retrieved from an external server (examples: HashiCorp Vault, CyberArk Conjur) risk: yellow rationale: '' mitigation: '' - order: 6 text: Loaded from files risk: green rationale: '' mitigation: '' - order: 7 text: Not required risk: green rationale: '' mitigation: '' thresholds: red: 5 yellow: 30 unknown: 5 riskMessages: red: '' yellow: '' green: '' unknown: '' builtin: true 7.2.2. The custom assessment questionnaire You can use the Migration Toolkit for Applications (MTA) to import a custom assessment questionnaire by using a custom YAML syntax to define the questionnaire. The YAML syntax supports the following features: Conditional questions The YAML syntax supports including or excluding questions based on tags existing on the application or archetype, for example: If the application or archetype has the Language/Java tag, the What is the main JAVA framework used in your application? question is included in the questionnaire: ... questions: - order: 1 text: What is the main JAVA framework used in your application? explanation: Identify the primary JAVA framework used in your application. includeFor: - category: Language tag: Java ... If the application or archetype has the Deployment/Serverless and Architecture/Monolith tag, the Are you currently using any form of container orchestration? question is excluded from the questionnaire: ... questions: - order: 4 text: Are you currently using any form of container orchestration? explanation: Determine if the application utilizes container orchestration tools like Kubernetes, Docker Swarm, etc. excludeFor: - category: Deployment tag: Serverless - category: Architecture tag: Monolith ... Automated answers based on tags present on the assessed application or archetype Automated answers are selected based on the tags existing on the application or archetype. For example, if an application or archetype has the Runtime/Quarkus tag, the Quarkus answer is automatically selected, and if an application or archetype has the Runtime/Spring Boot tag, the Spring Boot answer is automatically selected: ... text: What is the main technology in your application? explanation: Identify the main framework or technology used in your application. 
answers: - order: 1 text: Quarkus risk: green autoAnswerFor: - category: Runtime tag: Quarkus - order: 2 text: Spring Boot risk: green autoAnswerFor: - category: Runtime tag: Spring Boot ... Automatic tagging of applications based on answers During the assessment, tags are automatically applied to the application or archetype based on the answer if this answer is selected. Note that the tags are transitive. Therefore, the tags are removed if the assessment is discarded. Each tag is defined by the following elements: category : Category of the target tag ( String ). tag : Definition for the target tag as ( String ). For example, if the selected answer is Quarkus , the Runtime/Quarkus tag is applied to the assessed application or archetype. If the selected answer is Spring Boot , the Runtime/Spring Boot tag is applied to the assessed application or archetype: ... questions: - order: 1 text: What is the main technology in your application? explanation: Identify the main framework or technology used in your application. answers: - order: 1 text: Quarkus risk: green applyTags: - category: Runtime tag: Quarkus - order: 2 text: Spring Boot risk: green applyTags: - category: Runtime tag: Spring Boot ... 7.2.2.1. The YAML template for the custom questionnaire You can use the following YAML template to build your custom questionnaire. You can download this template by clicking Download YAML template on the Assessment questionnaires page. Example 7.2. The YAML template for the custom questionnaire name: Uploadable Cloud Readiness Questionnaire Template description: This questionnaire is an example template for assessing cloud readiness. It serves as a guide for users to create and customize their own questionnaire templates. required: true sections: - order: 1 name: Application Technologies questions: - order: 1 text: What is the main technology in your application? explanation: Identify the main framework or technology used in your application. includeFor: - category: Language tag: Java answers: - order: 1 text: Quarkus risk: green rationale: Quarkus is a modern, container-friendly framework. mitigation: No mitigation needed. applyTags: - category: Runtime tag: Quarkus autoAnswerFor: - category: Runtime tag: Quarkus - order: 2 text: Spring Boot risk: green rationale: Spring Boot is versatile and widely used. mitigation: Ensure container compatibility. applyTags: - category: Runtime tag: Spring Boot autoAnswerFor: - category: Runtime tag: Spring Boot - order: 3 text: Legacy Monolithic Application risk: red rationale: Legacy monoliths are challenging for cloud adaptation. mitigation: Consider refactoring into microservices. - order: 2 text: Does your application use a microservices architecture? explanation: Assess if the application is built using a microservices architecture. answers: - order: 1 text: Yes risk: green rationale: Microservices are well-suited for cloud environments. mitigation: Continue monitoring service dependencies. - order: 2 text: No risk: yellow rationale: Non-microservices architectures may face scalability issues. mitigation: Assess the feasibility of transitioning to microservices. - order: 3 text: Unknown risk: unknown rationale: Lack of clarity on architecture can lead to unplanned issues. mitigation: Conduct an architectural review. - order: 3 text: Is your application's data storage cloud-optimized? explanation: Evaluate if the data storage solution is optimized for cloud usage. 
includeFor: - category: Language tag: Java answers: - order: 1 text: Cloud-Native Storage Solution risk: green rationale: Cloud-native solutions offer scalability and resilience. mitigation: Ensure regular backups and disaster recovery plans. - order: 2 text: Traditional On-Premises Storage risk: red rationale: Traditional storage might not scale well in the cloud. mitigation: Explore cloud-based storage solutions. - order: 3 text: Hybrid Storage Approach risk: yellow rationale: Hybrid solutions may have integration complexities. mitigation: Evaluate and optimize cloud integration points. thresholds: red: 1 yellow: 30 unknown: 15 riskMessages: red: Requires deep changes in architecture or lifecycle yellow: Cloud friendly but needs minor changes green: Cloud Native unknown: More information needed Additional resources The custom questionnaire fields 7.2.2.2. The custom questionnaire fields Every custom questionnaire field marked as required is mandatory and must be completed. Otherwise, the YAML syntax will not validate on upload. Each subsection of the field defines a new structure or object in YAML, for example: ... name: Testing thresholds: red: 30 yellow: 45 unknown: 5 ... Table 7.1. The custom questionnaire fields Questionnaire field Description name (required, string) The name of the questionnaire. This field must be unique for the entire MTA instance. description (optional, string) A short description of the questionnaire. thresholds (required) The definition of a threshold for each risk category of the application or archetype that is considered to be affected by that risk level. The threshold values can be the following: red (required, unsigned integer): Numeric percentage, for example, 30 for 30% , of red answers that the questionnaire can have until the risk level is considered red. yellow (required, unsigned integer): Numeric percentage, for example, 30 for 30% , of yellow answers that the questionnaire can have until the risk level is considered yellow. unknown (required, unsigned integer): Numeric percentage, for example, 30 for 30% , of unknown answers that the questionnaire can have until the risk level is considered unknown. The higher risk level always takes precedence. For example, if the yellow threshold is set to 30% and red to 5%, and the answers for the application or archetype are set to have 35% yellow and 6% red , the risk level for the application or archetype is red. riskMessages (required) Messages to be displayed in reports for each risk category. The risk_messages map is defined by the following fields: red (required, string): A message to be displayed in reports for the red risk level. yellow (required, string): A message to be displayed in reports for the yellow risk level. green (required, string): A message to be displayed in reports for the green risk level. unknown (required, string): A message to be displayed in reports for the unknown risk level. sections (required) A list of sections that the questionnaire must include. name (required, string): A name to be displayed for the section. order (required, integer): An order of the question in the section. comment (optional, string): A description the section. questions (required): A list of questions that belong to the section. order (required, integer): An order of the question in the section. text (required, string): A question to be asked. explanation (optional, string): An additional explanation for the question. 
includeFor (optional): A list that defines if a question must be displayed if any of the tags included in this list is present in the target application or archetype. category (required, string): A category of the target tag. tag (required, string): A target tag. excludeFor (optional): A list that defines if a question must be skipped if any of the tags included in the list is present in the target application or archetype. category (required, string): A category of the target tag. tag (required, string): A target tag. answers (required): A list of answers for the given question. order (required, integer): An order of the answer in the list of answers. text (required, string): An answer for the question. risk (required): An implied risk level (red, yellow, green, or unknown) of the current answer. rationale (optional, string): A justification for the answer that is being considered a risk. mitigation (optional, string): An explanation of the potential mitigation strategy for the risk implied by the answer. applyTags (optional): A list of tags to be automatically applied to the assessed application or archetype if this answer is selected. category (required, string): A category of the target tag. tag (required, string): A target tag. autoAnswerFor (optional, list): A list of tags that will lead to this answer being automatically selected when the application or archetype is assessed. category (required, string): A category of the target tag. tag (required, string): A target tag. Additional resources The YAML template for the custom questionnaire 7.3. Managing assessment questionnaires By using the MTA user interface, you can perform the following actions on assessment questionnaires: Display the questionnaire. You can also display the answer choices and their associated risk weight. Export the questionnaire to the desired location on your system. Import the questionnaire from your system. Warning The name of the imported questionnaire must be unique. If the name, which is defined in the YAML syntax ( name:<name of questionnaire> ), is duplicated, the import will fail with the following error message: UNIQUE constraint failed: Questionnaire.Name . Delete an assessment questionnaire. Warning When you delete a questionnaire, its answers for all applications and archetypes that use it are also deleted. Important You cannot delete the Legacy Pathfinder default questionnaire. Procedure Depending on your scenario, perform one of the following actions: Display the assessment questionnaire: In the Administration view, select Assessment questionnaires . Click the Options menu ( ). Select View for the questionnaire you want to display. Optional: Click the arrow to the left of the question to display the answer choices and their risk weight. Export the assessment questionnaire: In the Administration view, select Assessment questionnaires . Select the desired questionnaire. Click the Options menu ( ). Select Export . Select the location of the download. Click Save . Import the assessment questionnaire: In the Administration view, select Assessment questionnaires . Click Import questionnaire . Click Upload . Navigate to the location of your questionnaire. Click Open . Import the desired questionnaire by clicking Import . A minimal example of an importable questionnaire is sketched at the end of this section. Delete the assessment questionnaire: In the Administration view, select Assessment questionnaires . Select the questionnaire you want to delete. Click the Options menu ( ). Select Delete . Confirm the deletion by entering the name of the questionnaire.
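The following sketch is a minimal, illustrative questionnaire that combines only the fields documented in Table 7.1. The name, section, question, answers, and threshold values are invented for demonstration; this is not a built-in questionnaire, but it could serve as a starting point for the import procedure above after you adapt the content to your needs.

name: Minimal Example Questionnaire
description: Illustrative questionnaire that uses only the documented fields.
thresholds:
  red: 5
  yellow: 30
  unknown: 15
riskMessages:
  red: Requires significant preparation before migration
  yellow: Cloud friendly but needs minor changes
  green: Ready for containerization
  unknown: More information needed
sections:
  - order: 1
    name: Example section
    questions:
      - order: 1
        text: Is the application stateless?
        explanation: Stateless applications are generally easier to containerize.
        answers:
          - order: 1
            text: Yes
            risk: green
          - order: 2
            text: No
            risk: yellow
          - order: 3
            text: Unknown
            risk: unknown

With these thresholds, the higher risk level takes precedence as described in Table 7.1: if, for example, more than 30% of the answers are yellow and more than 5% are red, the overall risk level is red. Because the name field must be unique for the entire MTA instance, rename the questionnaire before importing it alongside an existing copy.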
Additional resources The default assessment questionnaire The custom assessment questionnaire 7.4. Assessing an application You can estimate the risks and costs involved in preparing applications for containerization by performing application assessment. You can assess an application and display the currently saved assessments by using the Assessment module. The Migration Toolkit for Applications (MTA) assesses applications according to a set of questions relevant to the application, such as dependencies. To assess the application, you can use the default Legacy Pathfinder MTA questionnaire or import your custom questionnaires. Important You can assess only one application at a time. Prerequisites You are logged in to an MTA server. Procedure In the MTA user interface, select the Migration view. Click Application inventory in the left menu bar. A list of the available applications appears in the main pane. Select the application you want to assess. Click the Options menu ( ) at the right end of the row and select Assess from the drop-down menu. From the list of available questionnaires, click Take for the desired questionnaire. Select Stakeholders and Stakeholder groups from the lists to track who contributed to the assessment for future reference. Note You can also add Stakeholder Groups or Stakeholders in the Controls pane of the Migration view. For more information, see Seeding an instance . Click . Answer each Application assessment question and click . Click Save to review the assessment and proceed with the steps in Reviewing an application . Note If you see false positives in an application that is not fully resolvable, this is not entirely unexpected. The reason is that MTA cannot discover the class that is being called. Therefore, MTA cannot determine whether it is a valid match or not. When this happens, MTA defaults to exposing more information rather than less. In this situation, the following solutions are suggested: Ensure that the Maven settings can resolve all the dependencies. Ensure that the application can be fully compiled. Additional resources The default assessment questionnaire The custom assessment questionnaire Managing assessment questionnaires 7.5. Reviewing an application You can use the Migration Toolkit for Applications (MTA) user interface to determine the migration strategy and work priority for each application. Important You can review only one application at a time. Procedure In the Migration view, click Application inventory . Select the application you want to review. Review the application by performing either of the following actions: Click Save and Review while assessing the application. For more information, see Assessing an application . Click the Options menu ( ) at the right end of the row and select Review from the drop-down list. The application Review parameters appear in the main pane. Click Proposed action and select the action. Click Effort estimate and set the level of effort required to perform the assessment with the selected questionnaire. In the Business criticality field, enter how critical the application is to the business. In the Work priority field, enter the application's priority. Optional: Enter the assessment questionnaire comments in the Comments field. Click Submit review . The fields from Review are now populated on the Application details page. 7.6. Reviewing an assessment report An MTA assessment report displays an aggregated assessment of the data obtained from multiple questionnaires for multiple applications.
Procedure In the Migration view, click Reports . The aggregated assessment report for all applications is displayed. Depending on your scenario, perform one of the following actions: Display a report on the data from a particular questionnaire: Select the required questionnaire from a drop-down list of all questionnaires in the Current landscape pane of the report. By default, all questionnaires are selected. In the Identified risks pane of the report, sort the displayed list by application name, level of risk, questionnaire, questionnaire section, question, and answer. Display a report for a specific application: Click the link in the Applications column in the Identified risks pane of the report. The Application inventory page opens. The applications included in the link are displayed as a list. Click the required application. The Assessment side pane opens. To see the assessed risk level for the application, open the Details tab. To see the details of the assessment, open the Reviews tab. 7.7. Tagging an application You can attach various tags to the application that you are analyzing. You can use tags to classify applications and instantly identify application information, for example, an application type, data center location, and technologies used within the application. You can also use tagging to associate archetypes with applications for automatic assessment. For more information about archetypes, see Working with archetypes . Tagging can be done automatically during the analysis or manually at any time. Note Not all tags can be assigned automatically. For example, an analysis can only tag the application based on its technologies. If you also want to tag the application with the location of the data center where it is deployed, you must tag the application manually. 7.7.1. Creating application tags You can create custom tags for applications that MTA assesses or analyzes. Procedure In the Migration view, click Controls . Click the Tags tab. Click Create tag . In the Name field of the dialog that opens, enter a unique name for the tag. Click the Tag category field and select the tag category to associate with the tag. Click Create . Optional: Edit the created tag or tag category: Edit the tag: In the list of tag categories under the Tags tab, open the list of tags in the desired category. Select Edit from the drop-down menu and edit the tag name in the Name field. Click the Tag category field and select the tag category to associate with the tag. Click Save . Edit the tag category: Under the Tags tab, select a defined tag category and click Edit . Edit the tag category's name in the Name field. Edit the category's Rank value. Click the Color field and select a color for the tag category. Click Save . 7.7.2. Manually tagging an application You can tag an application manually, either before or after you run an application analysis. Procedure In the Migration view, click Application inventory . In the row of the required application, click Edit ( ). The Update application window opens. Select the desired tags from the Select a tag(s) drop-down list. Click Save . 7.7.3. Automatic tagging MTA automatically spawns language discovery and technology discovery tasks when you add an application to the Application Inventory . When the language discovery task is running, the technology discovery and analysis tasks wait until the language discovery task is finished. These tasks automatically add tags to the application.
MTA can automatically add tags to the application based on the application analysis. Automatic tagging is especially useful when dealing with large portfolios of applications. Automatic tagging of applications based on application analysis is enabled by default. You can disable automatic tagging during application analysis by deselecting the Enable automated tagging checkbox in the Advanced section of the Analysis configuration wizard. Note To tag an application automatically, make sure that the Enable automated tagging checkbox is selected before you run an application analysis. 7.7.4. Displaying application tags You can display the tags attached to a particular application. Note You can display the tags that were attached automatically only after you have run an application analysis. Procedure In the Migration view, click Application inventory . Click the name of the required application. A side pane opens. Click the Tags tab. The tags attached to the application are displayed. 7.8. Working with archetypes An archetype is a group of applications with common characteristics. You can use archetypes to assess multiple applications at once. Application archetypes are defined by criteria tags and the application taxonomy. Each archetype defines how the assessment module assesses the application according to the characteristics defined in that archetype. If the tags of an application match the criteria tags of an archetype, the application is associated with the archetype. Creation of an archetype is defined by a series of tags , stakeholders , and stakeholder groups . The tags include the following types: Criteria tags are tags that the archetype requires to include an application as a member. Note If the archetype criteria tags match an application only partially, this application cannot be a member of the archetype. For example, if the application a only has tag a , but the archetype a criteria tags include tags a AND b , the application a will not be a member of the archetype a . Archetype tags are tags that are applied to the archetype entity. Note All applications associated with the archetype inherit the assessment and review from the archetype groups to which these applications belong. This is the default setting. You can override inheritance for the application by completing an individual assessment and review. 7.8.1. Creating an archetype When you create an archetype, an application in the inventory is automatically associated to that archetype if this application has the tags that match the criteria tags of the archetype. Procedure Open the MTA web console. In the left menu, click Archetypes . Click Create new archetype . In the form that opens, enter the following information for the new archetype: Name : A name of the new archetype (mandatory). Description : A description of the new archetype (optional). Criteria Tags : Tags that associate the assessed applications with the archetype (mandatory). If criteria tags are updated, the process to calculate the applications, which the archetype is associated with, is triggered again. Archetype Tags : Tags that the archetype assesses in the application (mandatory). Stakeholder(s) : Specific stakeholders involved in the application development and migration (optional). Stakeholders Group(s) : Groups of stakeholders involved in the application development and migration (optional). Click Create . 7.8.2. Assessing an archetype An archetype is considered assessed when all required questionnaires have been answered. 
Note If an application is associated with several archetypes, this application is considered assessed when all associated archetypes have been assessed. Prerequisites You are logged in to an MTA server. Procedure Open the MTA web console. Select the Migration view and click Archetypes . Click the Options menu ( ) and select Assess from the drop-down menu. From the list of available questionnaires, click Take to select the desired questionnaire. In the Assessment menu, answer the required questions. Click Save . 7.8.3. Reviewing an archetype An archetype is considered reviewed when it has been reviewed once, even if multiple questionnaires have been marked as required. Note If an application is associated with several archetypes, this application is considered reviewed when all associated archetypes have been reviewed. Prerequisites You are logged in to an MTA server. Procedure Open the MTA web console. Select the Migration view and click Archetypes . Click the Options menu ( ) and select Review from the drop-down menu. From the list of available questionnaires, click Take to select the desired assessment questionnaire. In the Assessment menu, answer the required questions. Select Save and Review . You will automatically be redirected to the Review tab. Enter the following information: Proposed Action : Proposed action required to complete the migration or modernization of the archetype. Effort estimate : The level of effort required to perform the modernization or migration of the selected archetype. Business criticality : The level of criticality of the application to the business. Work Priority : The archetype's priority. Click Submit review . 7.8.4. Deleting an archetype Deleting an archetype deletes any associated assessment and review. All associated applications move to the Unassessed and Unreviewed state. 7.9. Analyzing an application You can use the Migration Toolkit for Applications (MTA) user interface to configure and run an application analysis. The analysis determines which specific lines in the application must be modified before the application can be migrated or modernized. 7.9.1. Configuring and running an application analysis You can analyze more than one application at a time against more than one transformation target in the same analysis.
Select one of the following Scope options to better focus the analysis: Application and internal dependencies only. Application and all dependencies, including known Open Source libraries. Select the list of packages to be analyzed manually. If you choose this option, type the file name and click Add . Exclude packages. If you choose this option, type the name of the package and click Add . Click . In Advanced , you can attach additional custom rules to the analysis by selecting the Manual or Repository mode: In the Manual mode, click Add Rules . Drag the relevant files or select the files from their directory and click Add . In the Repository mode, you can add rule files from a Git or Subversion repository. Important Attaching custom rules is optional if you have already attached a migration target to the analysis. If you have not attached any migration target, you must attach rules. Optional: Set any of the following options: Target Source(s) Excluded rules tags. Rules with these tags are not processed. Add or delete as needed. Enable automated tagging. Select the checkbox to automatically attach tags to the application. This checkbox is selected by default. Note Automatically attached tags are displayed only after you run the analysis. You can attach tags to the application manually instead of enabling automated tagging or in addition to it. Note Analysis engines use standard rules for a comprehensive set of migration targets. However, if the target is not included, is a customized framework, or the application is written in a language that is not supported (for example, Node.js, Python), you can add custom rules by skipping the target selection in the Set Target tab and uploading custom rule files in the Custom Rules tab. Only custom rule files that are uploaded manually are validated. Click . In Review , verify the analysis parameters. Click Run . The analysis status is Scheduled as MTA downloads the image for the container to execute. When the image is downloaded, the status changes to In-progress. Note Analysis takes minutes to hours to run depending on the size of the application and the capacity and resources of the cluster. Tip MTA relies on Kubernetes scheduling capabilities to determine how many analyzer instances are created based on cluster capacity. If several applications are selected for analysis, by default, only one analyzer can be provisioned at a time. With more cluster capacity, more analysis processes can be executed in parallel. Optional: To track the status of your active analysis task, open the Task Manager drawer by clicking the notifications button. Alternatively, hover over the application name to display the pop-over window. When analysis is complete, to see its results, open the application drawer by clicking on the application name. Note After creating an application instance on the Application Inventory page, the language discovery task starts, automatically pre-selecting the target filter option. However, you can choose a different language that you prefer. 7.9.2. Reviewing analysis details You can display the activity log of the analysis. The activity log contains such analysis details as, for example, analysis steps. Procedure In the Migration view, click Application inventory . Click on the application row to open the application drawer. Click the Reports tab. Click View analysis details for the activity log of the analysis. 
Optional: For issues and dependencies found during the analysis, click the Details tab in the application drawer and click Issues or Dependencies . Alternatively, open the Issues or Dependencies page in the Migration view. 7.9.3. Accessing unmatched rules To access unmatched rules, you must run the analysis with enhanced logging enabled. Navigate to Advanced under Application analysis . Select Options . Check Enhanced advanced analysis details . When you run an analysis: Navigate to Reports in the side drawer. Click View analysis details , which opens the YAML/JSON format log view. Select the issues.yaml file. For each ruleset, there is an unmatched section that lists the rule IDs that did not match any rules. 7.9.4. Downloading an analysis report An MTA analysis report contains a number of sections, including a listing of the technologies used by the application, the dependencies of the application, and the lines of code that must be changed to successfully migrate or modernize the application. For more information about the contents of an MTA analysis report, see Reviewing the reports . For your convenience, you can download analysis reports. Note that by default this option is disabled. Procedure In the Administration view, click General . Toggle the Allow reports to be downloaded after running an analysis switch. Go to the Migration view and click Application inventory . Click on the application row to open the application drawer. Click the Reports tab. Click either the HTML or YAML link: By clicking the HTML link, you download the compressed analysis-report-app-<application_name>.tar file. Extracting this file creates a folder with the same name as the application. By clicking the YAML link, you download the uncompressed analysis-report-app-<application_name>.yaml file. 7.10. Controlling MTA tasks by using Task Manager Task Manager provides precise information about the Migration Toolkit for Applications (MTA) tasks queued for execution. Task Manager handles the following types of tasks: Application analysis Language discovery Technology discovery You can display task-related information in either of the following ways: To display active tasks, open the Task Manager drawer by clicking the notifications button. To display all tasks, open the Task Manager page in the Migration view. No multi-user access restrictions on resources There are no multi-user access restrictions on resources. For example, an analyzer task created by a user can be canceled by any other user. 7.10.1. Reviewing a task log To find details and logs of a particular Migration Toolkit for Applications (MTA) task, you can use the Task Manager page. Procedure In the Migration view, click Task Manager . Click the Options menu ( ) for the selected task. Click Task details . Alternatively, click on the task status in the Status column. 7.10.2. Controlling the order of task execution You can use Task Manager to preempt a Migration Toolkit for Applications (MTA) task you have scheduled for execution. Note You can enable Preemption on any scheduled task (not in the status of Running , Succeeded , or Failed ). However, only lower-priority tasks are candidates to be preempted. When a higher-priority task is blocked by lower-priority tasks and has Preemption enabled, the lower-priority tasks might be rescheduled so that the blocked higher-priority task can run. Therefore, it is only useful to enable Preemption on higher-priority tasks, for example, application analysis. Procedure In the Migration view, click Task Manager .
Click the Options menu ( ) for the selected task. Depending on your scenario, complete one of the following steps: To enable Preemption for the task, select Enable preemption . To disable Preemption for a task that already has it enabled, select Disable preemption .
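Returning briefly to the unmatched-rules log described in Section 7.9.3, the following is a minimal, hypothetical sketch of the kind of entry the issues.yaml file can contain for a ruleset. The ruleset name and rule IDs are invented for illustration, and the exact field layout may differ between MTA releases.

# Hypothetical excerpt from issues.yaml (names and layout are illustrative only)
- name: example-custom-ruleset      # one entry per ruleset
  unmatched:                        # rule IDs that did not match anything in the application
    - example-rule-001
    - example-rule-002

Reviewing the unmatched list after an analysis run is a quick way to confirm that a custom rule was evaluated but found nothing to flag.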
[ "name: Legacy Pathfinder description: '' sections: - order: 1 name: Application details questions: - order: 1 text: >- Does the application development team understand and actively develop the application? explanation: >- How much knowledge does the team have about the application's development or usage? answers: - order: 2 text: >- Maintenance mode, no SME knowledge or adequate documentation available risk: red rationale: '' mitigation: '' - order: 0 text: unknown risk: unknown rationale: '' mitigation: '' - order: 1 text: >- Little knowledge, no development (example: third-party or commercial off-the-shelf application) risk: red rationale: '' mitigation: '' - order: 3 text: Maintenance mode, SME knowledge is available risk: yellow rationale: '' mitigation: '' - order: 4 text: Actively developed, SME knowledge is available risk: green rationale: '' mitigation: '' - order: 5 text: greenfield application risk: green rationale: '' mitigation: '' - order: 2 text: How is the application supported in production? explanation: >- Does the team have sufficient knowledge to support the application in production? answers: - order: 3 text: >- Multiple teams provide support using an established escalation model risk: yellow rationale: '' mitigation: '' - order: 0 text: unknown risk: unknown rationale: '' mitigation: '' - order: 1 text: >- External support provider with a ticket-driven escalation process; no inhouse support resources risk: red rationale: '' mitigation: '' - order: 2 text: >- Separate internal support team, separate from the development team, with little interaction between the teams risk: red rationale: '' mitigation: '' - order: 4 text: >- SRE (Site Reliability Engineering) approach with a knowledgeable and experienced operations team risk: green rationale: '' mitigation: '' - order: 5 text: >- DevOps approach with the same team building the application and supporting it in production risk: green rationale: '' mitigation: '' - order: 3 text: >- How much time passes from when code is committed until the application is deployed to production? explanation: What is the development latency? answers: - order: 3 text: 2-6 months risk: yellow rationale: '' mitigation: '' - order: 0 text: unknown risk: unknown rationale: '' mitigation: '' - order: 1 text: Not tracked risk: red rationale: '' mitigation: '' - order: 2 text: More than 6 months risk: red rationale: '' mitigation: '' - order: 4 text: 8-30 days risk: green rationale: '' mitigation: '' - order: 5 text: 1-7 days risk: green rationale: '' mitigation: '' - order: 6 text: Less than 1 day risk: green rationale: '' mitigation: '' - order: 4 text: How often is the application deployed to production? explanation: Deployment frequency answers: - order: 3 text: Between once a month and once every 6 months risk: yellow rationale: '' mitigation: '' - order: 0 text: unknown risk: unknown rationale: '' mitigation: '' - order: 1 text: Not tracked risk: red rationale: '' mitigation: '' - order: 2 text: Less than once every 6 months risk: red rationale: '' mitigation: '' - order: 4 text: Weekly risk: green rationale: '' mitigation: '' - order: 5 text: Daily risk: green rationale: '' mitigation: '' - order: 6 text: Several times a day risk: green rationale: '' mitigation: '' - order: 5 text: >- What is the application's mean time to recover (MTTR) from failure in a production environment? 
explanation: Average time for the application to recover from failure answers: - order: 5 text: Less than 1 hour risk: green rationale: '' mitigation: '' - order: 0 text: unknown risk: unknown rationale: '' mitigation: '' - order: 1 text: Not tracked risk: red rationale: '' mitigation: '' - order: 3 text: 1-7 days risk: yellow rationale: '' mitigation: '' - order: 2 text: 1 month or more risk: red rationale: '' mitigation: '' - order: 4 text: 1-24 hours risk: green rationale: '' mitigation: '' - order: 6 text: Does the application have legal and/or licensing requirements? explanation: >- Legal and licensing requirements must be assessed to determine their possible impact (cost, fault reporting) on the container platform hosting the application. Examples of legal requirements: isolated clusters, certifications, compliance with the Payment Card Industry Data Security Standard or the Health Insurance Portability and Accountability Act. Examples of licensing requirements: per server, per CPU. answers: - order: 1 text: Multiple legal and licensing requirements risk: red rationale: '' mitigation: '' - order: 0 text: unknown risk: unknown rationale: '' mitigation: '' - order: 2 text: 'Licensing requirements (examples: per server, per CPU)' risk: red rationale: '' mitigation: '' - order: 3 text: >- Legal requirements (examples: cluster isolation, hardware, PCI or HIPAA compliance) risk: yellow rationale: '' mitigation: '' - order: 4 text: None risk: green rationale: '' mitigation: '' - order: 7 text: Which model best describes the application architecture? explanation: Describe the application architecture in simple terms. answers: - order: 3 text: >- Complex monolith, strict runtime dependency startup order, non-resilient architecture risk: yellow rationale: '' mitigation: '' - order: 0 text: unknown risk: unknown rationale: '' mitigation: '' - order: 5 text: Independently deployable components risk: green rationale: '' mitigation: '' - order: 1 text: >- Massive monolith (high memory and CPU usage), singleton deployment, vertical scale only risk: yellow rationale: '' mitigation: '' - order: 2 text: >- Massive monolith (high memory and CPU usage), non-singleton deployment, complex to scale horizontally risk: yellow rationale: '' mitigation: '' - order: 4 text: 'Resilient monolith (examples: retries, circuit breakers)' risk: green rationale: '' mitigation: '' - order: 2 name: Application dependencies questions: - order: 1 text: Does the application require specific hardware? explanation: >- OpenShift Container Platform runs only on x86, IBM Power, or IBM Z systems answers: - order: 3 text: 'Requires specific computer hardware (examples: GPUs, RAM, HDDs)' risk: yellow rationale: '' mitigation: '' - order: 0 text: unknown risk: unknown rationale: '' mitigation: '' - order: 1 text: Requires CPU that is not supported by red Hat risk: red rationale: '' mitigation: '' - order: 2 text: 'Requires custom or legacy hardware (example: USB device)' risk: red rationale: '' mitigation: '' - order: 4 text: Requires CPU that is supported by red Hat risk: green rationale: '' mitigation: '' - order: 2 text: What operating system does the application require? explanation: >- Only Linux and certain Microsoft Windows versions are supported in containers. Check the latest versions and requirements. 
answers: - order: 4 text: Microsoft Windows risk: yellow rationale: '' mitigation: '' - order: 0 text: unknown risk: unknown rationale: '' mitigation: '' - order: 1 text: >- Operating system that is not compatible with OpenShift Container Platform (examples: OS X, AIX, Unix, Solaris) risk: red rationale: '' mitigation: '' - order: 2 text: Linux with custom kernel drivers or a specific kernel version risk: red rationale: '' mitigation: '' - order: 3 text: 'Linux with custom capabilities (examples: seccomp, root access)' risk: yellow rationale: '' mitigation: '' - order: 5 text: Standard Linux distribution risk: green rationale: '' mitigation: '' - order: 3 text: >- Does the vendor provide support for a third-party component running in a container? explanation: Will the vendor support a component if you run it in a container? answers: - order: 2 text: No vendor support for containers risk: red rationale: '' mitigation: '' - order: 0 text: unknown risk: unknown rationale: '' mitigation: '' - order: 1 text: Not recommended to run the component in a container risk: red rationale: '' mitigation: '' - order: 3 text: >- Vendor supports containers but with limitations (examples: functionality is restricted, component has not been tested) risk: yellow rationale: '' mitigation: '' - order: 4 text: >- Vendor supports their application running in containers but you must build your own images risk: yellow rationale: '' mitigation: '' - order: 5 text: Vendor fully supports containers, provides certified images risk: green rationale: '' mitigation: '' - order: 6 text: No third-party components required risk: green rationale: '' mitigation: '' - order: 4 text: Incoming/northbound dependencies explanation: Systems or applications that call the application answers: - order: 3 text: >- Many dependencies exist, can be changed because the systems are internally managed risk: green rationale: '' mitigation: '' - order: 0 text: unknown risk: unknown rationale: '' mitigation: '' - order: 4 text: Internal dependencies only risk: green rationale: '' mitigation: '' - order: 1 text: >- Dependencies are difficult or expensive to change because they are legacy or third-party risk: red rationale: '' mitigation: '' - order: 2 text: >- Many dependencies exist, can be changed but the process is expensive and time-consuming risk: yellow rationale: '' mitigation: '' - order: 5 text: No incoming/northbound dependencies risk: green rationale: '' mitigation: '' - order: 5 text: Outgoing/southbound dependencies explanation: Systems or applications that the application calls answers: - order: 3 text: Application not ready until dependencies are verified available risk: yellow rationale: '' mitigation: '' - order: 0 text: unknown risk: unknown rationale: '' mitigation: '' - order: 1 text: >- Dependency availability only verified when application is processing traffic risk: red rationale: '' mitigation: '' - order: 2 text: Dependencies require a complex and strict startup order risk: yellow rationale: '' mitigation: '' - order: 4 text: Limited processing available if dependencies are unavailable risk: green rationale: '' mitigation: '' - order: 5 text: No outgoing/southbound dependencies risk: green rationale: '' mitigation: '' - order: 3 name: Application architecture questions: - order: 1 text: >- How resilient is the application? How well does it recover from outages and restarts? explanation: >- If the application or one of its dependencies fails, how does the application recover from failure? Is manual intervention required? 
answers: - order: 0 text: unknown risk: unknown rationale: '' mitigation: '' - order: 1 text: >- Application cannot be restarted cleanly after failure, requires manual intervention risk: red rationale: '' mitigation: '' - order: 2 text: >- Application fails when a soutbound dependency is unavailable and does not recover automatically risk: red rationale: '' mitigation: '' - order: 3 text: >- Application functionality is limited when a dependency is unavailable but recovers when the dependency is available risk: yellow rationale: '' mitigation: '' - order: 4 text: >- Application employs resilient architecture patterns (examples: circuit breakers, retry mechanisms) risk: green rationale: '' mitigation: '' - order: 5 text: >- Application containers are randomly terminated to test resiliency; chaos engineering principles are followed risk: green rationale: '' mitigation: '' - order: 2 text: How does the external world communicate with the application? explanation: >- What protocols do external clients use to communicate with the application? answers: - order: 0 text: unknown risk: unknown rationale: '' mitigation: '' - order: 1 text: 'Non-TCP/IP protocols (examples: serial, IPX, AppleTalk)' risk: red rationale: '' mitigation: '' - order: 2 text: TCP/IP, with host name or IP address encapsulated in the payload risk: red rationale: '' mitigation: '' - order: 3 text: 'TCP/UDP without host addressing (example: SSH)' risk: yellow rationale: '' mitigation: '' - order: 4 text: TCP/UDP encapsulated, using TLS with SNI header risk: green rationale: '' mitigation: '' - order: 5 text: HTTP/HTTPS risk: green rationale: '' mitigation: '' - order: 3 text: How does the application manage its internal state? explanation: >- If the application must manage or retain an internal state, how is this done? answers: - order: 0 text: unknown risk: unknown rationale: '' mitigation: '' - order: 3 text: State maintained in non-shared, non-ephemeral storage risk: yellow rationale: '' mitigation: '' - order: 1 text: Application components use shared memory within a pod risk: yellow rationale: '' mitigation: '' - order: 2 text: >- State is managed externally by another product (examples: Zookeeper or red Hat Data Grid) risk: yellow rationale: '' mitigation: '' - order: 4 text: Disk shared between application instances risk: green rationale: '' mitigation: '' - order: 5 text: Stateless or ephemeral container storage risk: green rationale: '' mitigation: '' - order: 4 text: How does the application handle service discovery? explanation: How does the application discover services? answers: - order: 0 text: unknown risk: unknown rationale: '' mitigation: '' - order: 1 text: >- Uses technologies that are not compatible with Kubernetes (examples: hardcoded IP addresses, custom cluster manager) risk: red rationale: '' mitigation: '' - order: 2 text: >- Requires an application or cluster restart to discover new service instances risk: red rationale: '' mitigation: '' - order: 3 text: >- Uses technologies that are compatible with Kubernetes but require specific libraries or services (examples: HashiCorp Consul, Netflix Eureka) risk: yellow rationale: '' mitigation: '' - order: 4 text: Uses Kubernetes DNS name resolution risk: green rationale: '' mitigation: '' - order: 5 text: Does not require service discovery risk: green rationale: '' mitigation: '' - order: 5 text: How is the application clustering managed? explanation: >- Does the application require clusters? If so, how is clustering managed? 
answers: - order: 0 text: unknown risk: unknown rationale: '' mitigation: '' - order: 1 text: 'Manually configured clustering (example: static clusters)' risk: red rationale: '' mitigation: '' - order: 2 text: Managed by an external off-PaaS cluster manager risk: red rationale: '' mitigation: '' - order: 3 text: >- Managed by an application runtime that is compatible with Kubernetes risk: green rationale: '' mitigation: '' - order: 4 text: No cluster management required risk: green rationale: '' mitigation: '' - order: 4 name: Application observability questions: - order: 1 text: How does the application use logging and how are the logs accessed? explanation: How the application logs are accessed answers: - order: 0 text: unknown risk: unknown rationale: '' mitigation: '' - order: 1 text: Logs are unavailable or are internal with no way to export them risk: red rationale: '' mitigation: '' - order: 2 text: >- Logs are in a custom binary format, exposed with non-standard protocols risk: red rationale: '' mitigation: '' - order: 3 text: Logs are exposed using syslog risk: yellow rationale: '' mitigation: '' - order: 4 text: Logs are written to a file system, sometimes as multiple files risk: yellow rationale: '' mitigation: '' - order: 5 text: 'Logs are forwarded to an external logging system (example: Splunk)' risk: green rationale: '' mitigation: '' - order: 6 text: 'Logs are configurable (example: can be sent to stdout)' risk: green rationale: '' mitigation: '' - order: 2 text: Does the application provide metrics? explanation: >- Are application metrics available, if necessary (example: OpenShift Container Platform collects CPU and memory metrics)? answers: - order: 0 text: unknown risk: unknown rationale: '' mitigation: '' - order: 1 text: No metrics available risk: yellow rationale: '' mitigation: '' - order: 2 text: Metrics collected but not exposed externally risk: yellow rationale: '' mitigation: '' - order: 3 text: 'Metrics exposed using binary protocols (examples: SNMP, JMX)' risk: yellow rationale: '' mitigation: '' - order: 4 text: >- Metrics exposed using a third-party solution (examples: Dynatrace, AppDynamics) risk: green rationale: '' mitigation: '' - order: 5 text: >- Metrics collected and exposed with built-in Prometheus endpoint support risk: green rationale: '' mitigation: '' - order: 3 text: >- How easy is it to determine the application's health and readiness to handle traffic? explanation: >- How do we determine an application's health (liveness) and readiness to handle traffic? answers: - order: 0 text: unknown risk: unknown rationale: '' mitigation: '' - order: 1 text: No health or readiness query functionality available risk: red rationale: '' mitigation: '' - order: 3 text: Basic application health requires semi-complex scripting risk: yellow rationale: '' mitigation: '' - order: 4 text: Dedicated, independent liveness and readiness endpoints risk: green rationale: '' mitigation: '' - order: 2 text: Monitored and managed by a custom watchdog process risk: red rationale: '' mitigation: '' - order: 5 text: Health is verified by probes running synthetic transactions risk: green rationale: '' mitigation: '' - order: 4 text: What best describes the application's runtime characteristics? explanation: >- How would the profile of an application appear during runtime (examples: graphs showing CPU and memory usage, traffic patterns, latency)? What are the implications for a serverless application? 
answers: - order: 0 text: unknown risk: unknown rationale: '' mitigation: '' - order: 1 text: >- Deterministic and predictable real-time execution or control requirements risk: red rationale: '' mitigation: '' - order: 2 text: >- Sensitive to latency (examples: voice applications, high frequency trading applications) risk: yellow rationale: '' mitigation: '' - order: 3 text: Constant traffic with a broad range of CPU and memory usage risk: yellow rationale: '' mitigation: '' - order: 4 text: Intermittent traffic with predictable CPU and memory usage risk: green rationale: '' mitigation: '' - order: 5 text: Constant traffic with predictable CPU and memory usage risk: green rationale: '' mitigation: '' - order: 5 text: How long does it take the application to be ready to handle traffic? explanation: How long the application takes to boot answers: - order: 0 text: unknown risk: unknown rationale: '' mitigation: '' - order: 1 text: More than 5 minutes risk: red rationale: '' mitigation: '' - order: 2 text: 2-5 minutes risk: yellow rationale: '' mitigation: '' - order: 3 text: 1-2 minutes risk: yellow rationale: '' mitigation: '' - order: 4 text: 10-60 seconds risk: green rationale: '' mitigation: '' - order: 5 text: Less than 10 seconds risk: green rationale: '' mitigation: '' - order: 5 name: Application cross-cutting concerns questions: - order: 1 text: How is the application tested? explanation: >- Is the application is tested? Is it easy to test (example: automated testing)? Is it tested in production? answers: - order: 0 text: unknown risk: unknown rationale: '' mitigation: '' - order: 1 text: No testing or minimal manual testing only risk: red rationale: '' mitigation: '' - order: 2 text: Minimal automated testing, focused on the user interface risk: yellow rationale: '' mitigation: '' - order: 3 text: >- Some automated unit and regression testing, basic CI/CD pipeline testing; modern test practices are not followed risk: yellow rationale: '' mitigation: '' - order: 4 text: >- Highly repeatable automated testing (examples: unit, integration, smoke tests) before deploying to production; modern test practices are followed risk: green rationale: '' mitigation: '' - order: 5 text: >- Chaos engineering approach, constant testing in production (example: A/B testing + experimentation) risk: green rationale: '' mitigation: '' - order: 2 text: How is the application configured? explanation: >- How is the application configured? Is the configuration method appropriate for a container? External servers are runtime dependencies. 
answers: - order: 0 text: unknown risk: unknown rationale: '' mitigation: '' - order: 1 text: >- Configuration files compiled during installation and configured using a user interface risk: red rationale: '' mitigation: '' - order: 2 text: >- Configuration files are stored externally (example: in a database) and accessed using specific environment keys (examples: host name, IP address) risk: red rationale: '' mitigation: '' - order: 3 text: Multiple configuration files in multiple file system locations risk: yellow rationale: '' mitigation: '' - order: 4 text: >- Configuration files built into the application and enabled using system properties at runtime risk: yellow rationale: '' mitigation: '' - order: 5 text: >- Configuration retrieved from an external server (examples: Spring Cloud Config Server, HashiCorp Consul) risk: yellow rationale: '' mitigation: '' - order: 6 text: >- Configuration loaded from files in a single configurable location; environment variables used risk: green rationale: '' mitigation: '' - order: 4 text: How is the application deployed? explanation: >- How the application is deployed and whether the deployment process is suitable for a container platform answers: - order: 0 text: unknown risk: unknown rationale: '' mitigation: '' - order: 3 text: Simple automated deployment scripts risk: yellow rationale: '' mitigation: '' - order: 1 text: Manual deployment using a user interface risk: red rationale: '' mitigation: '' - order: 2 text: Manual deployment with some automation risk: red rationale: '' mitigation: '' - order: 4 text: >- Automated deployment with manual intervention or complex promotion through pipeline stages risk: yellow rationale: '' mitigation: '' - order: 5 text: >- Automated deployment with a full CI/CD pipeline, minimal intervention for promotion through pipeline stages risk: green rationale: '' mitigation: '' - order: 6 text: Fully automated (GitOps), blue-green, or canary deployment risk: green rationale: '' mitigation: '' - order: 5 text: Where is the application deployed? explanation: Where does the application run? answers: - order: 0 text: unknown risk: unknown rationale: '' mitigation: '' - order: 1 text: Bare metal server risk: green rationale: '' mitigation: '' - order: 2 text: 'Virtual machine (examples: red Hat Virtualization, VMware)' risk: green rationale: '' mitigation: '' - order: 3 text: 'Private cloud (example: red Hat OpenStack Platform)' risk: green rationale: '' mitigation: '' - order: 4 text: >- Public cloud provider (examples: Amazon Web Services, Microsoft Azure, Google Cloud Platform) risk: green rationale: '' mitigation: '' - order: 5 text: >- Platform as a service (examples: Heroku, Force.com, Google App Engine) risk: yellow rationale: '' mitigation: '' - order: 7 text: Other. Specify in the comments field risk: yellow rationale: '' mitigation: '' - order: 6 text: Hybrid cloud (public and private cloud providers) risk: green rationale: '' mitigation: '' - order: 6 text: How mature is the containerization process, if any? explanation: If the team has used containers in the past, how was it done? 
answers: - order: 0 text: unknown risk: unknown rationale: '' mitigation: '' - order: 1 text: Application runs in a container on a laptop or desktop risk: red rationale: '' mitigation: '' - order: 3 text: Some experience with containers but not yet fully defined risk: yellow rationale: '' mitigation: '' - order: 4 text: >- Proficient with containers and container platforms (examples: Swarm, Kubernetes) risk: green rationale: '' mitigation: '' - order: 5 text: Application containerization has not yet been attempted risk: green rationale: '' mitigation: '' - order: 3 text: How does the application acquire security keys or certificates? explanation: >- How does the application retrieve credentials, keys, or certificates? External systems are runtime dependencies. answers: - order: 0 text: unknown risk: unknown rationale: '' mitigation: '' - order: 1 text: Hardware security modules or encryption devices risk: red rationale: '' mitigation: '' - order: 2 text: >- Keys/certificates bound to IP addresses and generated at runtime for each application instance risk: red rationale: '' mitigation: '' - order: 3 text: Keys/certificates compiled into the application risk: yellow rationale: '' mitigation: '' - order: 4 text: Loaded from a shared disk risk: yellow rationale: '' mitigation: '' - order: 5 text: >- Retrieved from an external server (examples: HashiCorp Vault, CyberArk Conjur) risk: yellow rationale: '' mitigation: '' - order: 6 text: Loaded from files risk: green rationale: '' mitigation: '' - order: 7 text: Not required risk: green rationale: '' mitigation: '' thresholds: red: 5 yellow: 30 unknown: 5 riskMessages: red: '' yellow: '' green: '' unknown: '' builtin: true", "questions: - order: 1 text: What is the main JAVA framework used in your application? explanation: Identify the primary JAVA framework used in your application. includeFor: - category: Language tag: Java", "questions: - order: 4 text: Are you currently using any form of container orchestration? explanation: Determine if the application utilizes container orchestration tools like Kubernetes, Docker Swarm, etc. excludeFor: - category: Deployment tag: Serverless - category: Architecture tag: Monolith", "text: What is the main technology in your application? explanation: Identify the main framework or technology used in your application. answers: - order: 1 text: Quarkus risk: green autoAnswerFor: - category: Runtime tag: Quarkus - order: 2 text: Spring Boot risk: green autoAnswerFor: - category: Runtime tag: Spring Boot", "questions: - order: 1 text: What is the main technology in your application? explanation: Identify the main framework or technology used in your application. answers: - order: 1 text: Quarkus risk: green applyTags: - category: Runtime tag: Quarkus - order: 2 text: Spring Boot risk: green applyTags: - category: Runtime tag: Spring Boot", "name: Uploadable Cloud Readiness Questionnaire Template description: This questionnaire is an example template for assessing cloud readiness. It serves as a guide for users to create and customize their own questionnaire templates. required: true sections: - order: 1 name: Application Technologies questions: - order: 1 text: What is the main technology in your application? explanation: Identify the main framework or technology used in your application. includeFor: - category: Language tag: Java answers: - order: 1 text: Quarkus risk: green rationale: Quarkus is a modern, container-friendly framework. mitigation: No mitigation needed. 
applyTags: - category: Runtime tag: Quarkus autoAnswerFor: - category: Runtime tag: Quarkus - order: 2 text: Spring Boot risk: green rationale: Spring Boot is versatile and widely used. mitigation: Ensure container compatibility. applyTags: - category: Runtime tag: Spring Boot autoAnswerFor: - category: Runtime tag: Spring Boot - order: 3 text: Legacy Monolithic Application risk: red rationale: Legacy monoliths are challenging for cloud adaptation. mitigation: Consider refactoring into microservices. - order: 2 text: Does your application use a microservices architecture? explanation: Assess if the application is built using a microservices architecture. answers: - order: 1 text: Yes risk: green rationale: Microservices are well-suited for cloud environments. mitigation: Continue monitoring service dependencies. - order: 2 text: No risk: yellow rationale: Non-microservices architectures may face scalability issues. mitigation: Assess the feasibility of transitioning to microservices. - order: 3 text: Unknown risk: unknown rationale: Lack of clarity on architecture can lead to unplanned issues. mitigation: Conduct an architectural review. - order: 3 text: Is your application's data storage cloud-optimized? explanation: Evaluate if the data storage solution is optimized for cloud usage. includeFor: - category: Language tag: Java answers: - order: 1 text: Cloud-Native Storage Solution risk: green rationale: Cloud-native solutions offer scalability and resilience. mitigation: Ensure regular backups and disaster recovery plans. - order: 2 text: Traditional On-Premises Storage risk: red rationale: Traditional storage might not scale well in the cloud. mitigation: Explore cloud-based storage solutions. - order: 3 text: Hybrid Storage Approach risk: yellow rationale: Hybrid solutions may have integration complexities. mitigation: Evaluate and optimize cloud integration points. thresholds: red: 1 yellow: 30 unknown: 15 riskMessages: red: Requires deep changes in architecture or lifecycle yellow: Cloud friendly but needs minor changes green: Cloud Native unknown: More information needed", "name: Testing thresholds: red: 30 yellow: 45 unknown: 5" ]
https://docs.redhat.com/en/documentation/migration_toolkit_for_applications/7.2/html/user_interface_guide/assessing-and-analyzing-applications
Chapter 3. Red Hat Enterprise Linux 7
Chapter 3. Red Hat Enterprise Linux 7 This section outlines the packages released for Red Hat Enterprise Linux 7. 3.1. Red Hat Satellite Client 6 (for RHEL 7 Server) (RPMs) The following table outlines the packages included in the rhel-7-server-satellite-client-6-rpms repository. Table 3.1. Red Hat Satellite Client 6 (for RHEL 7 Server) (RPMs) Name Version Advisory gofer 2.12.5-7.el7sat RHBA-2022:96562 katello-agent 3.5.7-3.el7sat RHBA-2022:96562 katello-host-tools 3.5.7-3.el7sat RHBA-2022:96562 katello-host-tools-fact-plugin 3.5.7-3.el7sat RHBA-2022:96562 katello-host-tools-tracer 3.5.7-3.el7sat RHBA-2022:96562 puppet-agent 7.16.0-2.el7sat RHBA-2022:96562 python-argcomplete 1.7.0-2.el7sat RHBA-2022:96562 python-gofer 2.12.5-7.el7sat RHBA-2022:96562 python-gofer-proton 2.12.5-7.el7sat RHBA-2022:96562 python-qpid-proton 0.33.0-6.el7_9 RHBA-2022:96562 python2-beautifulsoup4 4.6.3-2.el7sat RHBA-2022:96562 python2-future 0.16.0-11.el7sat RHBA-2022:96562 python2-psutil 5.7.2-2.el7sat RHBA-2022:96562 python2-tracer 0.7.3-1.el7sat RHBA-2022:96562 qpid-proton-c 0.33.0-6.el7_9 RHBA-2022:96562 rubygem-foreman_scap_client 0.5.0-1.el7sat RHBA-2022:96562 tracer-common 0.7.3-1.el7sat RHBA-2022:96562 3.2. Red Hat Satellite Client 6 (for RHEL 7 Workstation) (RPMs) The following table outlines the packages included in the rhel-7-workstation-satellite-client-6-rpms repository. Table 3.2. Red Hat Satellite Client 6 (for RHEL 7 Workstation) (RPMs) Name Version Advisory gofer 2.12.5-7.el7sat RHBA-2022:96562 katello-agent 3.5.7-3.el7sat RHBA-2022:96562 katello-host-tools 3.5.7-3.el7sat RHBA-2022:96562 katello-host-tools-fact-plugin 3.5.7-3.el7sat RHBA-2022:96562 katello-host-tools-tracer 3.5.7-3.el7sat RHBA-2022:96562 puppet-agent 7.16.0-2.el7sat RHBA-2022:96562 python-argcomplete 1.7.0-2.el7sat RHBA-2022:96562 python-gofer 2.12.5-7.el7sat RHBA-2022:96562 python-gofer-proton 2.12.5-7.el7sat RHBA-2022:96562 python-qpid-proton 0.33.0-6.el7_9 RHBA-2022:96562 python2-beautifulsoup4 4.6.3-2.el7sat RHBA-2022:96562 python2-future 0.16.0-11.el7sat RHBA-2022:96562 python2-psutil 5.7.2-2.el7sat RHBA-2022:96562 python2-tracer 0.7.3-1.el7sat RHBA-2022:96562 qpid-proton-c 0.33.0-6.el7_9 RHBA-2022:96562 rubygem-foreman_scap_client 0.5.0-1.el7sat RHBA-2022:96562 tracer-common 0.7.3-1.el7sat RHBA-2022:96562 3.3. Red Hat Satellite Client 6 (for RHEL 7 Desktop) (RPMs) The following table outlines the packages included in the rhel-7-desktop-satellite-client-6-rpms repository. Table 3.3. Red Hat Satellite Client 6 (for RHEL 7 Desktop) (RPMs) Name Version Advisory gofer 2.12.5-7.el7sat RHBA-2022:96562 katello-agent 3.5.7-3.el7sat RHBA-2022:96562 katello-host-tools 3.5.7-3.el7sat RHBA-2022:96562 katello-host-tools-fact-plugin 3.5.7-3.el7sat RHBA-2022:96562 katello-host-tools-tracer 3.5.7-3.el7sat RHBA-2022:96562 puppet-agent 7.16.0-2.el7sat RHBA-2022:96562 python-argcomplete 1.7.0-2.el7sat RHBA-2022:96562 python-gofer 2.12.5-7.el7sat RHBA-2022:96562 python-gofer-proton 2.12.5-7.el7sat RHBA-2022:96562 python-qpid-proton 0.33.0-6.el7_9 RHBA-2022:96562 python2-beautifulsoup4 4.6.3-2.el7sat RHBA-2022:96562 python2-future 0.16.0-11.el7sat RHBA-2022:96562 python2-psutil 5.7.2-2.el7sat RHBA-2022:96562 python2-tracer 0.7.3-1.el7sat RHBA-2022:96562 qpid-proton-c 0.33.0-6.el7_9 RHBA-2022:96562 rubygem-foreman_scap_client 0.5.0-1.el7sat RHBA-2022:96562 tracer-common 0.7.3-1.el7sat RHBA-2022:96562 3.4. 
Red Hat Satellite Client 6 (for RHEL 7 for Scientific Computing) (RPMs) The following table outlines the packages included in the rhel-7-for-hpc-node-satellite-client-6-rpms repository. Table 3.4. Red Hat Satellite Client 6 (for RHEL 7 for Scientific Computing) (RPMs) Name Version Advisory gofer 2.12.5-7.el7sat RHBA-2022:96562 katello-agent 3.5.7-3.el7sat RHBA-2022:96562 katello-host-tools 3.5.7-3.el7sat RHBA-2022:96562 katello-host-tools-fact-plugin 3.5.7-3.el7sat RHBA-2022:96562 katello-host-tools-tracer 3.5.7-3.el7sat RHBA-2022:96562 puppet-agent 7.16.0-2.el7sat RHBA-2022:96562 python-argcomplete 1.7.0-2.el7sat RHBA-2022:96562 python-gofer 2.12.5-7.el7sat RHBA-2022:96562 python-gofer-proton 2.12.5-7.el7sat RHBA-2022:96562 python-qpid-proton 0.33.0-6.el7_9 RHBA-2022:96562 python2-beautifulsoup4 4.6.3-2.el7sat RHBA-2022:96562 python2-future 0.16.0-11.el7sat RHBA-2022:96562 python2-psutil 5.7.2-2.el7sat RHBA-2022:96562 python2-tracer 0.7.3-1.el7sat RHBA-2022:96562 qpid-proton-c 0.33.0-6.el7_9 RHBA-2022:96562 rubygem-foreman_scap_client 0.5.0-1.el7sat RHBA-2022:96562 tracer-common 0.7.3-1.el7sat RHBA-2022:96562 3.5. Red Hat Satellite Client 6 (for RHEL 7 for IBM Power LE) (RPMs) The following table outlines the packages included in the rhel-7-for-power-le-satellite-client-6-rpms repository. Table 3.5. Red Hat Satellite Client 6 (for RHEL 7 for IBM Power LE) (RPMs) Name Version Advisory gofer 2.12.5-7.el7sat RHBA-2022:96562 katello-agent 3.5.7-3.el7sat RHBA-2022:96562 katello-host-tools 3.5.7-3.el7sat RHBA-2022:96562 katello-host-tools-fact-plugin 3.5.7-3.el7sat RHBA-2022:96562 katello-host-tools-tracer 3.5.7-3.el7sat RHBA-2022:96562 python-argcomplete 1.7.0-2.el7sat RHBA-2022:96562 python-gofer 2.12.5-7.el7sat RHBA-2022:96562 python-gofer-proton 2.12.5-7.el7sat RHBA-2022:96562 python-qpid-proton 0.33.0-6.el7_9 RHBA-2022:96562 python2-beautifulsoup4 4.6.3-2.el7sat RHBA-2022:96562 python2-future 0.16.0-11.el7sat RHBA-2022:96562 python2-psutil 5.7.2-2.el7sat RHBA-2022:96562 python2-tracer 0.7.3-1.el7sat RHBA-2022:96562 qpid-proton-c 0.33.0-6.el7_9 RHBA-2022:96562 rubygem-foreman_scap_client 0.5.0-1.el7sat RHBA-2022:96562 tracer-common 0.7.3-1.el7sat RHBA-2022:96562 3.6. Red Hat Satellite Client 6 (for RHEL 7 for IBM Power) (RPMs) The following table outlines the packages included in the rhel-7-for-power-satellite-client-6-rpms repository. Table 3.6. Red Hat Satellite Client 6 (for RHEL 7 for IBM Power) (RPMs) Name Version Advisory gofer 2.12.5-7.el7sat RHBA-2022:96562 katello-agent 3.5.7-3.el7sat RHBA-2022:96562 katello-host-tools 3.5.7-3.el7sat RHBA-2022:96562 katello-host-tools-fact-plugin 3.5.7-3.el7sat RHBA-2022:96562 katello-host-tools-tracer 3.5.7-3.el7sat RHBA-2022:96562 python-argcomplete 1.7.0-2.el7sat RHBA-2022:96562 python-gofer 2.12.5-7.el7sat RHBA-2022:96562 python-gofer-proton 2.12.5-7.el7sat RHBA-2022:96562 python-qpid-proton 0.33.0-6.el7_9 RHBA-2022:96562 python2-beautifulsoup4 4.6.3-2.el7sat RHBA-2022:96562 python2-future 0.16.0-11.el7sat RHBA-2022:96562 python2-psutil 5.7.2-2.el7sat RHBA-2022:96562 python2-tracer 0.7.3-1.el7sat RHBA-2022:96562 qpid-proton-c 0.33.0-6.el7_9 RHBA-2022:96562 rubygem-foreman_scap_client 0.5.0-1.el7sat RHBA-2022:96562 tracer-common 0.7.3-1.el7sat RHBA-2022:96562 3.7. Red Hat Satellite Client 6 (for RHEL 7 for System Z) (RPMs) The following table outlines the packages included in the rhel-7-for-system-z-satellite-client-6-rpms repository. Table 3.7. 
Red Hat Satellite Client 6 (for RHEL 7 for System Z) (RPMs) Name Version Advisory gofer 2.12.5-7.el7sat RHBA-2022:96562 katello-agent 3.5.7-3.el7sat RHBA-2022:96562 katello-host-tools 3.5.7-3.el7sat RHBA-2022:96562 katello-host-tools-fact-plugin 3.5.7-3.el7sat RHBA-2022:96562 katello-host-tools-tracer 3.5.7-3.el7sat RHBA-2022:96562 python-argcomplete 1.7.0-2.el7sat RHBA-2022:96562 python-gofer 2.12.5-7.el7sat RHBA-2022:96562 python-gofer-proton 2.12.5-7.el7sat RHBA-2022:96562 python-qpid-proton 0.33.0-6.el7_9 RHBA-2022:96562 python2-beautifulsoup4 4.6.3-2.el7sat RHBA-2022:96562 python2-future 0.16.0-11.el7sat RHBA-2022:96562 python2-psutil 5.7.2-2.el7sat RHBA-2022:96562 python2-tracer 0.7.3-1.el7sat RHBA-2022:96562 qpid-proton-c 0.33.0-6.el7_9 RHBA-2022:96562 rubygem-foreman_scap_client 0.5.0-1.el7sat RHBA-2022:96562 tracer-common 0.7.3-1.el7sat RHBA-2022:96562
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/package_manifest/sat-6-15-rhel7
23.12. CPU Models and Topology
23.12. CPU Models and Topology This section covers the requirements for CPU models. Note that every hypervisor has its own policy for which CPU features guest will see by default. The set of CPU features presented to the guest by KVM depends on the CPU model chosen in the guest virtual machine configuration. qemu32 and qemu64 are basic CPU models, but there are other models (with additional features) available. Each model and its topology is specified using the following elements from the domain XML: <cpu match='exact'> <model fallback='allow'>core2duo</model> <vendor>Intel</vendor> <topology sockets='1' cores='2' threads='1'/> <feature policy='disable' name='lahf_lm'/> </cpu> Figure 23.14. CPU model and topology example 1 <cpu mode='host-model'> <model fallback='forbid'/> <topology sockets='1' cores='2' threads='1'/> </cpu> Figure 23.15. CPU model and topology example 2 <cpu mode='host-passthrough'/> Figure 23.16. CPU model and topology example 3 In cases where no restrictions are to be put on the CPU model or its features, a simpler <cpu> element such as the following may be used: <cpu> <topology sockets='1' cores='2' threads='1'/> </cpu> Figure 23.17. CPU model and topology example 4 <cpu mode='custom'> <model>POWER8</model> </cpu> Figure 23.18. PPC64/PSeries CPU model example <cpu mode='host-passthrough'/> Figure 23.19. aarch64/virt CPU model example The components of this section of the domain XML are as follows: Table 23.8. CPU model and topology elements Element Description <cpu> This is the main container for describing guest virtual machine CPU requirements. <match> Specifies how the virtual CPU is provided to the guest virtual machine must match these requirements. The match attribute can be omitted if topology is the only element within <cpu> . Possible values for the match attribute are: minimum - the specified CPU model and features describes the minimum requested CPU. exact - the virtual CPU provided to the guest virtual machine will exactly match the specification. strict - the guest virtual machine will not be created unless the host physical machine CPU exactly matches the specification. Note that the match attribute can be omitted and will default to exact . <mode> This optional attribute may be used to make it easier to configure a guest virtual machine CPU to be as close to the host physical machine CPU as possible. Possible values for the mode attribute are: custom - Describes how the CPU is presented to the guest virtual machine. This is the default setting when the mode attribute is not specified. This mode makes it so that a persistent guest virtual machine will see the same hardware no matter what host physical machine the guest virtual machine is booted on. host-model - A shortcut to copying host physical machine CPU definition from the capabilities XML into the domain XML. As the CPU definition is copied just before starting a domain, the same XML can be used on different host physical machines while still providing the best guest virtual machine CPU each host physical machine supports. The match attribute and any feature elements cannot be used in this mode. For more information, see the libvirt upstream website . host-passthrough With this mode, the CPU visible to the guest virtual machine is exactly the same as the host physical machine CPU, including elements that cause errors within libvirt. 
The obvious downside of this mode is that the guest virtual machine environment cannot be reproduced on different hardware, and therefore this mode should be used with great caution. The model and feature elements are not allowed in this mode. <model> Specifies the CPU model requested by the guest virtual machine. The list of available CPU models and their definition can be found in the cpu_map.xml file installed in libvirt's data directory. If a hypervisor is unable to use the exact CPU model, libvirt automatically falls back to the closest model supported by the hypervisor while maintaining the list of CPU features. An optional fallback attribute can be used to forbid this behavior, in which case an attempt to start a domain requesting an unsupported CPU model will fail. Supported values for the fallback attribute are: allow (the default), and forbid . The optional vendor_id attribute can be used to set the vendor ID seen by the guest virtual machine. It must be exactly 12 characters long. If not set, the vendor ID of the host physical machine is used. Typical possible values are AuthenticAMD and GenuineIntel . <vendor> Specifies the CPU vendor requested by the guest virtual machine. If this element is missing, the guest virtual machine runs on a CPU matching the given features regardless of its vendor. The list of supported vendors can be found in cpu_map.xml . <topology> Specifies the requested topology of the virtual CPU provided to the guest virtual machine. Three non-zero values must be given for sockets, cores, and threads: the total number of CPU sockets, the number of cores per socket, and the number of threads per core, respectively. <feature> Can contain zero or more elements used to fine-tune features provided by the selected CPU model. The list of known feature names can be found in the cpu_map.xml file. The meaning of each feature element depends on its policy attribute, which has to be set to one of the following values: force - the feature is reported as supported by the virtual CPU, regardless of whether it is actually supported by the host physical machine CPU. require - guest virtual machine creation will fail unless the feature is supported by the host physical machine CPU. This is the default setting. optional - the feature is supported by the virtual CPU, but only if it is supported by the host physical machine CPU. disable - the feature is not supported by the virtual CPU. forbid - guest virtual machine creation will fail if the feature is supported by the host physical machine CPU. 23.12.1. Changing the Feature Set for a Specified CPU Although CPU models have an inherent feature set, the individual feature components can be allowed or forbidden on a feature-by-feature basis, allowing for a more individualized configuration of the CPU. Procedure 23.1. Enabling and disabling CPU features To begin, shut down the guest virtual machine. Open the guest virtual machine's configuration file by running the virsh edit [domain] command. Change the parameters within the <feature> or <model> element to include the attribute value 'allow' to force the feature to be allowed, or 'forbid' to deny support for the feature. <!-- original feature set --> <cpu mode='host-model'> <model fallback='allow'/> <topology sockets='1' cores='2' threads='1'/> </cpu> <!--changed feature set--> <cpu mode='host-model'> <model fallback='forbid'/> <topology sockets='1' cores='2' threads='1'/> </cpu> Figure 23.20.
Example for enabling or disabling CPU features <!--original feature set--> <cpu match='exact'> <model fallback='allow'>core2duo</model> <vendor>Intel</vendor> <topology sockets='1' cores='2' threads='1'/> <feature policy='disable' name='lahf_lm'/> </cpu> <!--changed feature set--> <cpu match='exact'> <model fallback='allow'>core2duo</model> <vendor>Intel</vendor> <topology sockets='1' cores='2' threads='1'/> <feature policy='enable' name='lahf_lm'/> </cpu> Figure 23.21. Example 2 for enabling or disabling CPU features When you have completed the changes, save the configuration file and start the guest virtual machine. 23.12.2. Guest Virtual Machine NUMA Topology Guest virtual machine NUMA topology can be specified using the <numa> element in the domain XML: <cpu> <numa> <cell cpus='0-3' memory='512000'/> <cell cpus='4-7' memory='512000'/> </numa> </cpu> ... Figure 23.22. Guest virtual machine NUMA topology Each cell element specifies a NUMA cell or a NUMA node. cpus specifies the CPU or range of CPUs that are part of the node. memory specifies the node memory in kibibytes (blocks of 1024 bytes). Each cell or node is assigned a cellid or nodeid in increasing order starting from 0.
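To tie the elements of Table 23.8 together, the following is a minimal sketch (not taken from a shipped configuration) that combines a named model, vendor, topology, a per-feature policy, and a guest NUMA layout in a single <cpu> definition. The model and feature names reuse those shown in the figures above and must exist in your cpu_map.xml; the topology and cell values are examples only.

<cpu match='exact'>
  <model fallback='allow'>core2duo</model>
  <vendor>Intel</vendor>
  <!-- 2 sockets x 2 cores x 1 thread = 4 vCPUs -->
  <topology sockets='2' cores='2' threads='1'/>
  <!-- require: domain creation fails unless the host CPU supports this feature -->
  <feature policy='require' name='lahf_lm'/>
  <!-- two guest NUMA cells of 512000 KiB each, covering the 4 vCPUs -->
  <numa>
    <cell cpus='0-1' memory='512000'/>
    <cell cpus='2-3' memory='512000'/>
  </numa>
</cpu>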
[ "<cpu match='exact'> <model fallback='allow'>core2duo</model> <vendor>Intel</vendor> <topology sockets='1' cores='2' threads='1'/> <feature policy='disable' name='lahf_lm'/> </cpu>", "<cpu mode='host-model'> <model fallback='forbid'/> <topology sockets='1' cores='2' threads='1'/> </cpu>", "<cpu mode='host-passthrough'/>", "<cpu> <topology sockets='1' cores='2' threads='1'/> </cpu>", "<cpu mode='custom'> <model>POWER8</model> </cpu>", "<cpu mode='host-passthrough'/>", "<!-- original feature set --> <cpu mode='host-model'> <model fallback='allow'/> <topology sockets='1' cores='2' threads='1'/> </cpu> <!--changed feature set--> <cpu mode='host-model'> <model fallback='forbid'/> <topology sockets='1' cores='2' threads='1'/> </cpu>", "<!--original feature set--> <cpu match='exact'> <model fallback='allow'>core2duo</model> <vendor>Intel</vendor> <topology sockets='1' cores='2' threads='1'/> <feature policy='disable' name='lahf_lm'/> </cpu> <!--changed feature set--> <cpu match='exact'> <model fallback='allow'>core2duo</model> <vendor>Intel</vendor> <topology sockets='1' cores='2' threads='1'/> <feature policy='enable' name='lahf_lm'/> </cpu>", "<cpu> <numa> <cell cpus='0-3' memory='512000'/> <cell cpus='4-7' memory='512000'/> </numa> </cpu>" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-Manipulating_the_domain_xml-CPU_model_and_topology
Chapter 1. Red Hat JBoss Web Server 6.0 Service Pack 5
Chapter 1. Red Hat JBoss Web Server 6.0 Service Pack 5 Welcome to the Red Hat JBoss Web Server version 6.0 Service Pack 5 release. Red Hat JBoss Web Server is a fully integrated and certified set of components for hosting Java web applications. It consists of an application server (Apache Tomcat servlet container) and the Apache Tomcat Native Library. JBoss Web Server includes the following key components: Apache Tomcat is a servlet container in accordance with the Java Servlet Specification. JBoss Web Server contains Apache Tomcat 10.1. The Apache Tomcat Native Library improves Tomcat scalability, performance, and integration with native server technologies. Tomcat-vault is an extension for the JBoss Web Server that is used for securely storing passwords and other sensitive information used by a JBoss Web Server. The mod_cluster library enables communication between JBoss Web Server and the Apache HTTP Server mod_proxy_cluster module. The mod_cluster library enables you to use the Apache HTTP Server as a load balancer for JBoss Web Server. For more information about configuring mod_cluster , or for information about installing and configuring alternative load balancers such as mod_jk and mod_proxy , see the Apache HTTP Server Connectors and Load Balancing Guide . Apache portable runtime (APR) is a runtime that provides an OpenSSL-based TLS implementation for the HTTP connectors. JBoss Web Server provides a distribution of APR for supported Windows platforms only. For Red Hat Enterprise Linux, you can use the APR package that the operating system provides. OpenSSL is a software library that implements the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols and includes a basic cryptographic library. JBoss Web Server provides a distribution of OpenSSL for supported Windows platforms only. For Red Hat Enterprise Linux, you can use the OpenSSL package that the operating system provides. This release of JBoss Web Server includes a moderate security update. This release of JBoss Web Server provides OpenShift images based on Red Hat Enterprise Linux 8. Note Red Hat distributes the JBoss Web Server 6.0 Service Pack 5 release in RPM packages only. This service pack release is not available in archive file format.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_web_server/6.0/html/red_hat_jboss_web_server_6.0_service_pack_5_release_notes/red_hat_jboss_web_server_6_0_service_pack_5
Chapter 3. Using SAML to secure applications and services
Chapter 3. Using SAML to secure applications and services This section describes how you can secure applications and services with SAML using either Red Hat Single Sign-On client adapters or generic SAML provider libraries. 3.1. Java adapters Red Hat Single Sign-On comes with a range of different adapters for Java application. Selecting the correct adapter depends on the target platform. 3.1.1. General Adapter Config Each SAML client adapter supported by Red Hat Single Sign-On can be configured by a simple XML text file. This is what one might look like: <keycloak-saml-adapter xmlns="urn:keycloak:saml:adapter" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:keycloak:saml:adapter https://www.keycloak.org/schema/keycloak_saml_adapter_1_10.xsd"> <SP entityID="http://localhost:8081/sales-post-sig/" sslPolicy="EXTERNAL" nameIDPolicyFormat="urn:oasis:names:tc:SAML:1.1:nameid-unspecified" logoutPage="/logout.jsp" forceAuthentication="false" isPassive="false" turnOffChangeSessionIdOnLogin="false" autodetectBearerOnly="false"> <Keys> <Key signing="true" > <KeyStore resource="/WEB-INF/keystore.jks" password="store123"> <PrivateKey alias="http://localhost:8080/sales-post-sig/" password="test123"/> <Certificate alias="http://localhost:8080/sales-post-sig/"/> </KeyStore> </Key> </Keys> <PrincipalNameMapping policy="FROM_NAME_ID"/> <RoleIdentifiers> <Attribute name="Role"/> </RoleIdentifiers> <RoleMappingsProvider id="properties-based-role-mapper"> <Property name="properties.resource.location" value="/WEB-INF/role-mappings.properties"/> </RoleMappingsProvider> <IDP entityID="idp" signaturesRequired="true"> <SingleSignOnService requestBinding="POST" bindingUrl="http://localhost:8081/auth/realms/demo/protocol/saml" /> <SingleLogoutService requestBinding="POST" responseBinding="POST" postBindingUrl="http://localhost:8081/auth/realms/demo/protocol/saml" redirectBindingUrl="http://localhost:8081/auth/realms/demo/protocol/saml" /> <Keys> <Key signing="true"> <KeyStore resource="/WEB-INF/keystore.jks" password="store123"> <Certificate alias="demo"/> </KeyStore> </Key> </Keys> </IDP> </SP> </keycloak-saml-adapter> Some of these configuration switches may be adapter specific and some are common across all adapters. For Java adapters you can use USD{... } enclosure as System property replacement. For example USD{jboss.server.config.dir} . 3.1.1.1. SP element Here is the explanation of the SP element attributes: <SP entityID="sp" sslPolicy="ssl" nameIDPolicyFormat="format" forceAuthentication="true" isPassive="false" keepDOMAssertion="true" autodetectBearerOnly="false"> ... </SP> entityID This is the identifier for this client. The IdP needs this value to determine who the client is that is communicating with it. This setting is REQUIRED . sslPolicy This is the SSL policy the adapter will enforce. Valid values are: ALL , EXTERNAL , and NONE . For ALL , all requests must come in via HTTPS. For EXTERNAL , only non-private IP addresses must come over the wire via HTTPS. For NONE , no requests are required to come over via HTTPS. This setting is OPTIONAL . Default value is EXTERNAL . nameIDPolicyFormat SAML clients can request a specific NameID Subject format. Fill in this value if you want a specific format. It must be a standard SAML format identifier: urn:oasis:names:tc:SAML:2.0:nameid-transient . This setting is OPTIONAL . By default, no special format is requested. 
forceAuthentication SAML clients can request that a user is re-authenticated even if they are already logged in at the IdP. Set this to true to enable. This setting is OPTIONAL . Default value is false . isPassive SAML clients can request that a user is never asked to authenticate even if they are not logged in at the IdP. Set this to true if you want this. Do not use together with forceAuthentication as they are opposite. This setting is OPTIONAL . Default value is false . turnOffChangeSessionIdOnLogin The session ID is changed by default on a successful login on some platforms to plug a security attack vector. Change this to true to disable this. It is recommended you do not turn it off. Default value is false . autodetectBearerOnly This should be set to true if your application serves both a web application and web services (for example SOAP or REST). It allows you to redirect unauthenticated users of the web application to the Red Hat Single Sign-On login page, but send an HTTP 401 status code to unauthenticated SOAP or REST clients instead as they would not understand a redirect to the login page. Red Hat Single Sign-On auto-detects SOAP or REST clients based on typical headers like X-Requested-With , SOAPAction or Accept . The default value is false . logoutPage This sets the page to display after logout. If the page is a full URL, such as http://web.example.com/logout.html , the user is redirected after logout to that page using the HTTP 302 status code. If a link without scheme part is specified, such as /logout.jsp , the page is displayed after logout, regardless of whether it lies in a protected area according to security-constraint declarations in web.xml , and the page is resolved relative to the deployment context root. keepDOMAssertion This attribute should be set to true to make the adapter store the DOM representation of the assertion in its original form inside the SamlPrincipal associated to the request. The assertion document can be retrieved using the method getAssertionDocument inside the principal. This is specially useful when re-playing a signed assertion. The returned document is the one that was generated parsing the SAML response received by the Red Hat Single Sign-On server. This setting is OPTIONAL and its default value is false (the document is not saved inside the principal). 3.1.1.2. Service Provider keys and key elements If the IdP requires that the client application (or SP) sign all of its requests and/or if the IdP will encrypt assertions, you must define the keys used to do this. For client-signed documents you must define both the private and public key or certificate that is used to sign documents. For encryption, you only have to define the private key that is used to decrypt it. There are two ways to describe your keys. They can be stored within a Java KeyStore or you can copy/paste the keys directly within keycloak-saml.xml in the PEM format. <Keys> <Key signing="true" > ... </Key> </Keys> The Key element has two optional attributes signing and encryption . When set to true these tell the adapter what the key will be used for. If both attributes are set to true, then the key will be used for both signing documents and decrypting encrypted assertions. You must set at least one of these attributes to true. 3.1.1.2.1. KeyStore element Within the Key element you can load your keys and certificates from a Java Keystore. This is declared within a KeyStore element. 
<Keys> <Key signing="true" > <KeyStore resource="/WEB-INF/keystore.jks" password="store123"> <PrivateKey alias="myPrivate" password="test123"/> <Certificate alias="myCertAlias"/> </KeyStore> </Key> </Keys> Here are the XML config attributes that are defined with the KeyStore element. file File path to the key store. This option is OPTIONAL . The file or resource attribute must be set. resource WAR resource path to the KeyStore. This is a path used in method call to ServletContext.getResourceAsStream(). This option is OPTIONAL . The file or resource attribute must be set. password The password of the KeyStore. This option is REQUIRED . If you are defining keys that the SP will use to sign document, you must also specify references to your private keys and certificates within the Java KeyStore. The PrivateKey and Certificate elements in the above example define an alias that points to the key or cert within the keystore. Keystores require an additional password to access private keys. In the PrivateKey element you must define this password within a password attribute. 3.1.1.2.2. Key PEMS Within the Key element you declare your keys and certificates directly using the sub elements PrivateKeyPem , PublicKeyPem , and CertificatePem . The values contained in these elements must conform to the PEM key format. You usually use this option if you are generating keys using openssl or similar command line tool. <Keys> <Key signing="true"> <PrivateKeyPem> 2341251234AB31234==231BB998311222423522334 </PrivateKeyPem> <CertificatePem> 211111341251234AB31234==231BB998311222423522334 </CertificatePem> </Key> </Keys> 3.1.1.3. SP PrincipalNameMapping element This element is optional. When creating a Java Principal object that you obtain from methods such as HttpServletRequest.getUserPrincipal() , you can define what name is returned by the Principal.getName() method. <SP ...> <PrincipalNameMapping policy="FROM_NAME_ID"/> </SP> <SP ...> <PrincipalNameMapping policy="FROM_ATTRIBUTE" attribute="email" /> </SP> The policy attribute defines the policy used to populate this value. The possible values for this attribute are: FROM_NAME_ID This policy just uses whatever the SAML subject value is. This is the default setting FROM_ATTRIBUTE This will pull the value from one of the attributes declared in the SAML assertion received from the server. You'll need to specify the name of the SAML assertion attribute to use within the attribute XML attribute. 3.1.1.4. RoleIdentifiers element The RoleIdentifiers element defines what SAML attributes within the assertion received from the user should be used as role identifiers within the Jakarta EE Security Context for the user. <RoleIdentifiers> <Attribute name="Role"/> <Attribute name="member"/> <Attribute name="memberOf"/> </RoleIdentifiers> By default Role attribute values are converted to Jakarta EE roles. Some IdPs send roles using a member or memberOf attribute assertion. You can define one or more Attribute elements to specify which SAML attributes must be converted into roles. 3.1.1.5. RoleMappingsProvider element The RoleMappingsProvider is an optional element that allows for the specification of the id and configuration of the org.keycloak.adapters.saml.RoleMappingsProvider SPI implementation that is to be used by the SAML adapter. When Red Hat Single Sign-On is used as the IDP, it is possible to use the built in role mappers to map any roles before adding them to the SAML assertion. 
However, the SAML adapters can be used to send SAML requests to third party IDPs and in this case it might be necessary to map the roles extracted from the assertion into a different set of roles as required by the SP. The RoleMappingsProvider SPI allows for the configuration of pluggable role mappers that can be used to perform the necessary mappings. The configuration of the provider looks as follows: ... <RoleIdentifiers> ... </RoleIdentifiers> <RoleMappingsProvider id="properties-based-role-mapper"> <Property name="properties.resource.location" value="/WEB-INF/role-mappings.properties"/> </RoleMappingsProvider> <IDP> ... </IDP> The id attribute identifies which of the installed providers is to be used. The Property sub-element can be used multiple times to specify configuration properties for the provider. 3.1.1.5.1. Properties Based role mappings provider Red Hat Single Sign-On includes a RoleMappingsProvider implementation that performs the role mappings using a properties file. This provider is identified by the id properties-based-role-mapper and is implemented by the org.keycloak.adapters.saml.PropertiesBasedRoleMapper class. This provider relies on two configuration properties that can be used to specify the location of the properties file that will be used. First, it checks if the properties.file.location property has been specified, using the configured value to locate the properties file in the filesystem. If the configured file is not located, the provider throws a RuntimeException . The following snippet shows an example of a provider using the properties.file.location option to load the roles.properties file from the /opt/mappers/ directory in the filesystem: <RoleMappingsProvider id="properties-based-role-mapper"> <Property name="properties.file.location" value="/opt/mappers/roles.properties"/> </RoleMappingsProvider> If the properties.file.location configuration has not been set, the provider checks the properties.resource.location property, using the configured value to load the properties file from the WAR resource. If this configuration property is also not present, the provider attempts to load the file from /WEB-INF/role-mappings.properties by default. Failure to load the file from the resource will result in the provider throwing a RuntimeException . The following snippet shows an example of a provider using the properties.resource.location to load the roles.properties file from the application's /WEB-INF/conf/ directory: <RoleMappingsProvider id="properties-based-role-mapper"> <Property name="properties.resource.location" value="/WEB-INF/conf/roles.properties"/> </RoleMappingsProvider> The properties file can contain both roles and principals as keys, and a list of zero or more roles separated by commas as values. When invoked, the implementation iterates through the set of roles that were extracted from the assertion and checks, for each role, if a mapping exists. If the role maps to an empty role, it is discarded. If it maps to a set of one or more different roles, then these roles are set in the result set. If no mapping is found for the role then it is included as is in the result set. Once the roles have been processed, the implementation checks if the principal extracted from the assertion contains an entry in the properties file. If a mapping for the principal exists, any roles listed as the value are added to the result set. This allows the assignment of extra roles to a principal. 
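To make the mapping algorithm described above easier to follow, here is a minimal, self-contained sketch of that logic written against plain java.util.Properties. It is an illustration of the documented behavior only, not the source of the PropertiesBasedRoleMapper class, and the class, method, and variable names are hypothetical.

import java.util.HashSet;
import java.util.List;
import java.util.Properties;
import java.util.Set;

// Illustration only: mimics the documented behavior of the
// properties-based role mappings provider; not the actual Keycloak class.
public class RoleMappingSketch {

    // Maps the roles extracted from the assertion for the given principal.
    static Set<String> map(String principal, Set<String> assertionRoles, Properties mappings) {
        Set<String> result = new HashSet<>();
        for (String role : assertionRoles) {
            String mapped = mappings.getProperty(role);
            if (mapped == null) {
                result.add(role);                      // no mapping: keep the role as is
            } else if (!mapped.trim().isEmpty()) {
                for (String r : mapped.split(",")) {   // maps to one or more roles
                    result.add(r.trim());
                }
            }
            // a role that maps to an empty value is discarded
        }
        // If the principal itself has an entry, the listed roles are added on top.
        String extra = mappings.getProperty(principal);
        if (extra != null && !extra.trim().isEmpty()) {
            for (String r : extra.split(",")) {
                result.add(r.trim());
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Properties mappings = new Properties();
        mappings.setProperty("roleA", "roleX,roleY"); // replaced by two roles
        mappings.setProperty("roleB", "");            // discarded
        mappings.setProperty("kc_user", "roleZ");     // extra role for the principal

        Set<String> roles = new HashSet<>(List.of("roleA", "roleB", "roleC"));
        // Prints the set {roleC, roleX, roleY, roleZ} in some order.
        System.out.println(map("kc_user", roles, mappings));
    }
}

The properties file walked through next produces exactly this result set.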
As an example, let's assume the provider has been configured with the following properties file: If the principal kc_user is extracted from the assertion with roles roleA , roleB and roleC , the final set of roles assigned to the principal will be roleC , roleX , roleY and roleZ because roleA is being mapped into both roleX and roleY , roleB was mapped into an empty role - thus being discarded, roleC is used as is and finally an additional role was added to the kc_user principal ( roleZ ). Note: to use spaces in role names for mappings, use unicode replacements for space. For example, incoming 'role A' would appear as: 3.1.1.5.2. Adding your own role mappings provider To add a custom role mappings provider one simply needs to implement the org.keycloak.adapters.saml.RoleMappingsProvider SPI. For more details see the SAML Role Mappings SPI section in Server Developer Guide . 3.1.1.6. IDP Element Everything in the IDP element describes the settings for the identity provider (authentication server) the SP is communicating with. <IDP entityID="idp" signaturesRequired="true" signatureAlgorithm="RSA_SHA1" signatureCanonicalizationMethod="http://www.w3.org/2001/10/xml-exc-c14n#"> ... </IDP> Here are the attribute config options you can specify within the IDP element declaration. entityID This is the issuer ID of the IDP. This setting is REQUIRED . signaturesRequired If set to true , the client adapter will sign every document it sends to the IDP. Also, the client will expect that the IDP will be signing any documents sent to it. This switch sets the default for all request and response types, but you will see later that you have some fine grain control over this. This setting is OPTIONAL and will default to false . signatureAlgorithm This is the signature algorithm that the IDP expects signed documents to use. Allowed values are: RSA_SHA1 , RSA_SHA256 , RSA_SHA512 , and DSA_SHA1 . This setting is OPTIONAL and defaults to RSA_SHA256 . signatureCanonicalizationMethod This is the signature canonicalization method that the IDP expects signed documents to use. This setting is OPTIONAL . The default value is http://www.w3.org/2001/10/xml-exc-c14n# and should be good for most IDPs. metadataUrl The URL used to retrieve the IDP metadata, currently this is only used to pick up signing and encryption keys periodically which allow cycling of these keys on the IDP without manual changes on the SP side. 3.1.1.7. IDP AllowedClockSkew sub element The AllowedClockSkew optional sub element defines the allowed clock skew between IDP and SP. The default value is 0. <AllowedClockSkew unit="MILLISECONDS">3500</AllowedClockSkew> unit It is possible to define the time unit attached to the value for this element. Allowed values are MICROSECONDS, MILLISECONDS, MINUTES, NANOSECONDS and SECONDS. This is OPTIONAL . The default value is SECONDS . 3.1.1.8. IDP SingleSignOnService sub element The SingleSignOnService sub element defines the login SAML endpoint of the IDP. The client adapter will send requests to the IDP formatted via the settings within this element when it wants to login. <SingleSignOnService signRequest="true" validateResponseSignature="true" requestBinding="post" bindingUrl="url"/> Here are the config attributes you can define on this element: signRequest Should the client sign authn requests? This setting is OPTIONAL . Defaults to whatever the IDP signaturesRequired element value is. 
validateResponseSignature Should the client expect the IDP to sign the assertion response document sent back from an authn request? This setting is OPTIONAL . Defaults to whatever the IDP signaturesRequired element value is. requestBinding This is the SAML binding type used for communicating with the IDP. This setting is OPTIONAL . The default value is POST , but you can set it to REDIRECT as well. responseBinding SAML allows the client to request what binding type it wants authn responses to use. The values of this can be POST or REDIRECT . This setting is OPTIONAL . The default is that the client will not request a specific binding type for responses. assertionConsumerServiceUrl URL of the assertion consumer service (ACS) where the IDP login service should send responses to. This setting is OPTIONAL . By default it is unset, relying on the configuration in the IdP. When set, it must end in /saml , for example http://sp.domain.com/my/endpoint/for/saml . The value of this property is sent in the AssertionConsumerServiceURL attribute of the SAML AuthnRequest message. This property is typically accompanied by the responseBinding attribute. bindingUrl This is the URL for the IDP login service that the client will send requests to. This setting is REQUIRED . 3.1.1.9. IDP SingleLogoutService sub element The SingleLogoutService sub element defines the logout SAML endpoint of the IDP. The client adapter will send requests to the IDP formatted via the settings within this element when it wants to log out. <SingleLogoutService validateRequestSignature="true" validateResponseSignature="true" signRequest="true" signResponse="true" requestBinding="redirect" responseBinding="post" postBindingUrl="posturl" redirectBindingUrl="redirecturl"> signRequest Should the client sign logout requests it makes to the IDP? This setting is OPTIONAL . Defaults to whatever the IDP signaturesRequired element value is. signResponse Should the client sign the logout responses it sends in reply to IDP requests? This setting is OPTIONAL . Defaults to whatever the IDP signaturesRequired element value is. validateRequestSignature Should the client expect signed logout request documents from the IDP? This setting is OPTIONAL . Defaults to whatever the IDP signaturesRequired element value is. validateResponseSignature Should the client expect signed logout response documents from the IDP? This setting is OPTIONAL . Defaults to whatever the IDP signaturesRequired element value is. requestBinding This is the SAML binding type used for communicating SAML requests to the IDP. This setting is OPTIONAL . The default value is POST , but you can set it to REDIRECT as well. responseBinding This is the SAML binding type used for communicating SAML responses to the IDP. The values of this can be POST or REDIRECT . This setting is OPTIONAL . The default value is POST , but you can set it to REDIRECT as well. postBindingUrl This is the URL for the IDP's logout service when using the POST binding. This setting is REQUIRED if using the POST binding. redirectBindingUrl This is the URL for the IDP's logout service when using the REDIRECT binding. This setting is REQUIRED if using the REDIRECT binding. 3.1.1.10. IDP Keys sub element The Keys sub element of IDP is only used to define the certificate or public key to use to verify documents signed by the IDP. It is defined in the same way as the SP's Keys element . But again, you only have to define one certificate or public key reference. 
Note that, if both IDP and SP are realized by Red Hat Single Sign-On server and adapter, respectively, there is no need to specify the keys for signature validation, see below. It is possible to configure the SP to obtain public keys for IDP signature validation from published certificates automatically, provided both SP and IDP are implemented by Red Hat Single Sign-On. This is done by removing all declarations of signature validation keys in the Keys sub element. If the Keys sub element would then remain empty, it can be omitted completely. The keys are then automatically obtained by the SP from the SAML descriptor, the location of which is derived from the SAML endpoint URL specified in the IDP SingleSignOnService sub element . The settings of the HTTP client that is used for SAML descriptor retrieval usually need no additional configuration; however, they can be adjusted in the IDP HttpClient sub element . It is also possible to specify multiple keys for signature verification. This is done by declaring multiple Key elements within the Keys sub element that have the signing attribute set to true . This is useful, for example, when the IDP signing keys are rotated: there is usually a transition period when new SAML protocol messages and assertions are signed with the new key, but those signed by the previous key should still be accepted. It is not possible to configure Red Hat Single Sign-On to both obtain the keys for signature verification automatically and define additional static signature verification keys. <IDP entityID="idp"> ... <Keys> <Key signing="true"> <KeyStore resource="/WEB-INF/keystore.jks" password="store123"> <Certificate alias="demo"/> </KeyStore> </Key> </Keys> </IDP> 3.1.1.11. IDP HttpClient sub element The HttpClient optional sub element defines the properties of the HTTP client used to automatically obtain certificates containing public keys for IDP signature verification via the SAML descriptor of the IDP when enabled . <HttpClient connectionPoolSize="10" disableTrustManager="false" allowAnyHostname="false" clientKeystore="classpath:keystore.jks" clientKeystorePassword="pwd" truststore="classpath:truststore.jks" truststorePassword="pwd" proxyUrl="http://proxy/" socketTimeout="5000" connectionTimeout="6000" connectionTtl="500" /> connectionPoolSize This config option defines how many connections to the Red Hat Single Sign-On server should be pooled. This is OPTIONAL . The default value is 10 . disableTrustManager If the Red Hat Single Sign-On server requires HTTPS and this config option is set to true you do not have to specify a truststore. This setting should only be used during development and never in production as it will disable verification of SSL certificates. This is OPTIONAL . The default value is false . allowAnyHostname If the Red Hat Single Sign-On server requires HTTPS and this config option is set to true the Red Hat Single Sign-On server's certificate is validated via the truststore, but host name validation is not done. This setting should only be used during development and never in production as it will partly disable verification of SSL certificates. This setting may be useful in test environments. This is OPTIONAL . The default value is false . truststore The value is the file path to a truststore file. If you prefix the path with classpath: , then the truststore will be obtained from the deployment's classpath instead. Used for outgoing HTTPS communications to the Red Hat Single Sign-On server. Clients making HTTPS requests need a way to verify the host of the server they are talking to. 
This is what the trustore does. The keystore contains one or more trusted host certificates or certificate authorities. You can create this truststore by extracting the public certificate of the Red Hat Single Sign-On server's SSL keystore. This is REQUIRED unless disableTrustManager is true . truststorePassword Password for the truststore. This is REQUIRED if truststore is set and the truststore requires a password. clientKeystore This is the file path to a keystore file. This keystore contains client certificate for two-way SSL when the adapter makes HTTPS requests to the Red Hat Single Sign-On server. This is OPTIONAL . clientKeystorePassword Password for the client keystore and for the client's key. This is REQUIRED if clientKeystore is set. proxyUrl URL to HTTP proxy to use for HTTP connections. This is OPTIONAL . socketTimeout Timeout for socket waiting for data after establishing the connection in milliseconds. Maximum time of inactivity between two data packets. A timeout value of zero is interpreted as an infinite timeout. A negative value is interpreted as undefined (system default if applicable). The default value is -1 . This is OPTIONAL . connectionTimeout Timeout for establishing the connection with the remote host in milliseconds. A timeout value of zero is interpreted as an infinite timeout. A negative value is interpreted as undefined (system default if applicable). The default value is -1 . This is OPTIONAL . connectionTtl Connection time-to-live for client in milliseconds. A value less than or equal to zero is interpreted as an infinite value. The default value is -1 . This is OPTIONAL . 3.1.2. JBoss EAP adapter To be able to secure WAR apps deployed on JBoss EAP, you must install and configure the Red Hat Single Sign-On SAML Adapter Subsystem. You then provide a keycloak config, /WEB-INF/keycloak-saml.xml file in your WAR and change the auth-method to KEYCLOAK-SAML within web.xml. You install the adapters by using a ZIP file or an RPM. Installing adapters from a ZIP file Installing JBoss EAP 7 Adapters from an RPM Installing JBoss EAP 6 Adapters from an RPM 3.1.3. Installing adapters from a ZIP file Each adapter is a separate download on the Red Hat Single Sign-On download site. Procedure Install the adapter that applies to your application server from the Downloads site. Install on JBoss EAP 7.x: Install on JBoss EAP 6.x: These ZIP files create new JBoss Modules specific to the JBoss EAP SAML Adapter within your JBoss EAP distribution. Use a CLI script to enable the Red Hat Single Sign-On SAML Subsystem within your app server's server configuration: domain.xml or standalone.xml . Start the server and run the script that applies to your application server. Use this command for JBoss EAP 7.1 or newer Note EAP supports OpenJDK 17 and Oracle JDK 17 since 7.4.CP7 and 7.4.CP8 respectively. Note that the new java version makes the elytron variant compulsory, so do not use the legacy adapter with JDK 17. Also, after running the adapter CLI file, execute the enable-elytron-se17.cli script provided by EAP. Both scripts are necessary to configure the elytron adapter and remove the incompatible EAP subsystems. For more details, see this Security Configuration Changes article. Use this command for JBoss EAP 7.0 and EAP 6.4 Note It is possible to use the legacy non-Elytron adapter on JBoss EAP 7.1 or newer as well, meaning you can use adapter-install-saml.cli even on those versions. However, we recommend to use the newer Elytron adapter. 
The script will add the extension, subsystem, and optional security-domain as described below. <server xmlns="urn:jboss:domain:1.4"> <extensions> <extension module="org.keycloak.keycloak-saml-adapter-subsystem"/> ... </extensions> <profile> <subsystem xmlns="urn:jboss:domain:keycloak-saml:1.1"/> ... </profile> The keycloak security domain should be used with EJBs and other components when you need the security context created in the secured web tier to be propagated to the EJBs (other EE component) you are invoking. Otherwise this configuration is optional. <server xmlns="urn:jboss:domain:1.4"> <subsystem xmlns="urn:jboss:domain:security:1.2"> <security-domains> ... <security-domain name="keycloak"> <authentication> <login-module code="org.keycloak.adapters.jboss.KeycloakLoginModule" flag="required"/> </authentication> </security-domain> </security-domains> The security context is propagated to the EJB tier automatically. 3.1.3.1. JBoss SSO JBoss EAP has built-in support for single sign-on for web applications deployed to the same JBoss EAP instance. This should not be enabled when using Red Hat Single Sign-On. 3.1.3.2. Setting SameSite value for JSESSIONID cookie Browsers are planning to set the default value for the SameSite attribute for cookies to Lax . This setting means that cookies will be sent to applications only if the request originates in the same domain. This behavior can affect the SAML POST binding which may become non-functional. To preserve full functionality of the SAML adapter, we recommend setting the SameSite value to None for the JSESSIONID cookie created by your container. Not doing so may result in resetting the container's session with each request to Red Hat Single Sign-On. Note To avoid setting the SameSite attribute to None , consider switching to the REDIRECT binding if it is acceptable, or to OIDC protocol where this workaround is not necessary. To set the SameSite value to None for the JSESSIONID cookie in Wildfly/EAP, add a file undertow-handlers.conf with the following content to the WEB-INF directory of your application. The support for this configuration is available in Wildfly from version 19.1.0. 3.1.4. Installing JBoss EAP 7 Adapters from an RPM Note With Red Hat Enterprise Linux 7, the term channel was replaced with the term repository. In these instructions only the term repository is used. Prerequisites You must subscribe to the JBoss EAP 7 repository before you can install the EAP 7 adapters from an RPM. Ensure that your Red Hat Enterprise Linux system is registered to your account using Red Hat Subscription Manager. For more information see the Red Hat Subscription Management documentation . If you are already subscribed to another JBoss EAP repository, you must unsubscribe from that repository first. For Red Hat Enterprise Linux 6, 7: Using Red Hat Subscription Manager, subscribe to the JBoss EAP 7.4 repository using the following command. Replace <RHEL_VERSION> with either 6 or 7 depending on your Red Hat Enterprise Linux version. USD sudo subscription-manager repos --enable=jb-eap-7-for-rhel-<RHEL_VERSION>-server-rpms For Red Hat Enterprise Linux 8: Using Red Hat Subscription Manager, subscribe to the JBoss EAP 7.4 repository using the following command: USD sudo subscription-manager repos --enable=jb-eap-7.4-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-baseos-rpms --enable=rhel-8-for-x86_64-appstream-rpms Procedure Install the EAP 7 adapters for SAML based on your version of Red Hat Enterprise Linux. 
Install on Red Hat Enterprise Linux 7: USD sudo yum install eap7-keycloak-saml-adapter-sso7_6 Install on Red Hat Enterprise Linux 8: USD sudo dnf install eap7-keycloak-adapter-sso7_6 Note The default EAP_HOME path for the RPM installation is /opt/rh/eap7/root/usr/share/wildfly. Run the installation script for the SAML module: USD USDEAP_HOME/bin/jboss-cli.sh -c --file=USDEAP_HOME/bin/adapter-install-saml.cli Your installation is complete. 3.1.5. Installing JBoss EAP 6 Adapters from an RPM Note With Red Hat Enterprise Linux 7, the term channel was replaced with the term repository. In these instructions only the term repository is used. Prerequisites You must subscribe to the JBoss EAP 6 repository before you can install the EAP 6 adapters from an RPM. Ensure that your Red Hat Enterprise Linux system is registered to your account using Red Hat Subscription Manager. For more information see the Red Hat Subscription Management documentation . If you are already subscribed to another JBoss EAP repository, you must unsubscribe from that repository first. Using Red Hat Subscription Manager, subscribe to the JBoss EAP 6 repository using the following command. Replace <RHEL_VERSION> with either 6 or 7 depending on your Red Hat Enterprise Linux version. USD sudo subscription-manager repos --enable=jb-eap-6-for-rhel-<RHEL_VERSION>-server-rpms Procedure Install the EAP 6 adapters for SAML using the following command: USD sudo yum install keycloak-saml-adapter-sso7_6-eap6 Note The default EAP_HOME path for the RPM installation is /opt/rh/eap6/root/usr/share/wildfly. Run the installation script for the SAML module: USD USDEAP_HOME/bin/jboss-cli.sh -c --file=USDEAP_HOME/bin/adapter-install-saml.cli Your installation is complete. 3.1.5.1. Securing a WAR This section describes how to secure a WAR directly by adding config and editing files within your WAR package. The first thing you must do is create a keycloak-saml.xml adapter config file within the WEB-INF directory of your WAR. The format of this config file is described in the General Adapter Config section. You must set the auth-method to KEYCLOAK-SAML in web.xml . You also have to use standard servlet security to specify role-based constraints on your URLs. Here's an example web.xml file: <web-app xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd" version="3.0"> <module-name>customer-portal</module-name> <security-constraint> <web-resource-collection> <web-resource-name>Admins</web-resource-name> <url-pattern>/admin/*</url-pattern> </web-resource-collection> <auth-constraint> <role-name>admin</role-name> </auth-constraint> <user-data-constraint> <transport-guarantee>CONFIDENTIAL</transport-guarantee> </user-data-constraint> </security-constraint> <security-constraint> <web-resource-collection> <web-resource-name>Customers</web-resource-name> <url-pattern>/customers/*</url-pattern> </web-resource-collection> <auth-constraint> <role-name>user</role-name> </auth-constraint> <user-data-constraint> <transport-guarantee>CONFIDENTIAL</transport-guarantee> </user-data-constraint> </security-constraint> <login-config> <auth-method>KEYCLOAK-SAML</auth-method> <realm-name>this is ignored currently</realm-name> </login-config> <security-role> <role-name>admin</role-name> </security-role> <security-role> <role-name>user</role-name> </security-role> </web-app> These are all standard servlet settings except for the auth-method setting. 
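Once the container has authenticated a request through the KEYCLOAK-SAML auth-method, the identity is available to application code through the standard Servlet API. The following minimal sketch shows a servlet that could live inside the WAR secured above; it assumes the javax.servlet API used by JBoss EAP, and the servlet class name and URL pattern are illustrative, not part of the adapter.

import java.io.IOException;
import java.security.Principal;

import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Illustrative servlet mapped under the protected /customers/* area of the example web.xml.
@WebServlet("/customers/whoami")
public class WhoAmIServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Populated by the container after a successful KEYCLOAK-SAML login.
        Principal principal = req.getUserPrincipal();

        resp.setContentType("text/plain");
        resp.getWriter().println("Logged in as: " + principal.getName());
        // Role membership comes from the SAML attributes listed in RoleIdentifiers.
        resp.getWriter().println("Has 'user' role: " + req.isUserInRole("user"));
        resp.getWriter().println("Has 'admin' role: " + req.isUserInRole("admin"));
    }
}

Because /customers/* is covered by a security-constraint, the container guarantees that getUserPrincipal() returns a non-null principal here; the Obtaining assertion attributes section later in this chapter shows how to typecast that principal to SamlPrincipal to read raw assertion attributes.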
3.1.5.2. Securing WARs using the Red Hat Single Sign-On SAML Subsystem You do not have to open a WAR to secure it with Red Hat Single Sign-On. Alternatively, you can externally secure it via the Red Hat Single Sign-On SAML Adapter Subsystem. While you don't have to specify KEYCLOAK-SAML as an auth-method , you still have to define the security-constraints in web.xml . You do not, however, have to create a WEB-INF/keycloak-saml.xml file. This metadata is instead defined within the XML in your server's domain.xml or standalone.xml subsystem configuration section. <extensions> <extension module="org.keycloak.keycloak-saml-adapter-subsystem"/> </extensions> <profile> <subsystem xmlns="urn:jboss:domain:keycloak-saml:1.1"> <secure-deployment name="WAR MODULE NAME.war"> <SP entityID="APPLICATION URL"> ... </SP> </secure-deployment> </subsystem> </profile> The secure-deployment name attribute identifies the WAR you want to secure. Its value is the module-name defined in web.xml with .war appended. The rest of the configuration uses the same XML syntax as keycloak-saml.xml configuration defined in General Adapter Config . An example configuration: <subsystem xmlns="urn:jboss:domain:keycloak-saml:1.1"> <secure-deployment name="saml-post-encryption.war"> <SP entityID="http://localhost:8080/sales-post-enc/" sslPolicy="EXTERNAL" nameIDPolicyFormat="urn:oasis:names:tc:SAML:1.1:nameid-unspecified" logoutPage="/logout.jsp" forceAuthentication="false"> <Keys> <Key signing="true" encryption="true"> <KeyStore resource="/WEB-INF/keystore.jks" password="store123"> <PrivateKey alias="http://localhost:8080/sales-post-enc/" password="test123"/> <Certificate alias="http://localhost:8080/sales-post-enc/"/> </KeyStore> </Key> </Keys> <PrincipalNameMapping policy="FROM_NAME_ID"/> <RoleIdentifiers> <Attribute name="Role"/> </RoleIdentifiers> <IDP entityID="idp"> <SingleSignOnService signRequest="true" validateResponseSignature="true" requestBinding="POST" bindingUrl="http://localhost:8080/auth/realms/saml-demo/protocol/saml"/> <SingleLogoutService validateRequestSignature="true" validateResponseSignature="true" signRequest="true" signResponse="true" requestBinding="POST" responseBinding="POST" postBindingUrl="http://localhost:8080/auth/realms/saml-demo/protocol/saml" redirectBindingUrl="http://localhost:8080/auth/realms/saml-demo/protocol/saml"/> <Keys> <Key signing="true" > <KeyStore resource="/WEB-INF/keystore.jks" password="store123"> <Certificate alias="saml-demo"/> </KeyStore> </Key> </Keys> </IDP> </SP> </secure-deployment> </subsystem> 3.1.6. Java Servlet filter adapter If you want to use SAML with a Java servlet application that doesn't have an adapter for that servlet platform, you can opt to use the servlet filter adapter that Red Hat Single Sign-On has. This adapter works a little differently than the other adapters. You still have to specify a /WEB-INF/keycloak-saml.xml file as defined in the General Adapter Config section, but you do not define security constraints in web.xml . Instead you define a filter mapping using the Red Hat Single Sign-On servlet filter adapter to secure the url patterns you want to secure. Note Backchannel logout works a bit differently than the standard adapters. Instead of invalidating the http session it instead marks the session ID as logged out. There's just no way of arbitrarily invalidating an http session based on a session ID. Warning Backchannel logout does not currently work when you have a clustered application that uses the SAML filter. 
<web-app xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd" version="3.0"> <module-name>customer-portal</module-name> <filter> <filter-name>Keycloak Filter</filter-name> <filter-class>org.keycloak.adapters.saml.servlet.SamlFilter</filter-class> </filter> <filter-mapping> <filter-name>Keycloak Filter</filter-name> <url-pattern>/*</url-pattern> </filter-mapping> </web-app> The Red Hat Single Sign-On filter has the same configuration parameters available as the other adapters except you must define them as filter init params instead of context params. You can define multiple filter mappings if you have various different secure and unsecure url patterns. Warning You must have a filter mapping that covers /saml . This mapping covers all server callbacks. When registering SPs with an IdP, you must register http[s]://hostname/{context-root}/saml as your Assert Consumer Service URL and Single Logout Service URL. To use this filter, include this maven artifact in your WAR poms: <dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-saml-servlet-filter-adapter</artifactId> <version>18.0.18.redhat-00001</version> </dependency> In order to use Multi Tenancy the keycloak.config.resolver parameter should be passed as a filter parameter. <filter> <filter-name>Keycloak Filter</filter-name> <filter-class>org.keycloak.adapters.saml.servlet.SamlFilter</filter-class> <init-param> <param-name>keycloak.config.resolver</param-name> <param-value>example.SamlMultiTenantResolver</param-value> </init-param> </filter> 3.1.7. Registering with an Identity Provider For each servlet-based adapter, the endpoint you register for the assert consumer service URL and single logout service must be the base URL of your servlet application with /saml appended to it, that is, https://example.com/contextPath/saml . 3.1.8. Logout There are multiple ways you can logout from a web application. For Jakarta EE servlet containers, you can call HttpServletRequest.logout() . For any other browser application, you can point the browser at any url of your web application that has a security constraint and pass in a query parameter GLO, i.e. http://myapp?GLO=true . This will log you out if you have an SSO session with your browser. 3.1.8.1. Logout in clustered environment Internally, the SAML adapter stores a mapping between the SAML session index, principal name (when known), and HTTP session ID. This mapping can be maintained in JBoss application server family (WildFly 10/11, EAP 6/7) across cluster for distributable applications. As a precondition, the HTTP sessions need to be distributed across cluster (i.e. application is marked with <distributable/> tag in application's web.xml ). To enable the functionality, add the following section to your /WEB_INF/web.xml file: For EAP 7, WildFly 10/11: <context-param> <param-name>keycloak.sessionIdMapperUpdater.classes</param-name> <param-value>org.keycloak.adapters.saml.wildfly.infinispan.InfinispanSessionCacheIdMapperUpdater</param-value> </context-param> For EAP 6: <context-param> <param-name>keycloak.sessionIdMapperUpdater.classes</param-name> <param-value>org.keycloak.adapters.saml.jbossweb.infinispan.InfinispanSessionCacheIdMapperUpdater</param-value> </context-param> If the session cache of the deployment is named deployment-cache , the cache used for SAML mapping will be named as deployment-cache .ssoCache . 
The name of the cache can be overridden by a context parameter keycloak.sessionIdMapperUpdater.infinispan.cacheName . The cache container containing the cache will be the same as the one containing the deployment session cache, but can be overridden by a context parameter keycloak.sessionIdMapperUpdater.infinispan.containerName . By default, the configuration of the SAML mapping cache will be derived from session cache. The configuration can be manually overridden in cache configuration section of the server just the same as other caches. Currently, to provide reliable service, it is recommended to use replicated cache for the SAML session cache. Using distributed cache may lead to results where the SAML logout request would land to a node with no access to SAML session index to HTTP session mapping which would lead to unsuccessful logout. 3.1.8.2. Logout in cross-site scenario The cross-site scenario only applies to WildFly 10 and higher, and EAP 7 and higher. Special handling is needed for handling sessions that span multiple data centers. Imagine the following scenario: Login requests are handled within cluster in data center 1. Admin issues logout request for a particular SAML session, the request lands in data center 2. The data center 2 has to log out all sessions that are present in data center 1 (and all other data centers that share HTTP sessions). To cover this case, the SAML session cache described above needs to be replicated not only within individual clusters but across all the data centers for example via standalone Infinispan/JDG server : A cache has to be added to the standalone Infinispan/JDG server. The cache from item has to be added as a remote store for the respective SAML session cache. Once remote store is found to be present on SAML session cache during deployment, it is watched for changes and the local SAML session cache is updated accordingly. 3.1.9. Obtaining assertion attributes After a successful SAML login, your application code may want to obtain attribute values passed with the SAML assertion. HttpServletRequest.getUserPrincipal() returns a Principal object that you can typecast into a Red Hat Single Sign-On specific class called org.keycloak.adapters.saml.SamlPrincipal . This object allows you to look at the raw assertion and also has convenience functions to look up attribute values. package org.keycloak.adapters.saml; public class SamlPrincipal implements Serializable, Principal { /** * Get full saml assertion * * @return */ public AssertionType getAssertion() { ... } /** * Get SAML subject sent in assertion * * @return */ public String getSamlSubject() { ... } /** * Subject nameID format * * @return */ public String getNameIDFormat() { ... } @Override public String getName() { ... } /** * Convenience function that gets Attribute value by attribute name * * @param name * @return */ public List<String> getAttributes(String name) { ... } /** * Convenience function that gets Attribute value by attribute friendly name * * @param friendlyName * @return */ public List<String> getFriendlyAttributes(String friendlyName) { ... } /** * Convenience function that gets first value of an attribute by attribute name * * @param name * @return */ public String getAttribute(String name) { ... } /** * Convenience function that gets first value of an attribute by attribute name * * * @param friendlyName * @return */ public String getFriendlyAttribute(String friendlyName) { ... 
} /** * Get set of all assertion attribute names * * @return */ public Set<String> getAttributeNames() { ... } /** * Get set of all assertion friendly attribute names * * @return */ public Set<String> getFriendlyNames() { ... } } 3.1.10. Error Handling Red Hat Single Sign-On has some error handling facilities for servlet based client adapters. When an error is encountered in authentication, the client adapter will call HttpServletResponse.sendError() . You can set up an error-page within your web.xml file to handle the error however you want. The client adapter can throw 400, 401, 403, and 500 errors. <error-page> <error-code>403</error-code> <location>/ErrorHandler</location> </error-page> The client adapter also sets an HttpServletRequest attribute that you can retrieve. The attribute name is org.keycloak.adapters.spi.AuthenticationError . Typecast this object to: org.keycloak.adapters.saml.SamlAuthenticationError . This class can tell you exactly what happened. If this attribute is not set, then the adapter was not responsible for the error code. public class SamlAuthenticationError implements AuthenticationError { public static enum Reason { EXTRACTION_FAILURE, INVALID_SIGNATURE, ERROR_STATUS } public Reason getReason() { return reason; } public StatusResponseType getStatus() { return status; } } 3.1.11. Troubleshooting The best way to troubleshoot problems is to turn on debugging for SAML in both the client adapter and Red Hat Single Sign-On Server. Using your logging framework, set the log level to DEBUG for the org.keycloak.saml package. Turning this on allows you to see the SAML requests and response documents being sent to and from the server. 3.1.12. Multi Tenancy SAML offers the same functionality as OIDC for Multi Tenancy , meaning that a single target application (WAR) can be secured with multiple Red Hat Single Sign-On realms. The realms can be located on the same Red Hat Single Sign-On instance or on different instances. To do this, the application must have multiple keycloak-saml.xml adapter configuration files. While you could have multiple instances of your WAR with different adapter configuration files deployed to different context-paths, this may be inconvenient and you may also want to select the realm based on something other than context-path. Red Hat Single Sign-On makes it possible to have a custom config resolver, so you can choose which adapter config is used for each request. In SAML, the configuration is only interesting in the login processing; once the user is logged in, the session is authenticated and it does not matter if the keycloak-saml.xml returned is different. For that reason, returning the same configuration for the same session is the correct way to go. To achieve this, create an implementation of org.keycloak.adapters.saml.SamlConfigResolver . 
The following example uses the Host header to locate the proper configuration and load it and the associated elements from the applications's Java classpath: package example; import java.io.InputStream; import org.keycloak.adapters.saml.SamlConfigResolver; import org.keycloak.adapters.saml.SamlDeployment; import org.keycloak.adapters.saml.config.parsers.DeploymentBuilder; import org.keycloak.adapters.saml.config.parsers.ResourceLoader; import org.keycloak.adapters.spi.HttpFacade; import org.keycloak.saml.common.exceptions.ParsingException; public class SamlMultiTenantResolver implements SamlConfigResolver { @Override public SamlDeployment resolve(HttpFacade.Request request) { String host = request.getHeader("Host"); String realm = null; if (host.contains("tenant1")) { realm = "tenant1"; } else if (host.contains("tenant2")) { realm = "tenant2"; } else { throw new IllegalStateException("Not able to guess the keycloak-saml.xml to load"); } InputStream is = getClass().getResourceAsStream("/" + realm + "-keycloak-saml.xml"); if (is == null) { throw new IllegalStateException("Not able to find the file /" + realm + "-keycloak-saml.xml"); } ResourceLoader loader = new ResourceLoader() { @Override public InputStream getResourceAsStream(String path) { return getClass().getResourceAsStream(path); } }; try { return new DeploymentBuilder().build(is, loader); } catch (ParsingException e) { throw new IllegalStateException("Cannot load SAML deployment", e); } } } You must also configure which SamlConfigResolver implementation to use with the keycloak.config.resolver context-param in your web.xml : <web-app> ... <context-param> <param-name>keycloak.config.resolver</param-name> <param-value>example.SamlMultiTenantResolver</param-value> </context-param> </web-app> 3.2. mod_auth_mellon Apache HTTPD Module The mod_auth_mellon module is an Apache HTTPD plugin for SAML. If your language/environment supports using Apache HTTPD as a proxy, then you can use mod_auth_mellon to secure your web application with SAML. For more details on this module see the mod_auth_mellon GitHub repo. To configure mod_auth_mellon you need: An Identity Provider (IdP) entity descriptor XML file, which describes the connection to Red Hat Single Sign-On or another SAML IdP An SP entity descriptor XML file, which describes the SAML connections and configuration for the application you are securing. A private key PEM file, which is a text file in the PEM format that defines the private key the application uses to sign documents. A certificate PEM file, which is a text file that defines the certificate for your application. mod_auth_mellon-specific Apache HTTPD module configuration. 3.2.1. Configuring mod_auth_mellon with Red Hat Single Sign-On There are two hosts involved: The host on which Red Hat Single Sign-On is running, which will be referred to as USDidp_host because Red Hat Single Sign-On is a SAML identity provider (IdP). The host on which the web application is running, which will be referred to as USDsp_host. In SAML an application using an IdP is called a service provider (SP). All of the following steps need to performed on USDsp_host with root privileges. 3.2.1.1. Installing the packages To install the necessary packages, you will need: Apache Web Server (httpd) Mellon SAML SP add-on module for Apache Tools to create X509 certificates To install the necessary packages, run this command: 3.2.1.2. 
Creating a configuration directory for Apache SAML It is advisable to keep configuration files related to Apache's use of SAML in one location. Create a new directory named saml2 located under the Apache configuration root /etc/httpd: 3.2.1.3. Configuring the Mellon Service Provider Configuration files for Apache add-on modules are located in the /etc/httpd/conf.d directory and have a file name extension of .conf. You need to create the /etc/httpd/conf.d/mellon.conf file and place Mellon's configuration directives in it. Mellon's configuration directives can roughly be broken down into two classes of information: Which URLs to protect with SAML authentication What SAML parameters will be used when a protected URL is referenced. Apache configuration directives typically follow a hierarchical tree structure in the URL space, which are known as locations. You need to specify one or more URL locations for Mellon to protect. You have flexibility in how you add the configuration parameters that apply to each location. You can either add all the necessary parameters to the location block or you can add Mellon parameters to a common location high up in the URL location hierarchy that specific protected locations inherit (or some combination of the two). Since it is common for an SP to operate in the same way no matter which location triggers SAML actions, the example configuration used here places common Mellon configuration directives in the root of the hierarchy and then specific locations to be protected by Mellon can be defined with minimal directives. This strategy avoids duplicating the same parameters for each protected location. This example has just one protected location: https://USDsp_host/private. To configure the Mellon service provider, perform the following procedure. Procedure Create the file /etc/httpd/conf.d/mellon.conf with this content: <Location / > MellonEnable info MellonEndpointPath /mellon/ MellonSPMetadataFile /etc/httpd/saml2/mellon_metadata.xml MellonSPPrivateKeyFile /etc/httpd/saml2/mellon.key MellonSPCertFile /etc/httpd/saml2/mellon.crt MellonIdPMetadataFile /etc/httpd/saml2/idp_metadata.xml </Location> <Location /private > AuthType Mellon MellonEnable auth Require valid-user </Location> Note Some of the files referenced in the code above are created in later steps. 3.2.2. Setting the SameSite value for the cookie used by mod_auth_mellon Browsers are planning to set the default value for the SameSite attribute for cookies to Lax . This setting means that cookies will be sent to applications only if the request originates in the same domain. This behavior can affect the SAML POST binding which may become non-functional. To preserve full functionality of the mod_auth_mellon module, we recommend setting the SameSite value to None for the cookie created by mod_auth_mellon . Not doing so may result in an inability to login using Red Hat Single Sign-On. To set the SameSite value to None , add the following configuration to <Location / > tag within your mellon.conf file. MellonSecureCookie On MellonCookieSameSite none The support for this configuration is available in the mod_auth_mellon module from version 0.16.0. 3.2.2.1. Creating the Service Provider metadata In SAML IdPs and SPs exchange SAML metadata, which is in XML format. The schema for the metadata is a standard, thus assuring participating SAML entities can consume each other's metadata. 
You need: Metadata for the IdP that the SP utilizes Metadata describing the SP provided to the IdP One of the components of SAML metadata is X509 certificates. These certificates are used for two purposes: Sign SAML messages so the receiving end can prove the message originated from the expected party. Encrypt the message during transport (seldom used because SAML messages typically occur on TLS-protected transports) You can use your own certificates if you already have a Certificate Authority (CA) or you can generate a self-signed certificate. For simplicity in this example a self-signed certificate is used. Because Mellon's SP metadata must reflect the capabilities of the installed version of mod_auth_mellon, must be valid SP metadata XML, and must contain an X509 certificate (whose creation can be obtuse unless you are familiar with X509 certificate generation) the most expedient way to produce the SP metadata is to use a tool included in the mod_auth_mellon package (mellon_create_metadata.sh). The generated metadata can always be edited later because it is a text file. The tool also creates your X509 key and certificate. SAML IdPs and SPs identify themselves using a unique name known as an EntityID. To use the Mellon metadata creation tool you need: The EntityID, which is typically the URL of the SP, and often the URL of the SP where the SP metadata can be retrieved The URL where SAML messages for the SP will be consumed, which Mellon calls the MellonEndPointPath. To create the SP metadata, perform the following procedure. Procedure Create a few helper shell variables: Invoke the Mellon metadata creation tool by running this command: Move the generated files to their destination (referenced in the /etc/httpd/conf.d/mellon.conf file created above): 3.2.2.2. Adding the Mellon Service Provider to the Red Hat Single Sign-On Identity Provider Assumption: The Red Hat Single Sign-On IdP has already been installed on the USDidp_host. Red Hat Single Sign-On supports multiple tenancy where all users, clients, and so on are grouped in what is called a realm. Each realm is independent of other realms. You can use an existing realm in your Red Hat Single Sign-On, but this example shows how to create a new realm called test_realm and use that realm. All these operations are performed using the Red Hat Single Sign-On Admin Console. You must have the admin username and password for USDidp_host to perform the following procedure. Procedure Open the Admin Console and log on by entering the admin username and password. After logging into the Admin Console, there will be an existing realm. When Red Hat Single Sign-On is first set up a root realm, master, is created by default. Any previously created realms are listed in the upper left corner of the Admin Console in a drop-down list. From the realm drop-down list select Add realm . In the Name field type test_realm and click Create . 3.2.2.2.1. Adding the Mellon Service Provider as a client of the realm In Red Hat Single Sign-On SAML SPs are known as clients. To add the SP we must be in the Clients section of the realm. Click the Clients menu item on the left and click Create in the upper right corner to create a new client. 3.2.2.2.2. Adding the Mellon SP client To add the Mellon SP client, perform the following procedure. Procedure Set the client protocol to SAML. From the Client Protocol drop down list, select saml . Provide the Mellon SP metadata file created above (/etc/httpd/saml2/mellon_metadata.xml). 
Depending on where your browser is running you might have to copy the SP metadata from USDsp_host to the machine on which your browser is running so the browser can find the file. Click Save . 3.2.2.2.3. Editing the Mellon SP client Use this procedure to set important client configuration parameters. Procedure Ensure "Force POST Binding" is On. Add paosResponse to the Valid Redirect URIs list: Copy the postResponse URL in "Valid Redirect URIs" and paste it into the empty add text fields just below the "+". Change "postResponse" to "paosResponse". (The paosResponse URL is needed for SAML ECP.) Click Save at the bottom. Many SAML SPs determine authorization based on a user's membership in a group. The Red Hat Single Sign-On IdP can manage user group information but it does not supply the user's groups unless the IdP is configured to supply it as a SAML attribute. Perform the following procedure to configure the IdP to supply the user's groups as as a SAML attribute. Procedure Click the Mappers tab of the client. In the upper right corner of the Mappers page, click Create . From the Mapper Type drop-down list select Group list . Set Name to "group list". Set the SAML attribute name to "groups". Click Save . The remaining steps are performed on USDsp_host. 3.2.2.2.4. Retrieving the Identity Provider metadata Now that you have created the realm on the IdP you need to retrieve the IdP metadata associated with it so the Mellon SP recognizes it. In the /etc/httpd/conf.d/mellon.conf file created previously, the MellonIdPMetadataFile is specified as /etc/httpd/saml2/idp_metadata.xml but until now that file has not existed on USDsp_host. Use this procedure to retrieve that file from the IdP. Procedure Use this command, substituting with the correct value for USDidp_host: Mellon is now fully configured. To run a syntax check for Apache configuration files, use this command: Note Configtest is equivalent to the -t argument to apachectl. If the configuration test shows any errors, correct them before proceeding. Restart the Apache server: You have now set up both Red Hat Single Sign-On as a SAML IdP in the test_realm and mod_auth_mellon as SAML SP protecting the URL USDsp_host/protected (and everything beneath it) by authenticating against the USDidp_host IdP.
[ "<keycloak-saml-adapter xmlns=\"urn:keycloak:saml:adapter\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"urn:keycloak:saml:adapter https://www.keycloak.org/schema/keycloak_saml_adapter_1_10.xsd\"> <SP entityID=\"http://localhost:8081/sales-post-sig/\" sslPolicy=\"EXTERNAL\" nameIDPolicyFormat=\"urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified\" logoutPage=\"/logout.jsp\" forceAuthentication=\"false\" isPassive=\"false\" turnOffChangeSessionIdOnLogin=\"false\" autodetectBearerOnly=\"false\"> <Keys> <Key signing=\"true\" > <KeyStore resource=\"/WEB-INF/keystore.jks\" password=\"store123\"> <PrivateKey alias=\"http://localhost:8080/sales-post-sig/\" password=\"test123\"/> <Certificate alias=\"http://localhost:8080/sales-post-sig/\"/> </KeyStore> </Key> </Keys> <PrincipalNameMapping policy=\"FROM_NAME_ID\"/> <RoleIdentifiers> <Attribute name=\"Role\"/> </RoleIdentifiers> <RoleMappingsProvider id=\"properties-based-role-mapper\"> <Property name=\"properties.resource.location\" value=\"/WEB-INF/role-mappings.properties\"/> </RoleMappingsProvider> <IDP entityID=\"idp\" signaturesRequired=\"true\"> <SingleSignOnService requestBinding=\"POST\" bindingUrl=\"http://localhost:8081/auth/realms/demo/protocol/saml\" /> <SingleLogoutService requestBinding=\"POST\" responseBinding=\"POST\" postBindingUrl=\"http://localhost:8081/auth/realms/demo/protocol/saml\" redirectBindingUrl=\"http://localhost:8081/auth/realms/demo/protocol/saml\" /> <Keys> <Key signing=\"true\"> <KeyStore resource=\"/WEB-INF/keystore.jks\" password=\"store123\"> <Certificate alias=\"demo\"/> </KeyStore> </Key> </Keys> </IDP> </SP> </keycloak-saml-adapter>", "<SP entityID=\"sp\" sslPolicy=\"ssl\" nameIDPolicyFormat=\"format\" forceAuthentication=\"true\" isPassive=\"false\" keepDOMAssertion=\"true\" autodetectBearerOnly=\"false\"> </SP>", "<Keys> <Key signing=\"true\" > </Key> </Keys>", "<Keys> <Key signing=\"true\" > <KeyStore resource=\"/WEB-INF/keystore.jks\" password=\"store123\"> <PrivateKey alias=\"myPrivate\" password=\"test123\"/> <Certificate alias=\"myCertAlias\"/> </KeyStore> </Key> </Keys>", "<Keys> <Key signing=\"true\"> <PrivateKeyPem> 2341251234AB31234==231BB998311222423522334 </PrivateKeyPem> <CertificatePem> 211111341251234AB31234==231BB998311222423522334 </CertificatePem> </Key> </Keys>", "<SP ...> <PrincipalNameMapping policy=\"FROM_NAME_ID\"/> </SP> <SP ...> <PrincipalNameMapping policy=\"FROM_ATTRIBUTE\" attribute=\"email\" /> </SP>", "<RoleIdentifiers> <Attribute name=\"Role\"/> <Attribute name=\"member\"/> <Attribute name=\"memberOf\"/> </RoleIdentifiers>", "<RoleIdentifiers> </RoleIdentifiers> <RoleMappingsProvider id=\"properties-based-role-mapper\"> <Property name=\"properties.resource.location\" value=\"/WEB-INF/role-mappings.properties\"/> </RoleMappingsProvider> <IDP> </IDP>", "<RoleMappingsProvider id=\"properties-based-role-mapper\"> <Property name=\"properties.file.location\" value=\"/opt/mappers/roles.properties\"/> </RoleMappingsProvider>", "<RoleMappingsProvider id=\"properties-based-role-mapper\"> <Property name=\"properties.resource.location\" value=\"/WEB-INF/conf/roles.properties\"/> </RoleMappingsProvider>", "roleA=roleX,roleY roleB= kc_user=roleZ", "role\\u0020A=roleX,roleY", "<IDP entityID=\"idp\" signaturesRequired=\"true\" signatureAlgorithm=\"RSA_SHA1\" signatureCanonicalizationMethod=\"http://www.w3.org/2001/10/xml-exc-c14n#\"> </IDP>", "<AllowedClockSkew unit=\"MILLISECONDS\">3500</AllowedClockSkew>", "<SingleSignOnService signRequest=\"true\" 
validateResponseSignature=\"true\" requestBinding=\"post\" bindingUrl=\"url\"/>", "<SingleLogoutService validateRequestSignature=\"true\" validateResponseSignature=\"true\" signRequest=\"true\" signResponse=\"true\" requestBinding=\"redirect\" responseBinding=\"post\" postBindingUrl=\"posturl\" redirectBindingUrl=\"redirecturl\">", "<IDP entityID=\"idp\"> <Keys> <Key signing=\"true\"> <KeyStore resource=\"/WEB-INF/keystore.jks\" password=\"store123\"> <Certificate alias=\"demo\"/> </KeyStore> </Key> </Keys> </IDP>", "<HttpClient connectionPoolSize=\"10\" disableTrustManager=\"false\" allowAnyHostname=\"false\" clientKeystore=\"classpath:keystore.jks\" clientKeystorePassword=\"pwd\" truststore=\"classpath:truststore.jks\" truststorePassword=\"pwd\" proxyUrl=\"http://proxy/\" socketTimeout=\"5000\" connectionTimeout=\"6000\" connectionTtl=\"500\" />", "cd USDEAP_HOME unzip rh-sso-saml-eap7-adapter.zip", "cd USDEAP_HOME unzip rh-sso-saml-eap6-adapter.zip", "cd USDJBOSS_HOME ./bin/jboss-cli.sh -c --file=bin/adapter-elytron-install-saml.cli", "cd USDJBOSS_HOME ./bin/jboss-cli.sh -c --file=bin/adapter-install-saml.cli", "<server xmlns=\"urn:jboss:domain:1.4\"> <extensions> <extension module=\"org.keycloak.keycloak-saml-adapter-subsystem\"/> </extensions> <profile> <subsystem xmlns=\"urn:jboss:domain:keycloak-saml:1.1\"/> </profile>", "<server xmlns=\"urn:jboss:domain:1.4\"> <subsystem xmlns=\"urn:jboss:domain:security:1.2\"> <security-domains> <security-domain name=\"keycloak\"> <authentication> <login-module code=\"org.keycloak.adapters.jboss.KeycloakLoginModule\" flag=\"required\"/> </authentication> </security-domain> </security-domains>", "samesite-cookie(mode=None, cookie-pattern=JSESSIONID)", "sudo subscription-manager repos --enable=jb-eap-7-for-rhel-<RHEL_VERSION>-server-rpms", "sudo subscription-manager repos --enable=jb-eap-7.4-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-baseos-rpms --enable=rhel-8-for-x86_64-appstream-rpms", "sudo yum install eap7-keycloak-saml-adapter-sso7_6", "sudo dnf install eap7-keycloak-adapter-sso7_6", "USDEAP_HOME/bin/jboss-cli.sh -c --file=USDEAP_HOME/bin/adapter-install-saml.cli", "sudo subscription-manager repos --enable=jb-eap-6-for-rhel-<RHEL_VERSION>-server-rpms", "sudo yum install keycloak-saml-adapter-sso7_6-eap6", "USDEAP_HOME/bin/jboss-cli.sh -c --file=USDEAP_HOME/bin/adapter-install-saml.cli", "<web-app xmlns=\"http://java.sun.com/xml/ns/javaee\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd\" version=\"3.0\"> <module-name>customer-portal</module-name> <security-constraint> <web-resource-collection> <web-resource-name>Admins</web-resource-name> <url-pattern>/admin/*</url-pattern> </web-resource-collection> <auth-constraint> <role-name>admin</role-name> </auth-constraint> <user-data-constraint> <transport-guarantee>CONFIDENTIAL</transport-guarantee> </user-data-constraint> </security-constraint> <security-constraint> <web-resource-collection> <web-resource-name>Customers</web-resource-name> <url-pattern>/customers/*</url-pattern> </web-resource-collection> <auth-constraint> <role-name>user</role-name> </auth-constraint> <user-data-constraint> <transport-guarantee>CONFIDENTIAL</transport-guarantee> </user-data-constraint> </security-constraint> <login-config> <auth-method>KEYCLOAK-SAML</auth-method> <realm-name>this is ignored currently</realm-name> </login-config> <security-role> <role-name>admin</role-name> </security-role> 
<security-role> <role-name>user</role-name> </security-role> </web-app>", "<extensions> <extension module=\"org.keycloak.keycloak-saml-adapter-subsystem\"/> </extensions> <profile> <subsystem xmlns=\"urn:jboss:domain:keycloak-saml:1.1\"> <secure-deployment name=\"WAR MODULE NAME.war\"> <SP entityID=\"APPLICATION URL\"> </SP> </secure-deployment> </subsystem> </profile>", "<subsystem xmlns=\"urn:jboss:domain:keycloak-saml:1.1\"> <secure-deployment name=\"saml-post-encryption.war\"> <SP entityID=\"http://localhost:8080/sales-post-enc/\" sslPolicy=\"EXTERNAL\" nameIDPolicyFormat=\"urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified\" logoutPage=\"/logout.jsp\" forceAuthentication=\"false\"> <Keys> <Key signing=\"true\" encryption=\"true\"> <KeyStore resource=\"/WEB-INF/keystore.jks\" password=\"store123\"> <PrivateKey alias=\"http://localhost:8080/sales-post-enc/\" password=\"test123\"/> <Certificate alias=\"http://localhost:8080/sales-post-enc/\"/> </KeyStore> </Key> </Keys> <PrincipalNameMapping policy=\"FROM_NAME_ID\"/> <RoleIdentifiers> <Attribute name=\"Role\"/> </RoleIdentifiers> <IDP entityID=\"idp\"> <SingleSignOnService signRequest=\"true\" validateResponseSignature=\"true\" requestBinding=\"POST\" bindingUrl=\"http://localhost:8080/auth/realms/saml-demo/protocol/saml\"/> <SingleLogoutService validateRequestSignature=\"true\" validateResponseSignature=\"true\" signRequest=\"true\" signResponse=\"true\" requestBinding=\"POST\" responseBinding=\"POST\" postBindingUrl=\"http://localhost:8080/auth/realms/saml-demo/protocol/saml\" redirectBindingUrl=\"http://localhost:8080/auth/realms/saml-demo/protocol/saml\"/> <Keys> <Key signing=\"true\" > <KeyStore resource=\"/WEB-INF/keystore.jks\" password=\"store123\"> <Certificate alias=\"saml-demo\"/> </KeyStore> </Key> </Keys> </IDP> </SP> </secure-deployment> </subsystem>", "<web-app xmlns=\"http://java.sun.com/xml/ns/javaee\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd\" version=\"3.0\"> <module-name>customer-portal</module-name> <filter> <filter-name>Keycloak Filter</filter-name> <filter-class>org.keycloak.adapters.saml.servlet.SamlFilter</filter-class> </filter> <filter-mapping> <filter-name>Keycloak Filter</filter-name> <url-pattern>/*</url-pattern> </filter-mapping> </web-app>", "<dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-saml-servlet-filter-adapter</artifactId> <version>18.0.18.redhat-00001</version> </dependency>", "<filter> <filter-name>Keycloak Filter</filter-name> <filter-class>org.keycloak.adapters.saml.servlet.SamlFilter</filter-class> <init-param> <param-name>keycloak.config.resolver</param-name> <param-value>example.SamlMultiTenantResolver</param-value> </init-param> </filter>", "<context-param> <param-name>keycloak.sessionIdMapperUpdater.classes</param-name> <param-value>org.keycloak.adapters.saml.wildfly.infinispan.InfinispanSessionCacheIdMapperUpdater</param-value> </context-param>", "<context-param> <param-name>keycloak.sessionIdMapperUpdater.classes</param-name> <param-value>org.keycloak.adapters.saml.jbossweb.infinispan.InfinispanSessionCacheIdMapperUpdater</param-value> </context-param>", "package org.keycloak.adapters.saml; public class SamlPrincipal implements Serializable, Principal { /** * Get full saml assertion * * @return */ public AssertionType getAssertion() { } /** * Get SAML subject sent in assertion * * @return */ public String getSamlSubject() { } /** * Subject nameID 
format * * @return */ public String getNameIDFormat() { } @Override public String getName() { } /** * Convenience function that gets Attribute value by attribute name * * @param name * @return */ public List<String> getAttributes(String name) { } /** * Convenience function that gets Attribute value by attribute friendly name * * @param friendlyName * @return */ public List<String> getFriendlyAttributes(String friendlyName) { } /** * Convenience function that gets first value of an attribute by attribute name * * @param name * @return */ public String getAttribute(String name) { } /** * Convenience function that gets first value of an attribute by attribute name * * * @param friendlyName * @return */ public String getFriendlyAttribute(String friendlyName) { } /** * Get set of all assertion attribute names * * @return */ public Set<String> getAttributeNames() { } /** * Get set of all assertion friendly attribute names * * @return */ public Set<String> getFriendlyNames() { } }", "<error-page> <error-code>403</error-code> <location>/ErrorHandler</location> </error-page>", "public class SamlAuthenticationError implements AuthenticationError { public static enum Reason { EXTRACTION_FAILURE, INVALID_SIGNATURE, ERROR_STATUS } public Reason getReason() { return reason; } public StatusResponseType getStatus() { return status; } }", "package example; import java.io.InputStream; import org.keycloak.adapters.saml.SamlConfigResolver; import org.keycloak.adapters.saml.SamlDeployment; import org.keycloak.adapters.saml.config.parsers.DeploymentBuilder; import org.keycloak.adapters.saml.config.parsers.ResourceLoader; import org.keycloak.adapters.spi.HttpFacade; import org.keycloak.saml.common.exceptions.ParsingException; public class SamlMultiTenantResolver implements SamlConfigResolver { @Override public SamlDeployment resolve(HttpFacade.Request request) { String host = request.getHeader(\"Host\"); String realm = null; if (host.contains(\"tenant1\")) { realm = \"tenant1\"; } else if (host.contains(\"tenant2\")) { realm = \"tenant2\"; } else { throw new IllegalStateException(\"Not able to guess the keycloak-saml.xml to load\"); } InputStream is = getClass().getResourceAsStream(\"/\" + realm + \"-keycloak-saml.xml\"); if (is == null) { throw new IllegalStateException(\"Not able to find the file /\" + realm + \"-keycloak-saml.xml\"); } ResourceLoader loader = new ResourceLoader() { @Override public InputStream getResourceAsStream(String path) { return getClass().getResourceAsStream(path); } }; try { return new DeploymentBuilder().build(is, loader); } catch (ParsingException e) { throw new IllegalStateException(\"Cannot load SAML deployment\", e); } } }", "<web-app> <context-param> <param-name>keycloak.config.resolver</param-name> <param-value>example.SamlMultiTenantResolver</param-value> </context-param> </web-app>", "yum install httpd mod_auth_mellon mod_ssl openssl", "mkdir /etc/httpd/saml2", "<Location / > MellonEnable info MellonEndpointPath /mellon/ MellonSPMetadataFile /etc/httpd/saml2/mellon_metadata.xml MellonSPPrivateKeyFile /etc/httpd/saml2/mellon.key MellonSPCertFile /etc/httpd/saml2/mellon.crt MellonIdPMetadataFile /etc/httpd/saml2/idp_metadata.xml </Location> <Location /private > AuthType Mellon MellonEnable auth Require valid-user </Location>", "MellonSecureCookie On MellonCookieSameSite none", "fqdn=`hostname` mellon_endpoint_url=\"https://${fqdn}/mellon\" mellon_entity_id=\"${mellon_endpoint_url}/metadata\" file_prefix=\"$(echo \"$mellon_entity_id\" | sed 's/[^A-Za-z.]/_/g' | sed 
's/__*/_/g')\"", "/usr/libexec/mod_auth_mellon/mellon_create_metadata.sh $mellon_entity_id $mellon_endpoint_url", "mv ${file_prefix}.cert /etc/httpd/saml2/mellon.crt mv ${file_prefix}.key /etc/httpd/saml2/mellon.key mv ${file_prefix}.xml /etc/httpd/saml2/mellon_metadata.xml", "curl -k -o /etc/httpd/saml2/idp_metadata.xml https://$idp_host/auth/realms/test_realm/protocol/saml/descriptor", "apachectl configtest", "systemctl restart httpd.service" ]
https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.6/html/securing_applications_and_services_guide/using_saml_to_secure_applications_and_services
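Note: the SamlPrincipal API listed in the commands above is normally consumed from application code through HttpServletRequest.getUserPrincipal(). The following is a minimal sketch, not part of the product documentation: it assumes a servlet deployed under the secured /customers/* constraint shown in the web.xml example, and that the identity provider sends an assertion attribute named "email"; the servlet name, URL pattern, and attribute name are illustrative assumptions.

    // Minimal sketch: reading SAML assertion data from the authenticated principal.
    // Assumes the deployment is secured with the KEYCLOAK-SAML auth-method and that
    // an "email" attribute is mapped into the assertion (illustrative, not normative).
    package example;

    import java.io.IOException;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    import org.keycloak.adapters.saml.SamlPrincipal;

    @WebServlet("/customers/whoami") // hypothetical path under the secured /customers/* pattern
    public class WhoAmIServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            // After SAML authentication the container principal is a SamlPrincipal.
            SamlPrincipal principal = (SamlPrincipal) req.getUserPrincipal();
            resp.setContentType("text/plain");
            resp.getWriter().println("Subject: " + principal.getSamlSubject());
            resp.getWriter().println("NameID format: " + principal.getNameIDFormat());
            // getAttribute returns the first value of the named assertion attribute, or null.
            resp.getWriter().println("Email: " + principal.getAttribute("email"));
            // All attribute names present in the assertion.
            resp.getWriter().println("Attributes: " + principal.getAttributeNames());
        }
    }

The same accessors (getAttributes, getFriendlyAttribute, getFriendlyNames) can be used when roles or profile data arrive as multi-valued or friendly-named attributes.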