Chapter 3. Installing Red Hat Ansible Automation Platform | Chapter 3. Installing Red Hat Ansible Automation Platform Ansible Automation Platform is a modular platform and you can deploy automation controller with other automation platform components, such as automation hub. For more information about the components provided with Ansible Automation Platform, see Red Hat Ansible Automation Platform components in the Red Hat Ansible Automation Platform Planning Guide. There are a number of supported installation scenarios for Red Hat Ansible Automation Platform. To install Red Hat Ansible Automation Platform, you must edit the inventory file parameters to specify your installation scenario using one of the following examples: Standalone automation controller with internal database Single automation controller with external (installer managed) database Single automation controller with external (customer provided) database Ansible Automation Platform with an external (installer managed) database Ansible Automation Platform with an external (customer provided) database Standalone automation hub with internal database Single automation hub with external (installer managed) database Single automation hub with external (customer provided) database LDAP configuration on private automation hub 3.1. Editing the Red Hat Ansible Automation Platform installer inventory file You can use the Red Hat Ansible Automation Platform installer inventory file to specify your installation scenario. Procedure Navigate to the installer: [RPM installed package] USD cd /opt/ansible-automation-platform/installer/ [bundled installer] USD cd ansible-automation-platform-setup-bundle-<latest-version> [online installer] USD cd ansible-automation-platform-setup-<latest-version> Open the inventory file with a text editor. Edit inventory file parameters to specify your installation scenario. Use one of the supported Installation scenario examples to update your inventory file. Additional resources For a comprehensive list of pre-defined variables used in Ansible installation inventory files, see Inventory file variables . 3.1.1. Inventory file examples based on installation scenarios Red Hat supports several installation scenarios for Ansible Automation Platform. Review the following examples and select those suitable for your preferred installation scenario. Important For Red Hat Ansible Automation Platform or automation hub: Add an automation hub host in the [automationhub] group. For internal databases: [database] cannot be used to point to another host in the Ansible Automation Platform cluster. The database host set to be installed needs to be a unique host. Do not install automation controller and automation hub on the same node for versions of Ansible Automation Platform in a production or customer environment. This can cause contention issues and heavy resource use. Provide a reachable IP address or fully qualified domain name (FQDN) for the [automationhub] and [automationcontroller] hosts to ensure users can sync and install content from automation hub from a different node. The FQDN must not contain either the - or the _ symbols, as it will not be processed correctly. Do not use localhost . Do not use special characters for pg_password . It can cause the setup to fail. Enter your Red Hat Registry Service Account credentials in registry_username and registry_password to link to the Red Hat container registry. 
The inventory file variables registry_username and registry_password are only required if a non-bundle installer is used. 3.1.1.1. Standalone automation controller with internal database Use this example to populate the inventory file to install Red Hat Ansible Automation Platform. This installation inventory file includes a single automation controller node with an internal database. [automationcontroller] controller.acme.org [all:vars] admin_password='<password>' pg_host='' pg_port='5432' pg_database='awx' pg_username='awx' pg_password='<password>' pg_sslmode='prefer' # set to 'verify-full' for client-side enforced SSL registry_url='registry.redhat.io' registry_username='<registry username>' registry_password='<registry password>' # SSL-related variables # If set, this will install a custom CA certificate to the system trust store. # custom_ca_cert=/path/to/ca.crt # Certificate and key to install in nginx for the web UI and API # web_server_ssl_cert=/path/to/tower.cert # web_server_ssl_key=/path/to/tower.key # Server-side SSL settings for PostgreSQL (when we are installing it). # postgres_use_ssl=False # postgres_ssl_cert=/path/to/pgsql.crt # postgres_ssl_key=/path/to/pgsql.key 3.1.1.2. Single automation controller with external (installer managed) database Use this example to populate the inventory file to install Red Hat Ansible Automation Platform. This installation inventory file includes a single automation controller node with an external database on a separate node. [automationcontroller] controller.acme.org [database] data.acme.org [all:vars] admin_password='<password>' pg_host='data.acme.org' pg_port='5432' pg_database='awx' pg_username='awx' pg_password='<password>' pg_sslmode='prefer' # set to 'verify-full' for client-side enforced SSL registry_url='registry.redhat.io' registry_username='<registry username>' registry_password='<registry password>' # SSL-related variables # If set, this will install a custom CA certificate to the system trust store. # custom_ca_cert=/path/to/ca.crt # Certificate and key to install in nginx for the web UI and API # web_server_ssl_cert=/path/to/tower.cert # web_server_ssl_key=/path/to/tower.key # Server-side SSL settings for PostgreSQL (when we are installing it). # postgres_use_ssl=False # postgres_ssl_cert=/path/to/pgsql.crt # postgres_ssl_key=/path/to/pgsql.key 3.1.1.3. Single automation controller with external (customer provided) database Use this example to populate the inventory file to install Red Hat Ansible Automation Platform. This installation inventory file includes a single automation controller node with an external database on a separate node that is not managed by the platform installer. Important This example does not have a host under the database group. This indicates to the installer that the database already exists, and is being managed elsewhere. [automationcontroller] controller.acme.org [database] [all:vars] admin_password='<password>' pg_host='data.acme.org' pg_port='5432' pg_database='awx' pg_username='awx' pg_password='<password>' pg_sslmode='prefer' # set to 'verify-full' for client-side enforced SSL registry_url='registry.redhat.io' registry_username='<registry username>' registry_password='<registry password>' # SSL-related variables # If set, this will install a custom CA certificate to the system trust store. 
# custom_ca_cert=/path/to/ca.crt # Certificate and key to install in nginx for the web UI and API # web_server_ssl_cert=/path/to/tower.cert # web_server_ssl_key=/path/to/tower.key # Server-side SSL settings for PostgreSQL (when we are installing it). # postgres_use_ssl=False # postgres_ssl_cert=/path/to/pgsql.crt # postgres_ssl_key=/path/to/pgsql.key 3.1.1.4. Ansible Automation Platform with an external (installer managed) database Use this example to populate the inventory file to install Ansible Automation Platform. This installation inventory file includes two automation controller nodes, two execution nodes, and automation hub with an external managed database. # Automation Controller Nodes # There are two valid node_types that can be assigned for this group. # A node_type=control implies that the node will only be able to run # project and inventory updates, but not regular jobs. # A node_type=hybrid will have the ability to run everything. # If you do not define the node_type, it defaults to hybrid. # # control.example node_type=control # hybrid.example node_type=hybrid # hybrid2.example <- this will default to hybrid [automationcontroller] controller1.acme.org node_type=control controller2.acme.org node_type=control # Execution Nodes # There are two valid node_types that can be assigned for this group. # A node_type=hop implies that the node will forward jobs to an execution node. # A node_type=execution implies that the node will be able to run jobs. # If you do not define the node_type, it defaults to execution. # # hop.example node_type=hop # execution.example node_type=execution # execution2.example <- this will default to execution [execution_nodes] execution1.acme.org node_type=execution execution2.acme.org node_type=execution [automationhub] automationhub.acme.org [database] data.acme.org [all:vars] admin_password='<password>' pg_host='data.acme.org' pg_port='5432' pg_database='awx' pg_username='awx' pg_password='<password>' pg_sslmode='prefer' # set to 'verify-full' for client-side enforced SSL registry_url='registry.redhat.io' registry_username='<registry username>' registry_password='<registry password>' # Receptor Configuration # receptor_listener_port=27199 # Automation Hub Configuration # automationhub_admin_password='<password>' automationhub_pg_host='data.acme.org' automationhub_pg_port='5432' automationhub_pg_database='automationhub' automationhub_pg_username='automationhub' automationhub_pg_password='<password>' automationhub_pg_sslmode='prefer' # The default install will deploy a TLS enabled Automation Hub. # If for some reason this is not the behavior wanted one can # disable TLS enabled deployment. # # automationhub_disable_https = False # The default install will generate self-signed certificates for the Automation # Hub service. If you are providing valid certificate via automationhub_ssl_cert # and automationhub_ssl_key, one should toggle that value to True. # # automationhub_ssl_validate_certs = False # SSL-related variables # If set, this will install a custom CA certificate to the system trust store. # custom_ca_cert=/path/to/ca.crt # Certificate and key to install in nginx for the web UI and API # web_server_ssl_cert=/path/to/tower.cert # web_server_ssl_key=/path/to/tower.key # Certificate and key to install in Automation Hub node # automationhub_ssl_cert=/path/to/automationhub.cert # automationhub_ssl_key=/path/to/automationhub.key # Server-side SSL settings for PostgreSQL (when we are installing it). 
# postgres_use_ssl=False # postgres_ssl_cert=/path/to/pgsql.crt # postgres_ssl_key=/path/to/pgsql.key 3.1.1.5. Ansible Automation Platform with an external (customer provided) database Use this example to populate the inventory file to install Red Hat Ansible Automation Platform. This installation inventory file includes one of each node type; control, hybrid, hop, and execution, and automation hub with an external managed database that is not managed by the platform installer. Important This example does not have a host under the database group. This indicates to the installer that the database already exists, and is being managed elsewhere. # Automation Controller Nodes # There are two valid node_types that can be assigned for this group. # A node_type=control implies that the node will only be able to run # project and inventory updates, but not regular jobs. # A node_type=hybrid will have the ability to run everything. # If you do not define the node_type, it defaults to hybrid. # # control.example node_type=control # hybrid.example node_type=hybrid # hybrid2.example <- this will default to hybrid [automationcontroller] hybrid1.acme.org node_type=hybrid controller1.acme.org node_type=control # Execution Nodes # There are two valid node_types that can be assigned for this group. # A node_type=hop implies that the node will forward jobs to an execution node. # A node_type=execution implies that the node will be able to run jobs. # If you do not define the node_type, it defaults to execution. # # hop.example node_type=hop # execution.example node_type=execution # execution2.example <- this will default to execution [execution_nodes] hop1.acme.org node_type=hop execution1.acme.org node_type=execution [automationhub] automationhub.acme.org [database] [all:vars] admin_password='<password>' pg_host='data.acme.org' pg_port='5432' pg_database='awx' pg_username='awx' pg_password='<password>' pg_sslmode='prefer' # set to 'verify-full' for client-side enforced SSL registry_url='registry.redhat.io' registry_username='<registry username>' registry_password='<registry password>' # Receptor Configuration # receptor_listener_port=27199 # Automation Hub Configuration # automationhub_admin_password='<password>' automationhub_pg_host='data.acme.org' automationhub_pg_port='5432' automationhub_pg_database='automationhub' automationhub_pg_username='automationhub' automationhub_pg_password='<password>' automationhub_pg_sslmode='prefer' # The default install will deploy a TLS enabled Automation Hub. # If for some reason this is not the behavior wanted one can # disable TLS enabled deployment. # # automationhub_disable_https = False # The default install will generate self-signed certificates for the Automation # Hub service. If you are providing valid certificate via automationhub_ssl_cert # and automationhub_ssl_key, one should toggle that value to True. # # automationhub_ssl_validate_certs = False # SSL-related variables # If set, this will install a custom CA certificate to the system trust store. # custom_ca_cert=/path/to/ca.crt # Certificate and key to install in nginx for the web UI and API # web_server_ssl_cert=/path/to/tower.cert # web_server_ssl_key=/path/to/tower.key # Certificate and key to install in Automation Hub node # automationhub_ssl_cert=/path/to/automationhub.cert # automationhub_ssl_key=/path/to/automationhub.key # Server-side SSL settings for PostgreSQL (when we are installing it). 
# postgres_use_ssl=False # postgres_ssl_cert=/path/to/pgsql.crt # postgres_ssl_key=/path/to/pgsql.key 3.1.1.6. Standalone automation hub with internal database Use this example to populate the inventory file to deploy a standalone instance of automation hub with an internal database. [automationcontroller] [automationhub] automationhub.acme.org ansible_connection=local [all:vars] registry_url='registry.redhat.io' registry_username='<registry username>' registry_password='<registry password>' automationhub_admin_password= <PASSWORD> automationhub_pg_host='' automationhub_pg_port='5432' automationhub_pg_database='automationhub' automationhub_pg_username='automationhub' automationhub_pg_password=<PASSWORD> automationhub_pg_sslmode='prefer' # The default install will deploy a TLS enabled Automation Hub. # If for some reason this is not the behavior wanted one can # disable TLS enabled deployment. # # automationhub_disable_https = False # The default install will generate self-signed certificates for the Automation # Hub service. If you are providing valid certificate via automationhub_ssl_cert # and automationhub_ssl_key, one should toggle that value to True. # # automationhub_ssl_validate_certs = False # SSL-related variables # If set, this will install a custom CA certificate to the system trust store. # custom_ca_cert=/path/to/ca.crt # Certificate and key to install in Automation Hub node # automationhub_ssl_cert=/path/to/automationhub.cert # automationhub_ssl_key=/path/to/automationhub.key 3.1.1.7. Single automation hub with external (installer managed) database Use this example to populate the inventory file to deploy a single instance of automation hub with an external (installer managed) database. [automationcontroller] [automationhub] automationhub.acme.org [database] data.acme.org [all:vars] registry_url='registry.redhat.io' registry_username='<registry username>' registry_password='<registry password>' automationhub_admin_password= <PASSWORD> automationhub_pg_host='data.acme.org' automationhub_pg_port='5432' automationhub_pg_database='automationhub' automationhub_pg_username='automationhub' automationhub_pg_password=<PASSWORD> automationhub_pg_sslmode='prefer' # The default install will deploy a TLS enabled Automation Hub. # If for some reason this is not the behavior wanted one can # disable TLS enabled deployment. # # automationhub_disable_https = False # The default install will generate self-signed certificates for the Automation # Hub service. If you are providing valid certificate via automationhub_ssl_cert # and automationhub_ssl_key, one should toggle that value to True. # # automationhub_ssl_validate_certs = False # SSL-related variables # If set, this will install a custom CA certificate to the system trust store. # custom_ca_cert=/path/to/ca.crt # Certificate and key to install in Automation Hub node # automationhub_ssl_cert=/path/to/automationhub.cert # automationhub_ssl_key=/path/to/automationhub.key 3.1.1.8. Single automation hub with external (customer provided) database Use this example to populate the inventory file to deploy a single instance of automation hub with an external database that is not managed by the platform installer. Important This example does not have a host under the database group. This indicates to the installer that the database already exists, and is being managed elsewhere. 
[automationcontroller] [automationhub] automationhub.acme.org [database] [all:vars] registry_url='registry.redhat.io' registry_username='<registry username>' registry_password='<registry password>' automationhub_admin_password= <PASSWORD> automationhub_pg_host='data.acme.org' automationhub_pg_port='5432' automationhub_pg_database='automationhub' automationhub_pg_username='automationhub' automationhub_pg_password=<PASSWORD> automationhub_pg_sslmode='prefer' # The default install will deploy a TLS enabled Automation Hub. # If for some reason this is not the behavior wanted one can # disable TLS enabled deployment. # # automationhub_disable_https = False # The default install will generate self-signed certificates for the Automation # Hub service. If you are providing valid certificate via automationhub_ssl_cert # and automationhub_ssl_key, one should toggle that value to True. # # automationhub_ssl_validate_certs = False # SSL-related variables # If set, this will install a custom CA certificate to the system trust store. # custom_ca_cert=/path/to/ca.crt # Certificate and key to install in Automation Hub node # automationhub_ssl_cert=/path/to/automationhub.cert # automationhub_ssl_key=/path/to/automationhub.key 3.1.1.9. LDAP configuration on private automation hub You must set the following six variables in your Red Hat Ansible Automation Platform installer inventory file to configure your private automation hub for LDAP authentication: automationhub_authentication_backend automationhub_ldap_server_uri automationhub_ldap_bind_dn automationhub_ldap_bind_password automationhub_ldap_user_search_base_dn automationhub_ldap_group_search_base_dn If any of these variables are missing, the Ansible Automation installer will not complete the installation. 3.1.1.9.1. Setting up your inventory file variables When you configure your private automation hub with LDAP authentication, you must set the proper variables in your inventory files during the installation process. Procedure Access your inventory file according to the procedure in Editing the Red Hat Ansible Automation Platform installer inventory file . Use the following example as a guide to set up your Ansible Automation Platform inventory file: automationhub_authentication_backend = "ldap" automationhub_ldap_server_uri = "ldap://ldap:389" (for LDAPs use automationhub_ldap_server_uri = "ldaps://ldap-server-fqdn") automationhub_ldap_bind_dn = "cn=admin,dc=ansible,dc=com" automationhub_ldap_bind_password = "GoodNewsEveryone" automationhub_ldap_user_search_base_dn = "ou=people,dc=ansible,dc=com" automationhub_ldap_group_search_base_dn = "ou=people,dc=ansible,dc=com" Note The following variables will be set with default values, unless you set them with other options. auth_ldap_user_search_scope= 'SUBTREE' auth_ldap_user_search_filter= '(uid=%(user)s)' auth_ldap_group_search_scope= 'SUBTREE' auth_ldap_group_search_filter= '(objectClass=Group)' auth_ldap_group_type_class= 'django_auth_ldap.config:GroupOfNamesType' Optional: Set up extra parameters in your private automation hub such as user groups, superuser access, or mirroring. Go to Configuring extra LDAP parameters to complete this optional step. 3.1.1.9.2. Configuring extra LDAP parameters If you plan to set up superuser access, user groups, mirroring or other extra parameters, you can create a YAML file that comprises them in your ldap_extra_settings dictionary. Procedure Create a YAML file that contains ldap_extra_settings . 
Example: #ldapextras.yml --- ldap_extra_settings: <LDAP_parameter>: <Values> ... Add any parameters that you require for your setup. The following examples describe the LDAP parameters that you can set in ldap_extra_settings : Use this example to set up a superuser flag based on membership in an LDAP group. #ldapextras.yml --- ldap_extra_settings: AUTH_LDAP_USER_FLAGS_BY_GROUP: {"is_superuser": "cn=pah-admins,ou=groups,dc=example,dc=com",} ... Use this example to set up superuser access. #ldapextras.yml --- ldap_extra_settings: AUTH_LDAP_USER_FLAGS_BY_GROUP: {"is_superuser": "cn=pah-admins,ou=groups,dc=example,dc=com",} ... Use this example to mirror all LDAP groups you belong to. #ldapextras.yml --- ldap_extra_settings: AUTH_LDAP_MIRROR_GROUPS: True ... Use this example to map LDAP user attributes (such as first name, last name, and email address of the user). #ldapextras.yml --- ldap_extra_settings: AUTH_LDAP_USER_ATTR_MAP: {"first_name": "givenName", "last_name": "sn", "email": "mail",} ... Use the following examples to grant or deny access based on LDAP group membership: To grant private automation hub access (for example, members of the cn=pah-nosoupforyou,ou=groups,dc=example,dc=com group): #ldapextras.yml --- ldap_extra_settings: AUTH_LDAP_REQUIRE_GROUP: 'cn=pah-nosoupforyou,ou=groups,dc=example,dc=com' ... To deny private automation hub access (for example, members of the cn=pah-nosoupforyou,ou=groups,dc=example,dc=com group): #ldapextras.yml --- ldap_extra_settings: AUTH_LDAP_DENY_GROUP: 'cn=pah-nosoupforyou,ou=groups,dc=example,dc=com' ... Use this example to enable LDAP debug logging. #ldapextras.yml --- ldap_extra_settings: GALAXY_LDAP_LOGGING: True ... Note If it is not practical to re-run setup.sh , or if you only need debug logging for a short time, you can add a line containing GALAXY_LDAP_LOGGING: True manually to the /etc/pulp/settings.py file on private automation hub. Restart both pulpcore-api.service and nginx.service for the changes to take effect. To avoid failures due to human error, use this method only when necessary. Use this example to configure LDAP caching by setting the variable AUTH_LDAP_CACHE_TIMEOUT . #ldapextras.yml --- ldap_extra_settings: AUTH_LDAP_CACHE_TIMEOUT: 3600 ... Run setup.sh -e @ldapextras.yml during private automation hub installation. Verification To verify that you have set up LDAP correctly, confirm that you can view all of your settings in the /etc/pulp/settings.py file on your private automation hub. 3.2. Running the Red Hat Ansible Automation Platform installer setup script After you update the inventory file with the required parameters for installing your private automation hub, run the installer setup script. Procedure Run the setup.sh script USD sudo ./setup.sh Installation of Red Hat Ansible Automation Platform will begin. 3.3. Verifying installation of automation controller Verify that you installed automation controller successfully by logging in with the admin credentials you inserted in the inventory file. Procedure Navigate to the IP address specified for the automation controller node in the inventory file. Log in with the Admin credentials you set in the inventory file. Note The automation controller server is accessible from port 80 ( https://<CONTROLLER_SERVER_NAME>/ ) but redirects to port 443, so port 443 must also be available. Important If the installation fails and you are a customer who has purchased a valid license for Red Hat Ansible Automation Platform, contact Ansible through the Red Hat Customer portal .
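If you want to confirm reachability from the command line before logging in, the following check is a minimal sketch: it assumes the default self-signed certificate (hence -k), uses the controller's unauthenticated /api/v2/ping/ status endpoint, and <CONTROLLER_SERVER_NAME> is a placeholder for your controller's FQDN or IP address.

# Confirm that port 80 redirects to port 443
curl -I http://<CONTROLLER_SERVER_NAME>/
# Query the unauthenticated status endpoint over HTTPS (-k skips certificate
# verification and is only appropriate with the default self-signed certificate)
curl -k https://<CONTROLLER_SERVER_NAME>/api/v2/ping/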
After a successful login to automation controller, your installation of Red Hat Ansible Automation Platform 2.3 is complete. 3.3.1. Additional automation controller configuration and resources See the following resources to explore additional automation controller configurations. Table 3.1. Resources to configure automation controller Resource link Description Automation Controller Quick Setup Guide Set up automation controller and run your first playbook Automation Controller Administration Guide Configure automation controller administration through custom scripts, management jobs, etc. Configuring proxy support for Red Hat Ansible Automation Platform Set up automation controller with a proxy server Managing usability analytics and data collection from automation controller Manage what automation controller information you share with Red Hat Automation Controller User Guide Review automation controller functionality in more detail 3.4. Verifying installation of automation hub Verify that you installed your automation hub successfully by logging in with the admin credentials you inserted into the inventory file. Procedure Navigate to the IP address specified for the automation hub node in the inventory file. Log in with the Admin credentials you set in the inventory file. Important If the installation fails and you are a customer who has purchased a valid license for Red Hat Ansible Automation Platform, contact Ansible through the Red Hat Customer portal . After a successful login to automation hub, your installation of Red Hat Ansible Automation Platform 2.3 is complete. 3.4.1. Additional automation hub configuration and resources See the following resources to explore additional automation hub configurations. Table 3.2. Resources to configure automation hub Resource link Description Managing user access in private automation hub Configure user access for automation hub Managing Red Hat Certified and Ansible Galaxy collections in automation hub Add content to your automation hub Publishing proprietary content collections in automation hub Publish internally developed collections on your automation hub 3.5. Post-installation steps Whether you are a new Ansible Automation Platform user looking to start automating, or an existing administrator looking to migrate old Ansible content to your latest installed version of Red Hat Ansible Automation Platform, explore the steps to begin leveraging the new features of Ansible Automation Platform 2.3: 3.5.1. Migrating data to Ansible Automation Platform 2.3 For platform administrators looking to complete an upgrade to Ansible Automation Platform 2.3, additional steps may be needed to migrate data to a new instance: 3.5.1.1. Migrating from legacy virtual environments (venvs) to automation execution environments Ansible Automation Platform 2.3 moves you away from custom Python virtual environments (venvs) in favor of automation execution environments - containerized images that package the components needed to execute and scale your Ansible automation. This includes Ansible Core, Ansible Content Collections, Python dependencies, Red Hat Enterprise Linux UBI 8, and any additional package dependencies. To migrate your venvs to execution environments, first use the awx-manage command to list and export the venvs from your original instance, then use ansible-builder to create execution environments.
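As a hedged sketch of that two-step flow (the venv path, image tag, and definition file name below are placeholders, and the exact awx-manage subcommands and ansible-builder flags should be confirmed against the Upgrading to Automation Execution Environments guide for your version):

# On the original controller node: discover custom venvs and what uses them
awx-manage list_custom_venvs
awx-manage custom_venv_associations /var/lib/awx/venv/my-custom-venv
# Export the venv's Python packages so they can seed an execution environment definition
awx-manage export_custom_venv /var/lib/awx/venv/my-custom-venv
# Build the execution environment image from a definition file you author
# (execution-environment.yml listing the required collections and Python packages)
ansible-builder build --tag my-ee:latest --file execution-environment.yml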
Additional resources Upgrading to Automation Execution Environments guide Creating and Consuming Execution Environments . 3.5.1.2. Migrating to Ansible Engine 2.9 images using Ansible Builder To migrate Ansible Engine 2.9 images for use with Ansible Automation Platform 2.3, the ansible-builder tool automates the process of rebuilding images (including its custom plugins and dependencies) for use with automation execution environments. Additional resources For more information on using Ansible Builder to build execution environments, see the Creating and Consuming Execution Environments . 3.5.1.3. Migrating to Ansible Core 2.13 When upgrading to Ansible Core 2.13, you need to update your playbooks, plugins, or other parts of your Ansible infrastructure in order to be supported by the latest version of Ansible Core. For instructions on updating your Ansible content for Ansible Core 2.13 compatibility, see the Ansible-core 2.13 Porting Guide . 3.5.2. Updating execution environment image locations If your private automation hub was installed separately, you can update your execution environment image locations to point to your private automation hub. Use this procedure to update your execution environment image locations. Procedure Navigate to the directory containing setup.sh Create ./group_vars/automationcontroller by running the following command: touch ./group_vars/automationcontroller Paste the following content into ./group_vars/automationcontroller , being sure to adjust the settings to fit your environment: # Automation Hub Registry registry_username: 'your-automation-hub-user' registry_password: 'your-automation-hub-password' registry_url: 'automationhub.example.org' registry_verify_ssl: False ## Execution Environments control_plane_execution_environment: 'automationhub.example.org/ee-supported-rhel8:latest' global_job_execution_environments: - name: "Default execution environment" image: "automationhub.example.org/ee-supported-rhel8:latest" - name: "Ansible Engine 2.9 execution environment" image: "automationhub.example.org/ee-29-rhel8:latest" - name: "Minimal execution environment" image: "automationhub.example.org/ee-minimal-rhel8:latest" Run the ./setup.sh script USD ./setup.sh Verification Log into Ansible Automation Platform as a user with system administrator access. Navigate to Administration Execution Environments . In the Image column, confirm that the execution environment image location has changed from the default value of <registry url>/ansible-automation-platform-<version>/<image name>:<tag> to <automation hub url>/<image name>:<tag> . 3.5.3. Scale up your automation using automation mesh The automation mesh component of the Red Hat Ansible Automation Platform simplifies the process of distributing automation across multi-site deployments. For enterprises with multiple isolated IT environments, automation mesh provides a consistent and reliable way to deploy and scale up automation across your execution nodes using a peer-to-peer mesh communication network. When upgrading from version 1.x to the latest version of the Ansible Automation Platform, you will need to migrate the data from your legacy isolated nodes into execution nodes necessary for automation mesh. You can implement automation mesh by planning out a network of hybrid and control nodes, then editing the inventory file found in the Ansible Automation Platform installer to assign mesh-related values to each of your execution nodes. 
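As an illustration only (the hostnames are placeholders, and the full set of mesh variables is documented in the automation mesh guide), a mesh-oriented inventory assigns a node_type to each host and uses the peers variable to describe which nodes connect to which:

[automationcontroller]
controller1.example.com node_type=control

[automationcontroller:vars]
peers=execution_nodes

[execution_nodes]
hop1.example.com node_type=hop
execution1.example.com node_type=execution peers=hop1.example.com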
For instructions on how to migrate from isolated nodes to execution nodes, see the Red Hat Ansible Automation Platform Upgrade and Migration Guide . For information about automation mesh and the various ways to design your automation mesh for your environment, see the Red Hat Ansible Automation Platform automation mesh guide . | [
"cd /opt/ansible-automation-platform/installer/",
"cd ansible-automation-platform-setup-bundle-<latest-version>",
"cd ansible-automation-platform-setup-<latest-version>",
"[automationcontroller] controller.acme.org [all:vars] admin_password='<password>' pg_host='' pg_port='5432' pg_database='awx' pg_username='awx' pg_password='<password>' pg_sslmode='prefer' # set to 'verify-full' for client-side enforced SSL registry_url='registry.redhat.io' registry_username='<registry username>' registry_password='<registry password>' SSL-related variables If set, this will install a custom CA certificate to the system trust store. custom_ca_cert=/path/to/ca.crt Certificate and key to install in nginx for the web UI and API web_server_ssl_cert=/path/to/tower.cert web_server_ssl_key=/path/to/tower.key Server-side SSL settings for PostgreSQL (when we are installing it). postgres_use_ssl=False postgres_ssl_cert=/path/to/pgsql.crt postgres_ssl_key=/path/to/pgsql.key",
"[automationcontroller] controller.acme.org [database] data.acme.org [all:vars] admin_password='<password>' pg_host='data.acme.org' pg_port='5432' pg_database='awx' pg_username='awx' pg_password='<password>' pg_sslmode='prefer' # set to 'verify-full' for client-side enforced SSL registry_url='registry.redhat.io' registry_username='<registry username>' registry_password='<registry password>' SSL-related variables If set, this will install a custom CA certificate to the system trust store. custom_ca_cert=/path/to/ca.crt Certificate and key to install in nginx for the web UI and API web_server_ssl_cert=/path/to/tower.cert web_server_ssl_key=/path/to/tower.key Server-side SSL settings for PostgreSQL (when we are installing it). postgres_use_ssl=False postgres_ssl_cert=/path/to/pgsql.crt postgres_ssl_key=/path/to/pgsql.key",
"[automationcontroller] controller.acme.org [database] [all:vars] admin_password='<password>' pg_host='data.acme.org' pg_port='5432' pg_database='awx' pg_username='awx' pg_password='<password>' pg_sslmode='prefer' # set to 'verify-full' for client-side enforced SSL registry_url='registry.redhat.io' registry_username='<registry username>' registry_password='<registry password>' SSL-related variables If set, this will install a custom CA certificate to the system trust store. custom_ca_cert=/path/to/ca.crt Certificate and key to install in nginx for the web UI and API web_server_ssl_cert=/path/to/tower.cert web_server_ssl_key=/path/to/tower.key Server-side SSL settings for PostgreSQL (when we are installing it). postgres_use_ssl=False postgres_ssl_cert=/path/to/pgsql.crt postgres_ssl_key=/path/to/pgsql.key",
"Automation Controller Nodes There are two valid node_types that can be assigned for this group. A node_type=control implies that the node will only be able to run project and inventory updates, but not regular jobs. A node_type=hybrid will have the ability to run everything. If you do not define the node_type, it defaults to hybrid. # control.example node_type=control hybrid.example node_type=hybrid hybrid2.example <- this will default to hybrid [automationcontroller] controller1.acme.org node_type=control controller2.acme.org node_type=control Execution Nodes There are two valid node_types that can be assigned for this group. A node_type=hop implies that the node will forward jobs to an execution node. A node_type=execution implies that the node will be able to run jobs. If you do not define the node_type, it defaults to execution. # hop.example node_type=hop execution.example node_type=execution execution2.example <- this will default to execution [execution_nodes] execution1.acme.org node_type=execution execution2.acme.org node_type=execution [automationhub] automationhub.acme.org [database] data.acme.org [all:vars] admin_password='<password>' pg_host='data.acme.org' pg_port='5432' pg_database='awx' pg_username='awx' pg_password='<password>' pg_sslmode='prefer' # set to 'verify-full' for client-side enforced SSL registry_url='registry.redhat.io' registry_username='<registry username>' registry_password='<registry password>' Receptor Configuration # receptor_listener_port=27199 Automation Hub Configuration # automationhub_admin_password='<password>' automationhub_pg_host='data.acme.org' automationhub_pg_port='5432' automationhub_pg_database='automationhub' automationhub_pg_username='automationhub' automationhub_pg_password='<password>' automationhub_pg_sslmode='prefer' The default install will deploy a TLS enabled Automation Hub. If for some reason this is not the behavior wanted one can disable TLS enabled deployment. # automationhub_disable_https = False The default install will generate self-signed certificates for the Automation Hub service. If you are providing valid certificate via automationhub_ssl_cert and automationhub_ssl_key, one should toggle that value to True. # automationhub_ssl_validate_certs = False SSL-related variables If set, this will install a custom CA certificate to the system trust store. custom_ca_cert=/path/to/ca.crt Certificate and key to install in nginx for the web UI and API web_server_ssl_cert=/path/to/tower.cert web_server_ssl_key=/path/to/tower.key Certificate and key to install in Automation Hub node automationhub_ssl_cert=/path/to/automationhub.cert automationhub_ssl_key=/path/to/automationhub.key Server-side SSL settings for PostgreSQL (when we are installing it). postgres_use_ssl=False postgres_ssl_cert=/path/to/pgsql.crt postgres_ssl_key=/path/to/pgsql.key",
"Automation Controller Nodes There are two valid node_types that can be assigned for this group. A node_type=control implies that the node will only be able to run project and inventory updates, but not regular jobs. A node_type=hybrid will have the ability to run everything. If you do not define the node_type, it defaults to hybrid. # control.example node_type=control hybrid.example node_type=hybrid hybrid2.example <- this will default to hybrid [automationcontroller] hybrid1.acme.org node_type=hybrid controller1.acme.org node_type=control Execution Nodes There are two valid node_types that can be assigned for this group. A node_type=hop implies that the node will forward jobs to an execution node. A node_type=execution implies that the node will be able to run jobs. If you do not define the node_type, it defaults to execution. # hop.example node_type=hop execution.example node_type=execution execution2.example <- this will default to execution [execution_nodes] hop1.acme.org node_type=hop execution1.acme.org node_type=execution [automationhub] automationhub.acme.org [database] [all:vars] admin_password='<password>' pg_host='data.acme.org' pg_port='5432' pg_database='awx' pg_username='awx' pg_password='<password>' pg_sslmode='prefer' # set to 'verify-full' for client-side enforced SSL registry_url='registry.redhat.io' registry_username='<registry username>' registry_password='<registry password>' Receptor Configuration # receptor_listener_port=27199 Automation Hub Configuration # automationhub_admin_password='<password>' automationhub_pg_host='data.acme.org' automationhub_pg_port='5432' automationhub_pg_database='automationhub' automationhub_pg_username='automationhub' automationhub_pg_password='<password>' automationhub_pg_sslmode='prefer' The default install will deploy a TLS enabled Automation Hub. If for some reason this is not the behavior wanted one can disable TLS enabled deployment. # automationhub_disable_https = False The default install will generate self-signed certificates for the Automation Hub service. If you are providing valid certificate via automationhub_ssl_cert and automationhub_ssl_key, one should toggle that value to True. # automationhub_ssl_validate_certs = False SSL-related variables If set, this will install a custom CA certificate to the system trust store. custom_ca_cert=/path/to/ca.crt Certificate and key to install in nginx for the web UI and API web_server_ssl_cert=/path/to/tower.cert web_server_ssl_key=/path/to/tower.key Certificate and key to install in Automation Hub node automationhub_ssl_cert=/path/to/automationhub.cert automationhub_ssl_key=/path/to/automationhub.key Server-side SSL settings for PostgreSQL (when we are installing it). postgres_use_ssl=False postgres_ssl_cert=/path/to/pgsql.crt postgres_ssl_key=/path/to/pgsql.key",
"[automationcontroller] [automationhub] automationhub.acme.org ansible_connection=local [all:vars] registry_url='registry.redhat.io' registry_username='<registry username>' registry_password='<registry password>' automationhub_admin_password= <PASSWORD> automationhub_pg_host='' automationhub_pg_port='5432' automationhub_pg_database='automationhub' automationhub_pg_username='automationhub' automationhub_pg_password=<PASSWORD> automationhub_pg_sslmode='prefer' The default install will deploy a TLS enabled Automation Hub. If for some reason this is not the behavior wanted one can disable TLS enabled deployment. # automationhub_disable_https = False The default install will generate self-signed certificates for the Automation Hub service. If you are providing valid certificate via automationhub_ssl_cert and automationhub_ssl_key, one should toggle that value to True. # automationhub_ssl_validate_certs = False SSL-related variables If set, this will install a custom CA certificate to the system trust store. custom_ca_cert=/path/to/ca.crt Certificate and key to install in Automation Hub node automationhub_ssl_cert=/path/to/automationhub.cert automationhub_ssl_key=/path/to/automationhub.key",
"[automationcontroller] [automationhub] automationhub.acme.org [database] data.acme.org [all:vars] registry_url='registry.redhat.io' registry_username='<registry username>' registry_password='<registry password>' automationhub_admin_password= <PASSWORD> automationhub_pg_host='data.acme.org' automationhub_pg_port='5432' automationhub_pg_database='automationhub' automationhub_pg_username='automationhub' automationhub_pg_password=<PASSWORD> automationhub_pg_sslmode='prefer' The default install will deploy a TLS enabled Automation Hub. If for some reason this is not the behavior wanted one can disable TLS enabled deployment. # automationhub_disable_https = False The default install will generate self-signed certificates for the Automation Hub service. If you are providing valid certificate via automationhub_ssl_cert and automationhub_ssl_key, one should toggle that value to True. # automationhub_ssl_validate_certs = False SSL-related variables If set, this will install a custom CA certificate to the system trust store. custom_ca_cert=/path/to/ca.crt Certificate and key to install in Automation Hub node automationhub_ssl_cert=/path/to/automationhub.cert automationhub_ssl_key=/path/to/automationhub.key",
"[automationcontroller] [automationhub] automationhub.acme.org [database] [all:vars] registry_url='registry.redhat.io' registry_username='<registry username>' registry_password='<registry password>' automationhub_admin_password= <PASSWORD> automationhub_pg_host='data.acme.org' automationhub_pg_port='5432' automationhub_pg_database='automationhub' automationhub_pg_username='automationhub' automationhub_pg_password=<PASSWORD> automationhub_pg_sslmode='prefer' The default install will deploy a TLS enabled Automation Hub. If for some reason this is not the behavior wanted one can disable TLS enabled deployment. # automationhub_disable_https = False The default install will generate self-signed certificates for the Automation Hub service. If you are providing valid certificate via automationhub_ssl_cert and automationhub_ssl_key, one should toggle that value to True. # automationhub_ssl_validate_certs = False SSL-related variables If set, this will install a custom CA certificate to the system trust store. custom_ca_cert=/path/to/ca.crt Certificate and key to install in Automation Hub node automationhub_ssl_cert=/path/to/automationhub.cert automationhub_ssl_key=/path/to/automationhub.key",
"automationhub_authentication_backend = \"ldap\" automationhub_ldap_server_uri = \"ldap://ldap:389\" (for LDAPs use automationhub_ldap_server_uri = \"ldaps://ldap-server-fqdn\") automationhub_ldap_bind_dn = \"cn=admin,dc=ansible,dc=com\" automationhub_ldap_bind_password = \"GoodNewsEveryone\" automationhub_ldap_user_search_base_dn = \"ou=people,dc=ansible,dc=com\" automationhub_ldap_group_search_base_dn = \"ou=people,dc=ansible,dc=com\"",
"auth_ldap_user_search_scope= 'SUBTREE' auth_ldap_user_search_filter= '(uid=%(user)s)' auth_ldap_group_search_scope= 'SUBTREE' auth_ldap_group_search_filter= '(objectClass=Group)' auth_ldap_group_type_class= 'django_auth_ldap.config:GroupOfNamesType'",
"#ldapextras.yml --- ldap_extra_settings: <LDAP_parameter>: <Values>",
"#ldapextras.yml --- ldap_extra_settings: AUTH_LDAP_USER_FLAGS_BY_GROUP: {\"is_superuser\": \"cn=pah-admins,ou=groups,dc=example,dc=com\",}",
"#ldapextras.yml --- ldap_extra_settings: AUTH_LDAP_USER_FLAGS_BY_GROUP: {\"is_superuser\": \"cn=pah-admins,ou=groups,dc=example,dc=com\",}",
"#ldapextras.yml --- ldap_extra_settings: AUTH_LDAP_MIRROR_GROUPS: True",
"#ldapextras.yml --- ldap_extra_settings: AUTH_LDAP_USER_ATTR_MAP: {\"first_name\": \"givenName\", \"last_name\": \"sn\", \"email\": \"mail\",}",
"#ldapextras.yml --- ldap_extra_settings: AUTH_LDAP_REQUIRE_GROUP: 'cn=pah-nosoupforyou,ou=groups,dc=example,dc=com'",
"#ldapextras.yml --- ldap_extra_settings: AUTH_LDAP_REQUIRE_GROUP: 'cn=pah-nosoupforyou,ou=groups,dc=example,dc=com'",
"#ldapextras.yml --- ldap_extra_settings: GALAXY_LDAP_LOGGING: True",
"#ldapextras.yml --- ldap_extra_settings: AUTH_LDAP_CACHE_TIMEOUT: 3600",
"sudo ./setup.sh",
"touch ./group_vars/automationcontroller",
"Automation Hub Registry registry_username: 'your-automation-hub-user' registry_password: 'your-automation-hub-password' registry_url: 'automationhub.example.org' registry_verify_ssl: False ## Execution Environments control_plane_execution_environment: 'automationhub.example.org/ee-supported-rhel8:latest' global_job_execution_environments: - name: \"Default execution environment\" image: \"automationhub.example.org/ee-supported-rhel8:latest\" - name: \"Ansible Engine 2.9 execution environment\" image: \"automationhub.example.org/ee-29-rhel8:latest\" - name: \"Minimal execution environment\" image: \"automationhub.example.org/ee-minimal-rhel8:latest\"",
"./setup.sh"
] | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/red_hat_ansible_automation_platform_installation_guide/assembly-platform-install-scenario |
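A consolidated example for the LDAP configuration described in this chapter: several ldap_extra_settings values can live in one ldapextras.yml, the installer is then run with that file, and the grep afterwards is just one convenient way to perform the verification step. The group DNs and attribute mappings are placeholders.

#ldapextras.yml
---
ldap_extra_settings:
  AUTH_LDAP_USER_ATTR_MAP: {"first_name": "givenName", "last_name": "sn", "email": "mail"}
  AUTH_LDAP_USER_FLAGS_BY_GROUP: {"is_superuser": "cn=pah-admins,ou=groups,dc=example,dc=com"}
  AUTH_LDAP_MIRROR_GROUPS: True

Run the installer with the extra settings file, then confirm the values on the private automation hub node:

./setup.sh -e @ldapextras.yml
grep -i ldap /etc/pulp/settings.py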
Chapter 13. Installing a three-node cluster on GCP | Chapter 13. Installing a three-node cluster on GCP In OpenShift Container Platform version 4.17, you can install a three-node cluster on Google Cloud Platform (GCP). A three-node cluster consists of three control plane machines, which also act as compute machines. This type of cluster provides a smaller, more resource-efficient cluster for cluster administrators and developers to use for testing, development, and production. You can install a three-node cluster using either installer-provisioned or user-provisioned infrastructure. 13.1. Configuring a three-node cluster You configure a three-node cluster by setting the number of worker nodes to 0 in the install-config.yaml file before deploying the cluster. Setting the number of worker nodes to 0 ensures that the control plane machines are schedulable. This allows application workloads to be scheduled to run from the control plane nodes. Note Because application workloads run from control plane nodes, additional subscriptions are required, as the control plane nodes are considered to be compute nodes. Prerequisites You have an existing install-config.yaml file. Procedure Set the number of compute replicas to 0 in your install-config.yaml file, as shown in the following compute stanza: Example install-config.yaml file for a three-node cluster apiVersion: v1 baseDomain: example.com compute: - name: worker platform: {} replicas: 0 # ... If you are deploying a cluster with user-provisioned infrastructure: After you create the Kubernetes manifest files, make sure that the spec.mastersSchedulable parameter is set to true in the cluster-scheduler-02-config.yml file. You can locate this file in <installation_directory>/manifests . For more information, see "Creating the Kubernetes manifest and Ignition config files" in "Installing a cluster on user-provisioned infrastructure in GCP by using Deployment Manager templates". Do not create additional worker nodes. Example cluster-scheduler-02-config.yml file for a three-node cluster apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: null name: cluster spec: mastersSchedulable: true policy: name: "" status: {} 13.2. Next steps Installing a cluster on GCP with customizations Installing a cluster on user-provisioned infrastructure in GCP by using Deployment Manager templates | [
"apiVersion: v1 baseDomain: example.com compute: - name: worker platform: {} replicas: 0",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: null name: cluster spec: mastersSchedulable: true policy: name: \"\" status: {}"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/installing_on_gcp/installing-gcp-three-node |
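A brief sketch of the user-provisioned manifest check described in this chapter (the installation directory is a placeholder; the grep is only a convenience for inspecting the generated file):

# Generate the Kubernetes manifests for the cluster
openshift-install create manifests --dir <installation_directory>
# Confirm that control plane nodes are schedulable in the generated scheduler config
grep mastersSchedulable <installation_directory>/manifests/cluster-scheduler-02-config.yml
# Expected output: mastersSchedulable: true
# Then continue by creating the Ignition config files
openshift-install create ignition-configs --dir <installation_directory>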
11.7. Importing Existing Storage Domains | 11.7. Importing Existing Storage Domains 11.7.1. Overview of Importing Existing Storage Domains Aside from adding new storage domains, which contain no data, you can import existing storage domains and access the data they contain. By importing storage domains, you can recover data in the event of a failure in the Manager database, and migrate data from one data center or environment to another. The following is an overview of importing each storage domain type: Data Importing an existing data storage domain allows you to access all of the virtual machines and templates that the data storage domain contains. After you import the storage domain, you must manually import virtual machines, floating disk images, and templates into the destination data center. The process for importing the virtual machines and templates that a data storage domain contains is similar to that for an export storage domain. However, because data storage domains contain all the virtual machines and templates in a given data center, importing data storage domains is recommended for data recovery or large-scale migration of virtual machines between data centers or environments. Important You can import existing data storage domains that were attached to data centers with the correct supported compatibility level. See Supportability and constraints regarding importing Storage Domains and Virtual Machines from older RHV versions for more information. ISO Importing an existing ISO storage domain allows you to access all of the ISO files and virtual diskettes that the ISO storage domain contains. No additional action is required after importing the storage domain to access these resources; you can attach them to virtual machines as required. Export Importing an existing export storage domain allows you to access all of the virtual machine images and templates that the export storage domain contains. Because export domains are designed for exporting and importing virtual machine images and templates, importing export storage domains is recommended method of migrating small numbers of virtual machines and templates inside an environment or between environments. For information on exporting and importing virtual machines and templates to and from export storage domains, see Exporting and Importing Virtual Machines and Templates in the Virtual Machine Management Guide . Note The export storage domain is deprecated. Storage data domains can be unattached from a data center and imported to another data center in the same environment, or in a different environment. Virtual machines, floating virtual disks, and templates can then be uploaded from the imported storage domain to the attached data center. Warning Upon attaching a Storage Domain to the destination Data-Center, it may be upgraded to a newer Storage Domain format and may not re-attach to the source Data-Center. This breaks the use of a Data-Domain as a replacement for Export Domains. 11.7.2. Importing storage domains Import a storage domain that was previously attached to a data center in the same environment or in a different environment. This procedure assumes the storage domain is no longer attached to any data center in any environment, to avoid data corruption. To import and attach an existing data storage domain to a data center, the target data center must be initialized. Procedure Click Storage Domains . Click Import Domain . Select the Data Center you want to import the storage domain to. 
Enter a Name for the storage domain. Select the Domain Function and Storage Type from the drop-down lists. Select a host from the Host drop-down list. Important All communication to the storage domain is through the selected host and not directly from the Red Hat Virtualization Manager. At least one active host must exist in the system and be attached to the chosen data center. All hosts must have access to the storage device before the storage domain can be configured. Enter the details of the storage domain. Note The fields for specifying the details of the storage domain change depending on the values you select in the Domain Function and Storage Type lists. These fields are the same as those available for adding a new storage domain. Select the Activate Domain in Data Center check box to activate the storage domain after attaching it to the selected data center. Click OK . You can now import virtual machines and templates from the storage domain to the data center. Warning Upon attaching a Storage Domain to the destination Data-Center, it may be upgraded to a newer Storage Domain format and may not re-attach to the source Data-Center. This breaks the use of a Data-Domain as a replacement for Export Domains. Related information Section 11.7.5, "Importing Virtual Machines from Imported Data Storage Domains" Section 11.7.6, "Importing Templates from Imported Data Storage Domains" 11.7.3. Migrating Storage Domains between Data Centers in the Same Environment Migrate a storage domain from one data center to another in the same Red Hat Virtualization environment to allow the destination data center to access the data contained in the storage domain. This procedure involves detaching the storage domain from one data center, and attaching it to a different data center. Procedure Shut down all virtual machines running on the required storage domain. Click Storage Domains . Click the storage domain's name to open the details view. Click the Data Center tab. Click Maintenance , then click OK . Click Detach , then click OK . Click Attach . Select the destination data center and click OK . The storage domain is attached to the destination data center and is automatically activated. You can now import virtual machines and templates from the storage domain to the destination data center. Warning Upon attaching a Storage Domain to the destination Data-Center, it may be upgraded to a newer Storage Domain format and may not re-attach to the source Data-Center. This breaks the use of a Data-Domain as a replacement for Export Domains. 11.7.4. Migrating Storage Domains between Data Centers in Different Environments Migrate a storage domain from one Red Hat Virtualization environment to another to allow the destination environment to access the data contained in the storage domain. This procedure involves removing the storage domain from one Red Hat Virtualization environment, and importing it into a different environment. To import and attach an existing data storage domain to a Red Hat Virtualization data center, the storage domain's source data center must have the correct supported compatibility level. See Supportability and constraints regarding importing Storage Domains and Virtual Machines from older RHV versions for more information. Procedure Log in to the Administration Portal of the source environment. Shut down all virtual machines running on the required storage domain. Click Storage Domains . Click the storage domain's name to open the details view. Click the Data Center tab. 
Click Maintenance . Important Do not check the Ignore OVF update failure checkbox. The maintenance operation on the storage domain should update the OVF. Click OK . Click Detach , then click OK . Click Remove . In the Remove Storage(s) window, ensure the Format Domain, i.e. Storage Content will be lost! check box is not selected. This step preserves the data in the storage domain for later use. Click OK to remove the storage domain from the source environment. Log in to the Administration Portal of the destination environment. Click Storage Domains . Click Import Domain . Select the destination data center from the Data Center drop-down list. Enter a name for the storage domain. Select the Domain Function and Storage Type from the appropriate drop-down lists. Select a host from the Host drop-down list. Enter the details of the storage domain. Note The fields for specifying the details of the storage domain change depending on the value you select in the Storage Type drop-down list. These fields are the same as those available for adding a new storage domain. Select the Activate Domain in Data Center check box to automatically activate the storage domain when it is attached. Click OK . The storage domain is attached to the destination data center in the new Red Hat Virtualization environment and is automatically activated. You can now import virtual machines and templates from the imported storage domain to the destination data center. Warning Upon attaching a Storage Domain to the destination Data-Center, it may be upgraded to a newer Storage Domain format and may not re-attach to the source Data-Center. This breaks the use of a Data-Domain as a replacement for Export Domains. 11.7.5. Importing Virtual Machines from Imported Data Storage Domains Import a virtual machine into one or more clusters from a data storage domain you have imported into your Red Hat Virtualization environment. This procedure assumes that the imported data storage domain has been attached to a data center and has been activated. Procedure Click Storage Domains . Click the imported storage domain's name to open the details view. Click the VM Import tab. Select one or more virtual machines to import. Click Import . For each virtual machine in the Import Virtual Machine(s) window, ensure the correct target cluster is selected in the Cluster list. Map external virtual machine vNIC profiles to profiles that are present on the target cluster(s): Click vNic Profiles Mapping . Select the vNIC profile to use from the Target vNic Profile drop-down list. If multiple target clusters are selected in the Import Virtual Machine(s) window, select each target cluster in the Target Cluster drop-down list and ensure the mappings are correct. Click OK . If a MAC address conflict is detected, an exclamation mark appears to the name of the virtual machine. Mouse over the icon to view a tooltip displaying the type of error that occurred. Select the Reassign Bad MACs check box to reassign new MAC addresses to all problematic virtual machines. Alternatively, you can select the Reassign check box per virtual machine. Note If there are no available addresses to assign, the import operation will fail. However, in the case of MAC addresses that are outside the cluster's MAC address pool range, it is possible to import the virtual machine without reassigning a new MAC address. Click OK . The imported virtual machines no longer appear in the list under the VM Import tab. 11.7.6. 
Importing Templates from Imported Data Storage Domains Import a template from a data storage domain you have imported into your Red Hat Virtualization environment. This procedure assumes that the imported data storage domain has been attached to a data center and has been activated. Procedure Click Storage Domains . Click the imported storage domain's name to open the details view. Click the Template Import tab. Select one or more templates to import. Click Import . For each template in the Import Templates(s) window, ensure the correct target cluster is selected in the Cluster list. Map external virtual machine vNIC profiles to profiles that are present on the target cluster(s): Click vNic Profiles Mapping . Select the vNIC profile to use from the Target vNic Profile drop-down list. If multiple target clusters are selected in the Import Templates window, select each target cluster in the Target Cluster drop-down list and ensure the mappings are correct. Click OK . Click OK . The imported templates no longer appear in the list under the Template Import tab. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/sect-Importing_Existing_Storage_Domains |
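The steps above are performed through the Administration Portal. If you need to script a storage domain migration, for example when moving many domains between data centers, the equivalent maintenance, detach, and attach operations are also exposed through the RHV REST API. The following is a minimal sketch only, assuming the standard /ovirt-engine/api paths for attached storage domains; the Manager URL, credentials, and IDs are placeholders that you must replace with values from your own environment, and the -k option should be replaced by proper CA verification in production.

    # Run from a host that can reach the Manager. Placeholders: replace before use.
    ENGINE="https://rhvm.example.com/ovirt-engine/api"
    AUTH="admin@internal:password"
    DC_SRC="<source-data-center-id>"
    DC_DST="<destination-data-center-id>"
    SD="<storage-domain-id>"

    # Move the attached storage domain to maintenance (deactivate), then detach it.
    curl -k -u "$AUTH" -X POST -H "Content-Type: application/xml" -d "<action/>" \
      "$ENGINE/datacenters/$DC_SRC/storagedomains/$SD/deactivate"
    curl -k -u "$AUTH" -X DELETE "$ENGINE/datacenters/$DC_SRC/storagedomains/$SD"

    # Attach the storage domain to the destination data center; it is activated
    # automatically once the attach completes.
    curl -k -u "$AUTH" -X POST -H "Content-Type: application/xml" \
      -d "<storage_domain id=\"$SD\"/>" \
      "$ENGINE/datacenters/$DC_DST/storagedomains"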
Chapter 3. Downloading Large Language models | Chapter 3. Downloading Large Language models Red Hat Enterprise Linux AI allows you to customize or chat with various Large Language Models (LLMs) provided and built by Red Hat and IBM. You can download these models from the Red Hat RHEL AI registry. You can upload any custom model to an S3 bucket. Table 3.1. Red Hat Enterprise Linux AI version 1.4 LLMs Large Language Models (LLMs) Type Size Purpose Model family NVIDIA Accelerator Support granite-7b-starter Base model 12.6 GB Base model for customizing and fine-tuning Granite 2 Not available granite-7b-redhat-lab LAB fine-tuned Granite model 12.6 GB Granite model for inference serving Granite 2 Not available granite-8b-starter-v1 Base model 16.0 GB Base model for customizing and fine-tuning Granite 3 General availability granite-8b-lab-v1 LAB fine-tuned granite model 16.0 GB Granite model for inference serving Granite 3 General availability granite-8b-lab-v2-preview LAB fine-tuned granite model 16.0 GB Preview of the version 2 8b Granite model for inference serving Granite 3 Technology preview granite-3.1-8b-starter-v1 LAB fine-tuned granite model 16.0 GB Version 1 of the Granite 3.1 base model for customizing and fine-tuning Granite 3.1 General availability granite-3.1-8b-lab-v1 LAB fine-tuned granite model 16.0 GB Version 1 of the Granite 3.1 model for inference serving Granite 3.1 General availability granite-8b-code-instruct LAB fine-tuned granite code model 15.0 GB LAB fine-tuned granite code model for inference serving Granite Code models Technology preview granite-8b-code-base Granite fine-tuned code model 15.0 GB Granite code model for inference serving Granite Code models Technology preview mixtral-8x7b-instruct-v0-1 Teacher/critic model 87.0 GB Teacher and critic model for running Synthetic data generation (SDG) Mixtral General availability prometheus-8x7b-v2-0 Evaluation judge model 87.0 GB Judge model for multi-phase training and evaluation Prometheus 2 General availability Important Using the `granite-8b-code-instruct` or `granite-8b-code-base` Large Language models (LLMS) is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Models required for customizing the Granite LLM The granite-7b-starter or granite-8b-starter-v1 base LLM depending on your hardware vendor. The mixtral-8x7b-instruct-v0-1 teacher model for SDG. The prometheus-8x7b-v2-0 judge model for training and evaluation. Additional tools required for customizing an LLM The Low-rank adaptation (LoRA) adaptors enhance the efficiency of the Synthetic Data Generation (SDG) process. The skills-adapter-v3 LoRA layered skills adapter for SDG. The knowledge-adapter-v3 LoRA layered knowledge adapter for SDG. Example command for downloading the adaptors Important The LoRA layered adapters do not show up in the output of the ilab model list command. You can see the skills-adapter-v3 and knowledge-adapter-v3 files in the ls ~/.cache/instructlab/models folder. 3.1. 
Downloading the models from a Red Hat repository You can download the additional optional models created by Red Hat and IBM. Prerequisites You installed RHEL AI with the bootable container image. You initialized InstructLab. You created a Red Hat registry account and logged in on your machine. You have root user access on your machine. Procedure To download the additional LLM models, run the following command: USD ilab model download --repository docker://<repository_and_model> --release <release> where: <repository_and_model> Specifies the repository location of the model as well as the model. You can access the models from the registry.redhat.io/rhelai1/ repository. <release> Specifies the version of the model. Set to 1.4 for the models that are supported on RHEL AI version 1.4. Set to latest for the latest version of the model. Example command USD ilab model download --repository docker://registry.redhat.io/rhelai1/granite-3.1-8b-starter-v1 --release latest Verification You can view all the downloaded models, including the new models after training, on your system with the following command: USD ilab model list Example output You can also list the downloaded models in the ls ~/.cache/instructlab/models folder by running the following command: USD ls ~/.cache/instructlab/models Example output granite-3.1-8b-starter-v1 granite-3.1-8b-lab-v1 | [
"ilab model download --repository docker://registry.redhat.io/rhelai1/knowledge-adapter-v3 --release latest",
"ilab model download --repository docker://<repository_and_model> --release <release>",
"ilab model download --repository docker://registry.redhat.io/rhelai1/granite-3.1-8b-starter-v1 --release latest",
"ilab model list",
"+-----------------------------------+---------------------+---------+ | Model Name | Last Modified | Size | +-----------------------------------+---------------------+---------+ | models/prometheus-8x7b-v2-0 | 2024-08-09 13:28:50 | 87.0 GB| | models/mixtral-8x7b-instruct-v0-1 | 2024-08-09 13:28:24 | 87.0 GB| | models/granite-3.1-8b-starter-v1 | 2024-08-09 14:28:40 | 16.6 GB| | models/granite-3.1-8b-lab-v1 | 2024-08-09 14:40:35 | 16.6 GB| +-----------------------------------+---------------------+---------+",
"ls ~/.cache/instructlab/models",
"granite-3.1-8b-starter-v1 granite-3.1-8b-lab-v1"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.4/html/building_and_maintaining_your_rhel_ai_environment/download_models |
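If you are preparing a machine for the full Granite customization workflow, you typically need the starter model, the teacher and judge models, and the LoRA adapters described earlier in this chapter. The following is a minimal sketch that loops over those repositories using the same ilab model download syntax shown above. It assumes you are already logged in to registry.redhat.io and that the skills adapter is published under the same rhelai1/ path as the knowledge adapter in the earlier example; adjust the model list and the release value for your hardware vendor and RHEL AI version.

    #!/usr/bin/env bash
    set -euo pipefail

    RELEASE="latest"                 # or "1.4" to pin to the RHEL AI 1.4 models

    MODELS=(
      granite-3.1-8b-starter-v1      # base model for customizing and fine-tuning
      mixtral-8x7b-instruct-v0-1     # teacher/critic model for SDG
      prometheus-8x7b-v2-0           # judge model for training and evaluation
      skills-adapter-v3              # LoRA layered skills adapter
      knowledge-adapter-v3           # LoRA layered knowledge adapter
    )

    for model in "${MODELS[@]}"; do
      ilab model download \
        --repository "docker://registry.redhat.io/rhelai1/${model}" \
        --release "${RELEASE}"
    done

    # The LoRA adapters do not appear in 'ilab model list'; check the cache instead.
    ls ~/.cache/instructlab/models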
7.7. Using Channel Bonding | 7.7. Using Channel Bonding To enhance performance, adjust available module options to ascertain what combination works best. Pay particular attention to the miimon or arp_interval and the arp_ip_target parameters. See Section 7.7.1, "Bonding Module Directives" for a list of available options and how to quickly determine the best ones for your bonded interface. 7.7.1. Bonding Module Directives It is a good idea to test which channel bonding module parameters work best for your bonded interfaces before adding them to the BONDING_OPTS=" bonding parameters " directive in your bonding interface configuration file ( ifcfg-bond0 for example). Parameters to bonded interfaces can be configured without unloading (and reloading) the bonding module by manipulating files in the sysfs file system. sysfs is a virtual file system that represents kernel objects as directories, files and symbolic links. sysfs can be used to query for information about kernel objects, and can also manipulate those objects through the use of normal file system commands. The sysfs virtual file system is mounted under the /sys/ directory. All bonding interfaces can be configured dynamically by interacting with and manipulating files under the /sys/class/net/ directory. In order to determine the best parameters for your bonding interface, create a channel bonding interface file such as ifcfg-bond0 by following the instructions in Section 7.4.2, "Create a Channel Bonding Interface" . Insert the SLAVE=yes and MASTER=bond0 directives in the configuration files for each interface bonded to bond0 . Once this is completed, you can proceed to testing the parameters. First, open the bond you created by running ifup bond N as root : If you have correctly created the ifcfg-bond0 bonding interface file, you will be able to see bond0 listed in the output of running ip link show as root : To view all existing bonds, even if they are not up, run: You can configure each bond individually by manipulating the files located in the /sys/class/net/bond N /bonding/ directory. First, the bond you are configuring must be taken down: As an example, to enable MII monitoring on bond0 with a 1 second interval, run as root : To configure bond0 for balance-alb mode, run either: ...or, using the name of the mode: After configuring options for the bond in question, you can bring it up and test it by running ifup bond N . If you decide to change the options, take the interface down, modify its parameters using sysfs , bring it back up, and re-test. Once you have determined the best set of parameters for your bond, add those parameters as a space-separated list to the BONDING_OPTS= directive of the /etc/sysconfig/network-scripts/ifcfg-bond N file for the bonding interface you are configuring. Whenever that bond is brought up (for example, by the system during the boot sequence if the ONBOOT=yes directive is set), the bonding options specified in the BONDING_OPTS will take effect for that bond. The following list provides the names of many of the more common channel bonding parameters, along with a description of what they do. For more information, see the brief descriptions for each parm in modinfo bonding output, or for more detailed information, see https://www.kernel.org/doc/Documentation/networking/bonding.txt . Bonding Interface Parameters ad_select= value Specifies the 802.3ad aggregation selection logic to use. Possible values are: stable or 0 - Default setting. The active aggregator is chosen by largest aggregate bandwidth. 
Reselection of the active aggregator occurs only when all ports of the active aggregator are down or if the active aggregator has no ports. bandwidth or 1 - The active aggregator is chosen by largest aggregate bandwidth. Reselection occurs if: A port is added to or removed from the bond; Any port's link state changes; Any port's 802.3ad association state changes; The bond's administrative state changes to up. count or 2 - The active aggregator is chosen by the largest number of ports. Reselection occurs as described for the bandwidth setting above. The bandwidth and count selection policies permit failover of 802.3ad aggregations when partial failure of the active aggregator occurs. This keeps the aggregator with the highest availability, either in bandwidth or in number of ports, active at all times. arp_interval= time_in_milliseconds Specifies, in milliseconds, how often ARP monitoring occurs. Important It is essential that both arp_interval and arp_ip_target parameters are specified, or, alternatively, the miimon parameter is specified. Failure to do so can cause degradation of network performance in the event that a link fails. If using this setting while in mode=0 or mode=2 (the two load-balancing modes), the network switch must be configured to distribute packets evenly across the NICs. For more information on how to accomplish this, see https://www.kernel.org/doc/Documentation/networking/bonding.txt . The value is set to 0 by default, which disables it. arp_ip_target= ip_address [ , ip_address_2 ,... ip_address_16 ] Specifies the target IP address of ARP requests when the arp_interval parameter is enabled. Up to 16 IP addresses can be specified in a comma separated list. arp_validate= value Validate source/distribution of ARP probes; default is none . Other valid values are active , backup , and all . downdelay= time_in_milliseconds Specifies (in milliseconds) how long to wait after link failure before disabling the link. The value must be a multiple of the value specified in the miimon parameter. The value is set to 0 by default, which disables it. fail_over_mac= value Specifies whether active-backup mode should set all ports to the same MAC address at the point of assignment (the traditional behavior), or, when enabled, perform special handling of the bond's MAC address in accordance with the selected policy. Possible values are: none or 0 - Default setting. This setting disables fail_over_mac , and causes bonding to set all ports of an active-backup bond to the same MAC address at the point of assignment. active or 1 - The " active " fail_over_mac policy indicates that the MAC address of the bond should always be the MAC address of the currently active port. The MAC address of the ports is not changed; instead, the MAC address of the bond changes during a failover. This policy is useful for devices that cannot ever alter their MAC address, or for devices that refuse incoming broadcasts with their own source MAC (which interferes with the ARP monitor). The disadvantage of this policy is that every device on the network must be updated by gratuitous ARP, as opposed to the normal method of switches snooping incoming traffic to update their ARP tables. If the gratuitous ARP is lost, communication may be disrupted. When this policy is used in conjunction with the MII monitor, devices which assert link up prior to being able to actually transmit and receive are particularly susceptible to loss of the gratuitous ARP, and an appropriate updelay setting may be required. 
follow or 2 - The " follow " fail_over_mac policy causes the MAC address of the bond to be selected normally (normally the MAC address of the first port added to the bond). However, the second and subsequent ports are not set to this MAC address while they are in a backup role; a port is programmed with the bond's MAC address at failover time (and the formerly active port receives the newly active port's MAC address). This policy is useful for multiport devices that either become confused or incur a performance penalty when multiple ports are programmed with the same MAC address. lacp_rate= value Specifies the rate at which link partners should transmit LACPDU packets in 802.3ad mode. Possible values are: slow or 0 - Default setting. This specifies that partners should transmit LACPDUs every 30 seconds. fast or 1 - Specifies that partners should transmit LACPDUs every 1 second. miimon= time_in_milliseconds Specifies (in milliseconds) how often MII link monitoring occurs. This is useful if high availability is required because MII is used to verify that the NIC is active. To verify that the driver for a particular NIC supports the MII tool, type the following command as root: In this command, replace interface_name with the name of the device interface, such as enp1s0 , not the bond interface. If MII is supported, the command returns: If using a bonded interface for high availability, the module for each NIC must support MII. Setting the value to 0 (the default), turns this feature off. When configuring this setting, a good starting point for this parameter is 100 . Important It is essential that both arp_interval and arp_ip_target parameters are specified, or, alternatively, the miimon parameter is specified. Failure to do so can cause degradation of network performance in the event that a link fails. mode= value Allows you to specify the bonding policy. The value can be one of: balance-rr or 0 - Sets a round-robin policy for fault tolerance and load balancing. Transmissions are received and sent out sequentially on each bonded port interface beginning with the first one available. active-backup or 1 - Sets an active-backup policy for fault tolerance. Transmissions are received and sent out through the first available bonded port interface. Another bonded port interface is only used if the active bonded port interface fails. balance-xor or 2 - Transmissions are based on the selected hash policy. The default is to derive a hash by XOR of the source and destination MAC addresses multiplied by the modulo of the number of port interfaces. In this mode traffic destined for specific peers will always be sent over the same interface. As the destination is determined by the MAC addresses this method works best for traffic to peers on the same link or local network. If traffic has to pass through a single router then this mode of traffic balancing will be suboptimal. broadcast or 3 - Sets a broadcast policy for fault tolerance. All transmissions are sent on all port interfaces. 802.3ad or 4 - Sets an IEEE 802.3ad dynamic link aggregation policy. Creates aggregation groups that share the same speed and duplex settings. Transmits and receives on all ports in the active aggregator. Requires a switch that is 802.3ad compliant. balance-tlb or 5 - Sets a Transmit Load Balancing (TLB) policy for fault tolerance and load balancing. The outgoing traffic is distributed according to the current load on each port interface. Incoming traffic is received by the current port. 
If the receiving port fails, another port takes over the MAC address of the failed port. This mode is only suitable for local addresses known to the kernel bonding module and therefore cannot be used behind a bridge with virtual machines. balance-alb or 6 - Sets an Adaptive Load Balancing (ALB) policy for fault tolerance and load balancing. Includes transmit and receive load balancing for IPv4 traffic. Receive load balancing is achieved through ARP negotiation. This mode is only suitable for local addresses known to the kernel bonding module and therefore cannot be used behind a bridge with virtual machines. For details about required settings on the upstream switch, see Section 7.6, "Overview of Bonding Modes and the Required Settings on the Switch" . primary= interface_name Specifies the interface name, such as enp1s0 , of the primary device. The primary device is the first of the bonding interfaces to be used and is not abandoned unless it fails. This setting is particularly useful when one NIC in the bonding interface is faster and, therefore, able to handle a bigger load. This setting is only valid when the bonding interface is in active-backup mode. See https://www.kernel.org/doc/Documentation/networking/bonding.txt for more information. primary_reselect= value Specifies the reselection policy for the primary port. This affects how the primary port is chosen to become the active port when failure of the active port or recovery of the primary port occurs. This parameter is designed to prevent flip-flopping between the primary port and other ports. Possible values are: always or 0 (default) - The primary port becomes the active port whenever it comes back up. better or 1 - The primary port becomes the active port when it comes back up, if the speed and duplex of the primary port is better than the speed and duplex of the current active port. failure or 2 - The primary port becomes the active port only if the current active port fails and the primary port is up. The primary_reselect setting is ignored in two cases: If no ports are active, the first port to recover is made the active port. When initially assigned to a bond, the primary port is always made the active port. Changing the primary_reselect policy through sysfs will cause an immediate selection of the best active port according to the new policy. This may or may not result in a change of the active port, depending upon the circumstances resend_igmp= range Specifies the number of IGMP membership reports to be issued after a failover event. One membership report is issued immediately after the failover, subsequent packets are sent in each 200ms interval. The valid range is 0 to 255 ; the default value is 1 . A value of 0 prevents the IGMP membership report from being issued in response to the failover event. This option is useful for bonding modes balance-rr (mode 0), active-backup (mode 1), balance-tlb (mode 5) and balance-alb (mode 6), in which a failover can switch the IGMP traffic from one port to another. Therefore a fresh IGMP report must be issued to cause the switch to forward the incoming IGMP traffic over the newly selected port. updelay= time_in_milliseconds Specifies (in milliseconds) how long to wait before enabling a link. The value must be a multiple of the value specified in the miimon parameter. The value is set to 0 by default, which disables it. use_carrier= number Specifies whether or not miimon should use MII/ETHTOOL ioctls or netif_carrier_ok() to determine the link state. 
The netif_carrier_ok() function relies on the device driver to maintains its state with netif_carrier_ on/off ; most device drivers support this function. The MII/ETHTOOL ioctls tools utilize a deprecated calling sequence within the kernel. However, this is still configurable in case your device driver does not support netif_carrier_ on/off . Valid values are: 1 - Default setting. Enables the use of netif_carrier_ok() . 0 - Enables the use of MII/ETHTOOL ioctls. Note If the bonding interface insists that the link is up when it should not be, it is possible that your network device driver does not support netif_carrier_ on/off . xmit_hash_policy= value Selects the transmit hash policy used for port selection in balance-xor and 802.3ad modes. Possible values are: 0 or layer2 - Default setting. This parameter uses the XOR of hardware MAC addresses to generate the hash. The formula used is: This algorithm will place all traffic to a particular network peer on the same port, and is 802.3ad compliant. 1 or layer3+4 - Uses upper layer protocol information (when available) to generate the hash. This allows for traffic to a particular network peer to span multiple ports, although a single connection will not span multiple ports. The formula for unfragmented TCP and UDP packets used is: For fragmented TCP or UDP packets and all other IP protocol traffic, the source and destination port information is omitted. For non- IP traffic, the formula is the same as the layer2 transmit hash policy. This policy intends to mimic the behavior of certain switches; particularly, Cisco switches with PFC2 as well as some Foundry and IBM products. The algorithm used by this policy is not 802.3ad compliant. 2 or layer2+3 - Uses a combination of layer2 and layer3 protocol information to generate the hash. Uses XOR of hardware MAC addresses and IP addresses to generate the hash. The formula is: This algorithm will place all traffic to a particular network peer on the same port. For non- IP traffic, the formula is the same as for the layer2 transmit hash policy. This policy is intended to provide a more balanced distribution of traffic than layer2 alone, especially in environments where a layer3 gateway device is required to reach most destinations. This algorithm is 802.3ad compliant. | [
"~]# ifup bond0",
"~]# ip link show 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 2: enp1s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP mode DEFAULT qlen 1000 link/ether 52:54:00:e9:ce:d2 brd ff:ff:ff:ff:ff:ff 3: enp2s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP mode DEFAULT qlen 1000 link/ether 52:54:00:38:a6:4c brd ff:ff:ff:ff:ff:ff 4: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT link/ether 52:54:00:38:a6:4c brd ff:ff:ff:ff:ff:ff",
"~]USD cat /sys/class/net/bonding_masters bond0",
"~]# ifdown bond0",
"~]# echo 1000 > /sys/class/net/bond0/bonding/miimon",
"~]# echo 6 > /sys/class/net/bond0/bonding/mode",
"~]# echo balance-alb > /sys/class/net/bond0/bonding/mode",
"~]# ethtool interface_name | grep \"Link detected:\"",
"Link detected: yes",
"( source_MAC_address XOR destination_MAC ) MODULO slave_count",
"(( source_port XOR dest_port ) XOR (( source_IP XOR dest_IP ) AND 0xffff ) MODULO slave_count",
"((( source_IP XOR dest_IP ) AND 0xffff ) XOR ( source_MAC XOR destination_MAC )) MODULO slave_count"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-using_channel_bonding |
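The parameter descriptions above pair with the sysfs test workflow shown at the start of this section. The following sketch pulls those commands together: it takes bond0 down, applies trial values through sysfs, brings the bond back up for testing, and notes the BONDING_OPTS line to persist once you are satisfied. The mode and miimon values here are examples only; substitute whichever combination tests best on your hardware. The /proc/net/bonding/bond0 file is the kernel's status view of the bond and is a convenient way to confirm the settings took effect.

    # Run as root. Test bonding parameters through sysfs before persisting them.
    ifdown bond0

    # Trial values: adaptive load balancing with MII monitoring every 100 ms.
    echo balance-alb > /sys/class/net/bond0/bonding/mode
    echo 100 > /sys/class/net/bond0/bonding/miimon

    ifup bond0
    cat /proc/net/bonding/bond0   # verify the active mode, MII status, and ports

    # Once satisfied, add the same values to the BONDING_OPTS directive in
    # /etc/sysconfig/network-scripts/ifcfg-bond0 so they apply at every ifup:
    #   BONDING_OPTS="mode=balance-alb miimon=100"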
Chapter 4. Deployments | Chapter 4. Deployments 4.1. Understanding Deployment and DeploymentConfig objects The Deployment and DeploymentConfig API objects in OpenShift Container Platform provide two similar but different methods for fine-grained management over common user applications. They are composed of the following separate API objects: A DeploymentConfig or Deployment object, either of which describes the desired state of a particular component of the application as a pod template. DeploymentConfig objects involve one or more replication controllers , which contain a point-in-time record of the state of a deployment as a pod template. Similarly, Deployment objects involve one or more replica sets , a successor of replication controllers. One or more pods, which represent an instance of a particular version of an application. 4.1.1. Building blocks of a deployment Deployments and deployment configs are enabled by the use of native Kubernetes API objects ReplicaSet and ReplicationController , respectively, as their building blocks. Users do not have to manipulate replication controllers, replica sets, or pods owned by DeploymentConfig objects or deployments. The deployment systems ensure changes are propagated appropriately. Tip If the existing deployment strategies are not suited for your use case and you must run manual steps during the lifecycle of your deployment, then you should consider creating a custom deployment strategy. The following sections provide further details on these objects. 4.1.1.1. Replication controllers A replication controller ensures that a specified number of replicas of a pod are running at all times. If pods exit or are deleted, the replication controller acts to instantiate more up to the defined number. Likewise, if there are more running than desired, it deletes as many as necessary to match the defined amount. A replication controller configuration consists of: The number of replicas desired, which can be adjusted at run time. A Pod definition to use when creating a replicated pod. A selector for identifying managed pods. A selector is a set of labels assigned to the pods that are managed by the replication controller. These labels are included in the Pod definition that the replication controller instantiates. The replication controller uses the selector to determine how many instances of the pod are already running in order to adjust as needed. The replication controller does not perform auto-scaling based on load or traffic, as it does not track either. Rather, this requires its replica count to be adjusted by an external auto-scaler. The following is an example definition of a replication controller: apiVersion: v1 kind: ReplicationController metadata: name: frontend-1 spec: replicas: 1 1 selector: 2 name: frontend template: 3 metadata: labels: 4 name: frontend 5 spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always 1 The number of copies of the pod to run. 2 The label selector of the pod to run. 3 A template for the pod the controller creates. 4 Labels on the pod should include those from the label selector. 5 The maximum name length after expanding any parameters is 63 characters. 4.1.1.2. Replica sets Similar to a replication controller, a ReplicaSet is a native Kubernetes API object that ensures a specified number of pod replicas are running at any given time. 
The difference between a replica set and a replication controller is that a replica set supports set-based selector requirements whereas a replication controller only supports equality-based selector requirements. Note Only use replica sets if you require custom update orchestration or do not require updates at all. Otherwise, use deployments. Replica sets can be used independently, but are used by deployments to orchestrate pod creation, deletion, and updates. Deployments manage their replica sets automatically, provide declarative updates to pods, and do not have to manually manage the replica sets that they create. The following is an example ReplicaSet definition: apiVersion: apps/v1 kind: ReplicaSet metadata: name: frontend-1 labels: tier: frontend spec: replicas: 3 selector: 1 matchLabels: 2 tier: frontend matchExpressions: 3 - {key: tier, operator: In, values: [frontend]} template: metadata: labels: tier: frontend spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always 1 A label query over a set of resources. The result of matchLabels and matchExpressions are logically conjoined. 2 Equality-based selector to specify resources with labels that match the selector. 3 Set-based selector to filter keys. This selects all resources with key equal to tier and value equal to frontend . 4.1.2. DeploymentConfig objects Building on replication controllers, OpenShift Container Platform adds expanded support for the software development and deployment lifecycle with the concept of DeploymentConfig objects. In the simplest case, a DeploymentConfig object creates a new replication controller and lets it start up pods. However, OpenShift Container Platform deployments from DeploymentConfig objects also provide the ability to transition from an existing deployment of an image to a new one and also define hooks to be run before or after creating the replication controller. The DeploymentConfig deployment system provides the following capabilities: A DeploymentConfig object, which is a template for running applications. Triggers that drive automated deployments in response to events. User-customizable deployment strategies to transition from the version to the new version. A strategy runs inside a pod commonly referred as the deployment process. A set of hooks (lifecycle hooks) for executing custom behavior in different points during the lifecycle of a deployment. Versioning of your application to support rollbacks either manually or automatically in case of deployment failure. Manual replication scaling and autoscaling. When you create a DeploymentConfig object, a replication controller is created representing the DeploymentConfig object's pod template. If the deployment changes, a new replication controller is created with the latest pod template, and a deployment process runs to scale down the old replication controller and scale up the new one. Instances of your application are automatically added and removed from both service load balancers and routers as they are created. As long as your application supports graceful shutdown when it receives the TERM signal, you can ensure that running user connections are given a chance to complete normally. The OpenShift Container Platform DeploymentConfig object defines the following details: The elements of a ReplicationController definition. Triggers for creating a new deployment automatically. The strategy for transitioning between deployments. Lifecycle hooks. 
Each time a deployment is triggered, whether manually or automatically, a deployer pod manages the deployment (including scaling down the old replication controller, scaling up the new one, and running hooks). The deployment pod remains for an indefinite amount of time after it completes the deployment to retain its logs of the deployment. When a deployment is superseded by another, the replication controller is retained to enable easy rollback if needed. Example DeploymentConfig definition apiVersion: apps.openshift.io/v1 kind: DeploymentConfig metadata: name: frontend spec: replicas: 5 selector: name: frontend template: { ... } triggers: - type: ConfigChange 1 - imageChangeParams: automatic: true containerNames: - helloworld from: kind: ImageStreamTag name: hello-openshift:latest type: ImageChange 2 strategy: type: Rolling 3 1 A configuration change trigger results in a new replication controller whenever changes are detected in the pod template of the deployment configuration. 2 An image change trigger causes a new deployment to be created each time a new version of the backing image is available in the named image stream. 3 The default Rolling strategy makes a downtime-free transition between deployments. 4.1.3. Deployments Kubernetes provides a first-class, native API object type in OpenShift Container Platform called Deployment . Deployment objects serve as a descendant of the OpenShift Container Platform-specific DeploymentConfig object. Like DeploymentConfig objects, Deployment objects describe the desired state of a particular component of an application as a pod template. Deployments create replica sets, which orchestrate pod lifecycles. For example, the following deployment definition creates a replica set to bring up one hello-openshift pod: Deployment definition apiVersion: apps/v1 kind: Deployment metadata: name: hello-openshift spec: replicas: 1 selector: matchLabels: app: hello-openshift template: metadata: labels: app: hello-openshift spec: containers: - name: hello-openshift image: openshift/hello-openshift:latest ports: - containerPort: 80 4.1.4. Comparing Deployment and DeploymentConfig objects Both Kubernetes Deployment objects and OpenShift Container Platform-provided DeploymentConfig objects are supported in OpenShift Container Platform; however, it is recommended to use Deployment objects unless you need a specific feature or behavior provided by DeploymentConfig objects. The following sections go into more detail on the differences between the two object types to further help you decide which type to use. 4.1.4.1. Design One important difference between Deployment and DeploymentConfig objects is the properties of the CAP theorem that each design has chosen for the rollout process. DeploymentConfig objects prefer consistency, whereas Deployments objects take availability over consistency. For DeploymentConfig objects, if a node running a deployer pod goes down, it will not get replaced. The process waits until the node comes back online or is manually deleted. Manually deleting the node also deletes the corresponding pod. This means that you can not delete the pod to unstick the rollout, as the kubelet is responsible for deleting the associated pod. However, deployment rollouts are driven from a controller manager. The controller manager runs in high availability mode on masters and uses leader election algorithms to value availability over consistency. 
During a failure it is possible for other masters to act on the same deployment at the same time, but this issue will be reconciled shortly after the failure occurs. 4.1.4.2. DeploymentConfig object-specific features Automatic rollbacks Currently, deployments do not support automatically rolling back to the last successfully deployed replica set in case of a failure. Triggers Deployments have an implicit config change trigger in that every change in the pod template of a deployment automatically triggers a new rollout. If you do not want new rollouts on pod template changes, pause the deployment: USD oc rollout pause deployments/<name> Lifecycle hooks Deployments do not yet support any lifecycle hooks. Custom strategies Deployments do not support user-specified custom deployment strategies yet. 4.1.4.3. Deployment-specific features Rollover The deployment process for Deployment objects is driven by a controller loop, in contrast to DeploymentConfig objects which use deployer pods for every new rollout. This means that the Deployment object can have as many active replica sets as possible, and eventually the deployment controller will scale down all old replica sets and scale up the newest one. DeploymentConfig objects can have at most one deployer pod running, otherwise multiple deployers end up conflicting while trying to scale up what they think should be the newest replication controller. Because of this, only two replication controllers can be active at any point in time. Ultimately, this translates to faster rapid rollouts for Deployment objects. Proportional scaling Because the deployment controller is the sole source of truth for the sizes of new and old replica sets owned by a Deployment object, it is able to scale ongoing rollouts. Additional replicas are distributed proportionally based on the size of each replica set. DeploymentConfig objects cannot be scaled when a rollout is ongoing because the controller will end up having issues with the deployer process about the size of the new replication controller. Pausing mid-rollout Deployments can be paused at any point in time, meaning you can also pause ongoing rollouts. On the other hand, you cannot pause deployer pods currently, so if you try to pause a deployment in the middle of a rollout, the deployer process will not be affected and will continue until it finishes. 4.2. Managing deployment processes 4.2.1. Managing DeploymentConfig objects DeploymentConfig objects can be managed from the OpenShift Container Platform web console's Workloads page or using the oc CLI. The following procedures show CLI usage unless otherwise stated. 4.2.1.1. Starting a deployment You can start a rollout to begin the deployment process of your application. Procedure To start a new deployment process from an existing DeploymentConfig object, run the following command: USD oc rollout latest dc/<name> Note If a deployment process is already in progress, the command displays a message and a new replication controller will not be deployed. 4.2.1.2. Viewing a deployment You can view a deployment to get basic information about all the available revisions of your application. 
Procedure To show details about all recently created replication controllers for the provided DeploymentConfig object, including any currently running deployment process, run the following command: USD oc rollout history dc/<name> To view details specific to a revision, add the --revision flag: USD oc rollout history dc/<name> --revision=1 For more detailed information about a DeploymentConfig object and its latest revision, use the oc describe command: USD oc describe dc <name> 4.2.1.3. Retrying a deployment If the current revision of your DeploymentConfig object failed to deploy, you can restart the deployment process. Procedure To restart a failed deployment process: USD oc rollout retry dc/<name> If the latest revision of it was deployed successfully, the command displays a message and the deployment process is not retried. Note Retrying a deployment restarts the deployment process and does not create a new deployment revision. The restarted replication controller has the same configuration it had when it failed. 4.2.1.4. Rolling back a deployment Rollbacks revert an application back to a revision and can be performed using the REST API, the CLI, or the web console. Procedure To rollback to the last successful deployed revision of your configuration: USD oc rollout undo dc/<name> The DeploymentConfig object's template is reverted to match the deployment revision specified in the undo command, and a new replication controller is started. If no revision is specified with --to-revision , then the last successfully deployed revision is used. Image change triggers on the DeploymentConfig object are disabled as part of the rollback to prevent accidentally starting a new deployment process soon after the rollback is complete. To re-enable the image change triggers: USD oc set triggers dc/<name> --auto Note Deployment configs also support automatically rolling back to the last successful revision of the configuration in case the latest deployment process fails. In that case, the latest template that failed to deploy stays intact by the system and it is up to users to fix their configurations. 4.2.1.5. Executing commands inside a container You can add a command to a container, which modifies the container's startup behavior by overruling the image's ENTRYPOINT . This is different from a lifecycle hook, which instead can be run once per deployment at a specified time. Procedure Add the command parameters to the spec field of the DeploymentConfig object. You can also add an args field, which modifies the command (or the ENTRYPOINT if command does not exist). spec: containers: - name: <container_name> image: 'image' command: - '<command>' args: - '<argument_1>' - '<argument_2>' - '<argument_3>' For example, to execute the java command with the -jar and /opt/app-root/springboots2idemo.jar arguments: spec: containers: - name: example-spring-boot image: 'image' command: - java args: - '-jar' - /opt/app-root/springboots2idemo.jar 4.2.1.6. Viewing deployment logs Procedure To stream the logs of the latest revision for a given DeploymentConfig object: USD oc logs -f dc/<name> If the latest revision is running or failed, the command returns the logs of the process that is responsible for deploying your pods. If it is successful, it returns the logs from a pod of your application. 
You can also view logs from older failed deployment processes, if and only if these processes (old replication controllers and their deployer pods) exist and have not been pruned or deleted manually: USD oc logs --version=1 dc/<name> 4.2.1.7. Deployment triggers A DeploymentConfig object can contain triggers, which drive the creation of new deployment processes in response to events inside the cluster. Warning If no triggers are defined on a DeploymentConfig object, a config change trigger is added by default. If triggers are defined as an empty field, deployments must be started manually. Config change deployment triggers The config change trigger results in a new replication controller whenever configuration changes are detected in the pod template of the DeploymentConfig object. Note If a config change trigger is defined on a DeploymentConfig object, the first replication controller is automatically created soon after the DeploymentConfig object itself is created and it is not paused. Config change deployment trigger triggers: - type: "ConfigChange" Image change deployment triggers The image change trigger results in a new replication controller whenever the content of an image stream tag changes (when a new version of the image is pushed). Image change deployment trigger triggers: - type: "ImageChange" imageChangeParams: automatic: true 1 from: kind: "ImageStreamTag" name: "origin-ruby-sample:latest" namespace: "myproject" containerNames: - "helloworld" 1 If the imageChangeParams.automatic field is set to false , the trigger is disabled. With the above example, when the latest tag value of the origin-ruby-sample image stream changes and the new image value differs from the current image specified in the DeploymentConfig object's helloworld container, a new replication controller is created using the new image for the helloworld container. Note If an image change trigger is defined on a DeploymentConfig object (with a config change trigger and automatic=false , or with automatic=true ) and the image stream tag pointed by the image change trigger does not exist yet, the initial deployment process will automatically start as soon as an image is imported or pushed by a build to the image stream tag. 4.2.1.7.1. Setting deployment triggers Procedure You can set deployment triggers for a DeploymentConfig object using the oc set triggers command. For example, to set a image change trigger, use the following command: USD oc set triggers dc/<dc_name> \ --from-image=<project>/<image>:<tag> -c <container_name> 4.2.1.8. Setting deployment resources A deployment is completed by a pod that consumes resources (memory, CPU, and ephemeral storage) on a node. By default, pods consume unbounded node resources. However, if a project specifies default container limits, then pods consume resources up to those limits. Note The minimum memory limit for a deployment is 12 MB. If a container fails to start due to a Cannot allocate memory pod event, the memory limit is too low. Either increase or remove the memory limit. Removing the limit allows pods to consume unbounded node resources. You can also limit resource use by specifying resource limits as part of the deployment strategy. Deployment resources can be used with the recreate, rolling, or custom deployment strategies. 
Procedure In the following example, each of resources , cpu , memory , and ephemeral-storage is optional: type: "Recreate" resources: limits: cpu: "100m" 1 memory: "256Mi" 2 ephemeral-storage: "1Gi" 3 1 cpu is in CPU units: 100m represents 0.1 CPU units (100 * 1e-3). 2 memory is in bytes: 256Mi represents 268435456 bytes (256 * 2 ^ 20). 3 ephemeral-storage is in bytes: 1Gi represents 1073741824 bytes (2 ^ 30). However, if a quota has been defined for your project, one of the following two items is required: A resources section set with an explicit requests : type: "Recreate" resources: requests: 1 cpu: "100m" memory: "256Mi" ephemeral-storage: "1Gi" 1 The requests object contains the list of resources that correspond to the list of resources in the quota. A limit range defined in your project, where the defaults from the LimitRange object apply to pods created during the deployment process. To set deployment resources, choose one of the above options. Otherwise, deploy pod creation fails, citing a failure to satisfy quota. Additional resources For more information about resource limits and requests, see Understanding managing application memory . 4.2.1.9. Scaling manually In addition to rollbacks, you can exercise fine-grained control over the number of replicas by manually scaling them. Note Pods can also be auto-scaled using the oc autoscale command. Procedure To manually scale a DeploymentConfig object, use the oc scale command. For example, the following command sets the replicas in the frontend DeploymentConfig object to 3 . USD oc scale dc frontend --replicas=3 The number of replicas eventually propagates to the desired and current state of the deployment configured by the DeploymentConfig object frontend . 4.2.1.10. Accessing private repositories from DeploymentConfig objects You can add a secret to your DeploymentConfig object so that it can access images from a private repository. This procedure shows the OpenShift Container Platform web console method. Procedure Create a new project. From the Workloads page, create a secret that contains credentials for accessing a private image repository. Create a DeploymentConfig object. On the DeploymentConfig object editor page, set the Pull Secret and save your changes. 4.2.1.11. Assigning pods to specific nodes You can use node selectors in conjunction with labeled nodes to control pod placement. Cluster administrators can set the default node selector for a project in order to restrict pod placement to specific nodes. As a developer, you can set a node selector on a Pod configuration to restrict nodes even further. Procedure To add a node selector when creating a pod, edit the Pod configuration, and add the nodeSelector value. This can be added to a single Pod configuration, or in a Pod template: apiVersion: v1 kind: Pod spec: nodeSelector: disktype: ssd ... Pods created when the node selector is in place are assigned to nodes with the specified labels. The labels specified here are used in conjunction with the labels added by a cluster administrator. For example, if a project has the type=user-node and region=east labels added to a project by the cluster administrator, and you add the above disktype: ssd label to a pod, the pod is only ever scheduled on nodes that have all three labels. Note Labels can only be set to one value, so setting a node selector of region=west in a Pod configuration that has region=east as the administrator-set default, results in a pod that will never be scheduled. 4.2.1.12. 
Running a pod with a different service account You can run a pod with a service account other than the default. Procedure Edit the DeploymentConfig object: USD oc edit dc/<deployment_config> Add the serviceAccount and serviceAccountName parameters to the spec field, and specify the service account you want to use: spec: securityContext: {} serviceAccount: <service_account> serviceAccountName: <service_account> 4.3. Using deployment strategies A deployment strategy is a way to change or upgrade an application. The aim is to make the change without downtime in a way that the user barely notices the improvements. Because the end user usually accesses the application through a route handled by a router, the deployment strategy can focus on DeploymentConfig object features or routing features. Strategies that focus on the deployment impact all routes that use the application. Strategies that use router features target individual routes. Many deployment strategies are supported through the DeploymentConfig object, and some additional strategies are supported through router features. Deployment strategies are discussed in this section. Choosing a deployment strategy Consider the following when choosing a deployment strategy: Long-running connections must be handled gracefully. Database conversions can be complex and must be done and rolled back along with the application. If the application is a hybrid of microservices and traditional components, downtime might be required to complete the transition. You must have the infrastructure to do this. If you have a non-isolated test environment, you can break both new and old versions. A deployment strategy uses readiness checks to determine if a new pod is ready for use. If a readiness check fails, the DeploymentConfig object retries to run the pod until it times out. The default timeout is 10m , a value set in TimeoutSeconds in dc.spec.strategy.*params . 4.3.1. Rolling strategy A rolling deployment slowly replaces instances of the version of an application with instances of the new version of the application. The rolling strategy is the default deployment strategy used if no strategy is specified on a DeploymentConfig object. A rolling deployment typically waits for new pods to become ready via a readiness check before scaling down the old components. If a significant issue occurs, the rolling deployment can be aborted. When to use a rolling deployment: When you want to take no downtime during an application update. When your application supports having old code and new code running at the same time. A rolling deployment means you have both old and new versions of your code running at the same time. This typically requires that your application handle N-1 compatibility. Example rolling strategy definition strategy: type: Rolling rollingParams: updatePeriodSeconds: 1 1 intervalSeconds: 1 2 timeoutSeconds: 120 3 maxSurge: "20%" 4 maxUnavailable: "10%" 5 pre: {} 6 post: {} 1 The time to wait between individual pod updates. If unspecified, this value defaults to 1 . 2 The time to wait between polling the deployment status after update. If unspecified, this value defaults to 1 . 3 The time to wait for a scaling event before giving up. Optional; the default is 600 . Here, giving up means automatically rolling back to the complete deployment. 4 maxSurge is optional and defaults to 25% if not specified. See the information below the following procedure. 5 maxUnavailable is optional and defaults to 25% if not specified. 
See the information below the following procedure. 6 pre and post are both lifecycle hooks. The rolling strategy: Executes any pre lifecycle hook. Scales up the new replication controller based on the surge count. Scales down the old replication controller based on the max unavailable count. Repeats this scaling until the new replication controller has reached the desired replica count and the old replication controller has been scaled to zero. Executes any post lifecycle hook. Important When scaling down, the rolling strategy waits for pods to become ready so it can decide whether further scaling would affect availability. If scaled up pods never become ready, the deployment process will eventually time out and result in a deployment failure. The maxUnavailable parameter is the maximum number of pods that can be unavailable during the update. The maxSurge parameter is the maximum number of pods that can be scheduled above the original number of pods. Both parameters can be set to either a percentage (e.g., 10% ) or an absolute value (e.g., 2 ). The default value for both is 25% . These parameters allow the deployment to be tuned for availability and speed. For example: maxUnavailable*=0 and maxSurge*=20% ensures full capacity is maintained during the update and rapid scale up. maxUnavailable*=10% and maxSurge*=0 performs an update using no extra capacity (an in-place update). maxUnavailable*=10% and maxSurge*=10% scales up and down quickly with some potential for capacity loss. Generally, if you want fast rollouts, use maxSurge . If you have to take into account resource quota and can accept partial unavailability, use maxUnavailable . 4.3.1.1. Canary deployments All rolling deployments in OpenShift Container Platform are canary deployments ; a new version (the canary) is tested before all of the old instances are replaced. If the readiness check never succeeds, the canary instance is removed and the DeploymentConfig object will be automatically rolled back. The readiness check is part of the application code and can be as sophisticated as necessary to ensure the new instance is ready to be used. If you must implement more complex checks of the application (such as sending real user workloads to the new instance), consider implementing a custom deployment or using a blue-green deployment strategy. 4.3.1.2. Creating a rolling deployment Rolling deployments are the default type in OpenShift Container Platform. You can create a rolling deployment using the CLI. Procedure Create an application based on the example deployment images found in Quay.io : USD oc new-app quay.io/openshifttest/deployment-example:latest If you have the router installed, make the application available via a route or use the service IP directly. USD oc expose svc/deployment-example Browse to the application at deployment-example.<project>.<router_domain> to verify you see the v1 image. Scale the DeploymentConfig object up to three replicas: USD oc scale dc/deployment-example --replicas=3 Trigger a new deployment automatically by tagging a new version of the example as the latest tag: USD oc tag deployment-example:v2 deployment-example:latest In your browser, refresh the page until you see the v2 image. When using the CLI, the following command shows how many pods are on version 1 and how many are on version 2. In the web console, the pods are progressively added to v2 and removed from v1: USD oc describe dc deployment-example During the deployment process, the new replication controller is incrementally scaled up. 
After the new pods are marked as ready (by passing their readiness check), the deployment process continues. If the pods do not become ready, the process aborts, and the deployment rolls back to its version. 4.3.1.3. Starting a rolling deployment using the Developer perspective Prerequisites Ensure that you are in the Developer perspective of the web console. Ensure that you have created an application using the Add view and see it deployed in the Topology view. Procedure To start a rolling deployment to upgrade an application: In the Topology view of the Developer perspective, click on the application node to see the Overview tab in the side panel. Note that the Update Strategy is set to the default Rolling strategy. In the Actions drop-down menu, select Start Rollout to start a rolling update. The rolling deployment spins up the new version of the application and then terminates the old one. Figure 4.1. Rolling update Additional resources Creating and deploying applications on OpenShift Container Platform using the Developer perspective Viewing the applications in your project, verifying their deployment status, and interacting with them in the Topology view 4.3.2. Recreate strategy The recreate strategy has basic rollout behavior and supports lifecycle hooks for injecting code into the deployment process. Example recreate strategy definition strategy: type: Recreate recreateParams: 1 pre: {} 2 mid: {} post: {} 1 recreateParams are optional. 2 pre , mid , and post are lifecycle hooks. The recreate strategy: Executes any pre lifecycle hook. Scales down the deployment to zero. Executes any mid lifecycle hook. Scales up the new deployment. Executes any post lifecycle hook. Important During scale up, if the replica count of the deployment is greater than one, the first replica of the deployment will be validated for readiness before fully scaling up the deployment. If the validation of the first replica fails, the deployment will be considered a failure. When to use a recreate deployment: When you must run migrations or other data transformations before your new code starts. When you do not support having new and old versions of your application code running at the same time. When you want to use a RWO volume, which is not supported being shared between multiple replicas. A recreate deployment incurs downtime because, for a brief period, no instances of your application are running. However, your old code and new code do not run at the same time. 4.3.3. Starting a recreate deployment using the Developer perspective You can switch the deployment strategy from the default rolling update to a recreate update using the Developer perspective in the web console. Prerequisites Ensure that you are in the Developer perspective of the web console. Ensure that you have created an application using the Add view and see it deployed in the Topology view. Procedure To switch to a recreate update strategy and to upgrade an application: In the Actions drop-down menu, select Edit Deployment Config to see the deployment configuration details of the application. In the YAML editor, change the spec.strategy.type to Recreate and click Save . In the Topology view, select the node to see the Overview tab in the side panel. The Update Strategy is now set to Recreate . Use the Actions drop-down menu to select Start Rollout to start an update using the recreate strategy. The recreate strategy first terminates pods for the older version of the application and then spins up pods for the new version. Figure 4.2. 
Recreate update Additional resources Creating and deploying applications on OpenShift Container Platform using the Developer perspective Viewing the applications in your project, verifying their deployment status, and interacting with them in the Topology view 4.3.4. Custom strategy The custom strategy allows you to provide your own deployment behavior. Example custom strategy definition strategy: type: Custom customParams: image: organization/strategy command: [ "command", "arg1" ] environment: - name: ENV_1 value: VALUE_1 In the above example, the organization/strategy container image provides the deployment behavior. The optional command array overrides any CMD directive specified in the image's Dockerfile . The optional environment variables provided are added to the execution environment of the strategy process. Additionally, OpenShift Container Platform provides the following environment variables to the deployment process: Environment variable Description OPENSHIFT_DEPLOYMENT_NAME The name of the new deployment, a replication controller. OPENSHIFT_DEPLOYMENT_NAMESPACE The name space of the new deployment. The replica count of the new deployment will initially be zero. The responsibility of the strategy is to make the new deployment active using the logic that best serves the needs of the user. Alternatively, use the customParams object to inject the custom deployment logic into the existing deployment strategies. Provide a custom shell script logic and call the openshift-deploy binary. Users do not have to supply their custom deployer container image; in this case, the default OpenShift Container Platform deployer image is used instead: strategy: type: Rolling customParams: command: - /bin/sh - -c - | set -e openshift-deploy --until=50% echo Halfway there openshift-deploy echo Complete This results in following deployment: Started deployment #2 --> Scaling up custom-deployment-2 from 0 to 2, scaling down custom-deployment-1 from 2 to 0 (keep 2 pods available, don't exceed 3 pods) Scaling custom-deployment-2 up to 1 --> Reached 50% (currently 50%) Halfway there --> Scaling up custom-deployment-2 from 1 to 2, scaling down custom-deployment-1 from 2 to 0 (keep 2 pods available, don't exceed 3 pods) Scaling custom-deployment-1 down to 1 Scaling custom-deployment-2 up to 2 Scaling custom-deployment-1 down to 0 --> Success Complete If the custom deployment strategy process requires access to the OpenShift Container Platform API or the Kubernetes API the container that executes the strategy can use the service account token available inside the container for authentication. 4.3.5. Lifecycle hooks The rolling and recreate strategies support lifecycle hooks , or deployment hooks, which allow behavior to be injected into the deployment process at predefined points within the strategy: Example pre lifecycle hook pre: failurePolicy: Abort execNewPod: {} 1 1 execNewPod is a pod-based lifecycle hook. Every hook has a failure policy , which defines the action the strategy should take when a hook failure is encountered: Abort The deployment process will be considered a failure if the hook fails. Retry The hook execution should be retried until it succeeds. Ignore Any hook failure should be ignored and the deployment should proceed. Hooks have a type-specific field that describes how to execute the hook. Currently, pod-based hooks are the only supported hook type, specified by the execNewPod field. 
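As a small illustration of the failure policies described above, a post hook that keeps retrying until it succeeds could be declared as follows (a sketch only; the pod-based execNewPod details are covered next):
post:
  failurePolicy: Retry
  execNewPod: {}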
Pod-based lifecycle hook Pod-based lifecycle hooks execute hook code in a new pod derived from the template in a DeploymentConfig object. The following simplified example deployment uses the rolling strategy. Triggers and some other minor details are omitted for brevity: kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: frontend spec: template: metadata: labels: name: frontend spec: containers: - name: helloworld image: openshift/origin-ruby-sample replicas: 5 selector: name: frontend strategy: type: Rolling rollingParams: pre: failurePolicy: Abort execNewPod: containerName: helloworld 1 command: [ "/usr/bin/command", "arg1", "arg2" ] 2 env: 3 - name: CUSTOM_VAR1 value: custom_value1 volumes: - data 4 1 The helloworld name refers to spec.template.spec.containers[0].name . 2 This command overrides any ENTRYPOINT defined by the openshift/origin-ruby-sample image. 3 env is an optional set of environment variables for the hook container. 4 volumes is an optional set of volume references for the hook container. In this example, the pre hook will be executed in a new pod using the openshift/origin-ruby-sample image from the helloworld container. The hook pod has the following properties: The hook command is /usr/bin/command arg1 arg2 . The hook container has the CUSTOM_VAR1=custom_value1 environment variable. The hook failure policy is Abort , meaning the deployment process fails if the hook fails. The hook pod inherits the data volume from the DeploymentConfig object pod. 4.3.5.1. Setting lifecycle hooks You can set lifecycle hooks, or deployment hooks, for a deployment using the CLI. Procedure Use the oc set deployment-hook command to set the type of hook you want: --pre , --mid , or --post . For example, to set a pre-deployment hook: USD oc set deployment-hook dc/frontend \ --pre -c helloworld -e CUSTOM_VAR1=custom_value1 \ --volumes data --failure-policy=abort -- /usr/bin/command arg1 arg2 4.4. Using route-based deployment strategies Deployment strategies provide a way for the application to evolve. Some strategies use Deployment objects to make changes that are seen by users of all routes that resolve to the application. Other advanced strategies, such as the ones described in this section, use router features in conjunction with Deployment objects to impact specific routes. The most common route-based strategy is to use a blue-green deployment . The new version (the green version) is brought up for testing and evaluation, while the users still use the stable version (the blue version). When ready, the users are switched to the green version. If a problem arises, you can switch back to the blue version. A common alternative strategy is to use A/B versions that are both active at the same time and some users use one version, and some users use the other version. This can be used for experimenting with user interface changes and other features to get user feedback. It can also be used to verify proper operation in a production context where problems impact a limited number of users. A canary deployment tests the new version but when a problem is detected it quickly falls back to the previous version. This can be done with both of the above strategies. The route-based deployment strategies do not scale the number of pods in the services. To maintain desired performance characteristics, the deployment configurations might have to be scaled. 4.4.1.
Proxy shards and traffic splitting In production environments, you can precisely control the distribution of traffic that lands on a particular shard. When dealing with large numbers of instances, you can use the relative scale of individual shards to implement percentage based traffic. That combines well with a proxy shard , which forwards or splits the traffic it receives to a separate service or application running elsewhere. In the simplest configuration, the proxy forwards requests unchanged. In more complex setups, you can duplicate the incoming requests and send them to both a separate cluster as well as to a local instance of the application, and compare the result. Other patterns include keeping the caches of a DR installation warm, or sampling incoming traffic for analysis purposes. Any TCP (or UDP) proxy could be run under the desired shard. Use the oc scale command to alter the relative number of instances serving requests under the proxy shard. For more complex traffic management, consider customizing the OpenShift Container Platform router with proportional balancing capabilities. 4.4.2. N-1 compatibility Applications that have new code and old code running at the same time must be careful to ensure that data written by the new code can be read and handled (or gracefully ignored) by the old version of the code. This is sometimes called schema evolution and is a complex problem. This can take many forms: data stored on disk, in a database, in a temporary cache, or that is part of a user's browser session. While most web applications can support rolling deployments, it is important to test and design your application to handle it. For some applications, the period of time that old code and new code is running side by side is short, so bugs or some failed user transactions are acceptable. For others, the failure pattern may result in the entire application becoming non-functional. One way to validate N-1 compatibility is to use an A/B deployment: run the old code and new code at the same time in a controlled way in a test environment, and verify that traffic that flows to the new deployment does not cause failures in the old deployment. 4.4.3. Graceful termination OpenShift Container Platform and Kubernetes give application instances time to shut down before removing them from load balancing rotations. However, applications must ensure they cleanly terminate user connections as well before they exit. On shutdown, OpenShift Container Platform sends a TERM signal to the processes in the container. Application code, on receiving SIGTERM , should stop accepting new connections. This ensures that load balancers route traffic to other active instances. The application code then waits until all open connections are closed, or gracefully terminates individual connections at the next opportunity, before exiting. After the graceful termination period expires, a process that has not exited is sent the KILL signal, which immediately ends the process. The terminationGracePeriodSeconds attribute of a pod or pod template controls the graceful termination period (default 30 seconds) and can be customized per application as necessary. 4.4.4. Blue-green deployments Blue-green deployments involve running two versions of an application at the same time and moving traffic from the in-production version (the blue version) to the newer version (the green version). You can use a rolling strategy or switch services in a route.
Because many applications depend on persistent data, you must have an application that supports N-1 compatibility , which means it shares data and implements live migration between the database, store, or disk by creating two copies of the data layer. Consider the data used in testing the new version. If it is the production data, a bug in the new version can break the production version. 4.4.4.1. Setting up a blue-green deployment Blue-green deployments use two Deployment objects. Both are running, and the one in production depends on the service the route specifies, with each Deployment object exposed to a different service. Note Routes are intended for web (HTTP and HTTPS) traffic, so this technique is best suited for web applications. You can create a new route to the new version and test it. When ready, change the service in the production route to point to the new service and the new (green) version is live. If necessary, you can roll back to the older (blue) version by switching the service back to the previous version. Procedure Create two independent application components. Create a copy of the example application running the v1 image under the example-blue service: USD oc new-app openshift/deployment-example:v1 --name=example-blue Create a second copy that uses the v2 image under the example-green service: USD oc new-app openshift/deployment-example:v2 --name=example-green Create a route that points to the old service: USD oc expose svc/example-blue --name=bluegreen-example Browse to the application at bluegreen-example-<project>.<router_domain> to verify you see the v1 image. Edit the route and change the service name to example-green : USD oc patch route/bluegreen-example -p '{"spec":{"to":{"name":"example-green"}}}' To verify that the route has changed, refresh the browser until you see the v2 image. 4.4.5. A/B deployments The A/B deployment strategy lets you try a new version of the application in a limited way in the production environment. You can specify that the production version gets most of the user requests while a limited fraction of requests go to the new version. Because you control the portion of requests to each version, as testing progresses you can increase the fraction of requests to the new version and ultimately stop using the previous version. As you adjust the request load on each version, the number of pods in each service might have to be scaled as well to provide the expected performance. In addition to upgrading software, you can use this feature to experiment with versions of the user interface. Since some users get the old version and some the new, you can evaluate the user's reaction to the different versions to inform design decisions. For this to be effective, both the old and new versions must be similar enough that both can run at the same time. This is common with bug fix releases and when new features do not interfere with the old. The versions require N-1 compatibility to properly work together. OpenShift Container Platform supports N-1 compatibility through the web console as well as the CLI. 4.4.5.1. Load balancing for A/B testing The user sets up a route with multiple services. Each service handles a version of the application. Each service is assigned a weight and the portion of requests to each service is the service_weight divided by the sum_of_weights . The weight for each service is distributed to the service's endpoints so that the sum of the endpoint weights is the service weight . The route can have up to four services.
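For example, with a primary service weight of 10 and a single alternate weight of 15 (the values used in the route example later in this section), the primary receives 10/(10+15), or 40%, of the requests and the alternate receives the remaining 60%.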
The weight for the service can be between 0 and 256 . When the weight is 0 , the service does not participate in load-balancing but continues to serve existing persistent connections. When the service weight is not 0 , each endpoint has a minimum weight of 1 . Because of this, a service with a lot of endpoints can end up with higher weight than intended. In this case, reduce the number of pods to get the expected load balance weight . Procedure To set up the A/B environment: Create the two applications and give them different names. Each creates a Deployment object. The applications are versions of the same program; one is usually the current production version and the other the proposed new version. Create the first application. The following example creates an application called ab-example-a : USD oc new-app openshift/deployment-example --name=ab-example-a Create the second application: USD oc new-app openshift/deployment-example:v2 --name=ab-example-b Both applications are deployed and services are created. Make the application available externally via a route. At this point, you can expose either. It can be convenient to expose the current production version first and later modify the route to add the new version. USD oc expose svc/ab-example-a Browse to the application at ab-example-a.<project>.<router_domain> to verify that you see the expected version. When you deploy the route, the router balances the traffic according to the weights specified for the services. At this point, there is a single service with default weight=1 so all requests go to it. Adding the other service as an alternateBackends and adjusting the weights brings the A/B setup to life. This can be done by the oc set route-backends command or by editing the route. Setting the oc set route-backend to 0 means the service does not participate in load-balancing, but continues to serve existing persistent connections. Note Changes to the route just change the portion of traffic to the various services. You might have to scale the deployment to adjust the number of pods to handle the anticipated loads. To edit the route, run: USD oc edit route <route_name> Example output ... metadata: name: route-alternate-service annotations: haproxy.router.openshift.io/balance: roundrobin spec: host: ab-example.my-project.my-domain to: kind: Service name: ab-example-a weight: 10 alternateBackends: - kind: Service name: ab-example-b weight: 15 ... 4.4.5.1.1. Managing weights of an existing route using the web console Procedure Navigate to the Networking Routes page. Click the Actions menu to the route you want to edit and select Edit Route . Edit the YAML file. Update the weight to be an integer between 0 and 256 that specifies the relative weight of the target against other target reference objects. The value 0 suppresses requests to this back end. The default is 100 . Run oc explain routes.spec.alternateBackends for more information about the options. Click Save . 4.4.5.1.2. Managing weights of an new route using the web console Navigate to the Networking Routes page. Click Create Route . Enter the route Name . Select the Service . Click Add Alternate Service . Enter a value for Weight and Alternate Service Weight . Enter a number between 0 and 255 that depicts relative weight compared with other targets. The default is 100 . Select the Target Port . Click Create . 4.4.5.1.3. 
Managing weights using the CLI Procedure To manage the services and corresponding weights load balanced by the route, use the oc set route-backends command: USD oc set route-backends ROUTENAME \ [--zero|--equal] [--adjust] SERVICE=WEIGHT[%] [...] [options] For example, the following sets ab-example-a as the primary service with weight=198 and ab-example-b as the first alternate service with a weight=2 : USD oc set route-backends ab-example ab-example-a=198 ab-example-b=2 This means 99% of traffic is sent to service ab-example-a and 1% to service ab-example-b . This command does not scale the deployment. You might be required to do so to have enough pods to handle the request load. Run the command with no flags to verify the current configuration: USD oc set route-backends ab-example Example output NAME KIND TO WEIGHT routes/ab-example Service ab-example-a 198 (99%) routes/ab-example Service ab-example-b 2 (1%) To alter the weight of an individual service relative to itself or to the primary service, use the --adjust flag. Specifying a percentage adjusts the service relative to either the primary or the first alternate (if you specify the primary). If there are other backends, their weights are kept proportional to the changed. The following example alters the weight of ab-example-a and ab-example-b services: USD oc set route-backends ab-example --adjust ab-example-a=200 ab-example-b=10 Alternatively, alter the weight of a service by specifying a percentage: USD oc set route-backends ab-example --adjust ab-example-b=5% By specifying + before the percentage declaration, you can adjust a weighting relative to the current setting. For example: USD oc set route-backends ab-example --adjust ab-example-b=+15% The --equal flag sets the weight of all services to 100 : USD oc set route-backends ab-example --equal The --zero flag sets the weight of all services to 0 . All requests then return with a 503 error. Note Not all routers may support multiple or weighted backends. 4.4.5.1.4. One service, multiple Deployment objects Procedure Create a new application, adding a label ab-example=true that will be common to all shards: USD oc new-app openshift/deployment-example --name=ab-example-a --as-deployment-config=true --labels=ab-example=true --env=SUBTITLE\=shardA USD oc delete svc/ab-example-a The application is deployed and a service is created. This is the first shard. Make the application available via a route, or use the service IP directly: USD oc expose deployment ab-example-a --name=ab-example --selector=ab-example\=true USD oc expose service ab-example Browse to the application at ab-example-<project_name>.<router_domain> to verify you see the v1 image. Create a second shard based on the same source image and label as the first shard, but with a different tagged version and unique environment variables: USD oc new-app openshift/deployment-example:v2 \ --name=ab-example-b --labels=ab-example=true \ SUBTITLE="shard B" COLOR="red" --as-deployment-config=true USD oc delete svc/ab-example-b At this point, both sets of pods are being served under the route. However, because both browsers (by leaving a connection open) and the router (by default, through a cookie) attempt to preserve your connection to a back-end server, you might not see both shards being returned to you. To force your browser to one or the other shard: Use the oc scale command to reduce replicas of ab-example-a to 0 . USD oc scale dc/ab-example-a --replicas=0 Refresh your browser to show v2 and shard B (in red). 
Scale ab-example-a to 1 replica and ab-example-b to 0 : USD oc scale dc/ab-example-a --replicas=1; oc scale dc/ab-example-b --replicas=0 Refresh your browser to show v1 and shard A (in blue). If you trigger a deployment on either shard, only the pods in that shard are affected. You can trigger a deployment by changing the SUBTITLE environment variable in either Deployment object: USD oc edit dc/ab-example-a or USD oc edit dc/ab-example-b | [
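If you prefer to avoid an interactive editor, the same kind of change can be made non-interactively with oc set env (a sketch; the new subtitle value here is arbitrary):
$ oc set env dc/ab-example-a SUBTITLE='shard A updated'
Updating the environment variable changes the pod template, which triggers a new deployment for that shard only.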
"apiVersion: v1 kind: ReplicationController metadata: name: frontend-1 spec: replicas: 1 1 selector: 2 name: frontend template: 3 metadata: labels: 4 name: frontend 5 spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always",
"apiVersion: apps/v1 kind: ReplicaSet metadata: name: frontend-1 labels: tier: frontend spec: replicas: 3 selector: 1 matchLabels: 2 tier: frontend matchExpressions: 3 - {key: tier, operator: In, values: [frontend]} template: metadata: labels: tier: frontend spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always",
"apiVersion: apps.openshift.io/v1 kind: DeploymentConfig metadata: name: frontend spec: replicas: 5 selector: name: frontend template: { ... } triggers: - type: ConfigChange 1 - imageChangeParams: automatic: true containerNames: - helloworld from: kind: ImageStreamTag name: hello-openshift:latest type: ImageChange 2 strategy: type: Rolling 3",
"apiVersion: apps/v1 kind: Deployment metadata: name: hello-openshift spec: replicas: 1 selector: matchLabels: app: hello-openshift template: metadata: labels: app: hello-openshift spec: containers: - name: hello-openshift image: openshift/hello-openshift:latest ports: - containerPort: 80",
"oc rollout pause deployments/<name>",
"oc rollout latest dc/<name>",
"oc rollout history dc/<name>",
"oc rollout history dc/<name> --revision=1",
"oc describe dc <name>",
"oc rollout retry dc/<name>",
"oc rollout undo dc/<name>",
"oc set triggers dc/<name> --auto",
"spec: containers: - name: <container_name> image: 'image' command: - '<command>' args: - '<argument_1>' - '<argument_2>' - '<argument_3>'",
"spec: containers: - name: example-spring-boot image: 'image' command: - java args: - '-jar' - /opt/app-root/springboots2idemo.jar",
"oc logs -f dc/<name>",
"oc logs --version=1 dc/<name>",
"triggers: - type: \"ConfigChange\"",
"triggers: - type: \"ImageChange\" imageChangeParams: automatic: true 1 from: kind: \"ImageStreamTag\" name: \"origin-ruby-sample:latest\" namespace: \"myproject\" containerNames: - \"helloworld\"",
"oc set triggers dc/<dc_name> --from-image=<project>/<image>:<tag> -c <container_name>",
"type: \"Recreate\" resources: limits: cpu: \"100m\" 1 memory: \"256Mi\" 2 ephemeral-storage: \"1Gi\" 3",
"type: \"Recreate\" resources: requests: 1 cpu: \"100m\" memory: \"256Mi\" ephemeral-storage: \"1Gi\"",
"oc scale dc frontend --replicas=3",
"apiVersion: v1 kind: Pod spec: nodeSelector: disktype: ssd",
"oc edit dc/<deployment_config>",
"spec: securityContext: {} serviceAccount: <service_account> serviceAccountName: <service_account>",
"strategy: type: Rolling rollingParams: updatePeriodSeconds: 1 1 intervalSeconds: 1 2 timeoutSeconds: 120 3 maxSurge: \"20%\" 4 maxUnavailable: \"10%\" 5 pre: {} 6 post: {}",
"oc new-app quay.io/openshifttest/deployment-example:latest",
"oc expose svc/deployment-example",
"oc scale dc/deployment-example --replicas=3",
"oc tag deployment-example:v2 deployment-example:latest",
"oc describe dc deployment-example",
"strategy: type: Recreate recreateParams: 1 pre: {} 2 mid: {} post: {}",
"strategy: type: Custom customParams: image: organization/strategy command: [ \"command\", \"arg1\" ] environment: - name: ENV_1 value: VALUE_1",
"strategy: type: Rolling customParams: command: - /bin/sh - -c - | set -e openshift-deploy --until=50% echo Halfway there openshift-deploy echo Complete",
"Started deployment #2 --> Scaling up custom-deployment-2 from 0 to 2, scaling down custom-deployment-1 from 2 to 0 (keep 2 pods available, don't exceed 3 pods) Scaling custom-deployment-2 up to 1 --> Reached 50% (currently 50%) Halfway there --> Scaling up custom-deployment-2 from 1 to 2, scaling down custom-deployment-1 from 2 to 0 (keep 2 pods available, don't exceed 3 pods) Scaling custom-deployment-1 down to 1 Scaling custom-deployment-2 up to 2 Scaling custom-deployment-1 down to 0 --> Success Complete",
"pre: failurePolicy: Abort execNewPod: {} 1",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: frontend spec: template: metadata: labels: name: frontend spec: containers: - name: helloworld image: openshift/origin-ruby-sample replicas: 5 selector: name: frontend strategy: type: Rolling rollingParams: pre: failurePolicy: Abort execNewPod: containerName: helloworld 1 command: [ \"/usr/bin/command\", \"arg1\", \"arg2\" ] 2 env: 3 - name: CUSTOM_VAR1 value: custom_value1 volumes: - data 4",
"oc set deployment-hook dc/frontend --pre -c helloworld -e CUSTOM_VAR1=custom_value1 --volumes data --failure-policy=abort -- /usr/bin/command arg1 arg2",
"oc new-app openshift/deployment-example:v1 --name=example-blue",
"oc new-app openshift/deployment-example:v2 --name=example-green",
"oc expose svc/example-blue --name=bluegreen-example",
"oc patch route/bluegreen-example -p '{\"spec\":{\"to\":{\"name\":\"example-green\"}}}'",
"oc new-app openshift/deployment-example --name=ab-example-a",
"oc new-app openshift/deployment-example:v2 --name=ab-example-b",
"oc expose svc/ab-example-a",
"oc edit route <route_name>",
"metadata: name: route-alternate-service annotations: haproxy.router.openshift.io/balance: roundrobin spec: host: ab-example.my-project.my-domain to: kind: Service name: ab-example-a weight: 10 alternateBackends: - kind: Service name: ab-example-b weight: 15",
"oc set route-backends ROUTENAME [--zero|--equal] [--adjust] SERVICE=WEIGHT[%] [...] [options]",
"oc set route-backends ab-example ab-example-a=198 ab-example-b=2",
"oc set route-backends ab-example",
"NAME KIND TO WEIGHT routes/ab-example Service ab-example-a 198 (99%) routes/ab-example Service ab-example-b 2 (1%)",
"oc set route-backends ab-example --adjust ab-example-a=200 ab-example-b=10",
"oc set route-backends ab-example --adjust ab-example-b=5%",
"oc set route-backends ab-example --adjust ab-example-b=+15%",
"oc set route-backends ab-example --equal",
"oc new-app openshift/deployment-example --name=ab-example-a --as-deployment-config=true --labels=ab-example=true --env=SUBTITLE\\=shardA oc delete svc/ab-example-a",
"oc expose deployment ab-example-a --name=ab-example --selector=ab-example\\=true oc expose service ab-example",
"oc new-app openshift/deployment-example:v2 --name=ab-example-b --labels=ab-example=true SUBTITLE=\"shard B\" COLOR=\"red\" --as-deployment-config=true oc delete svc/ab-example-b",
"oc scale dc/ab-example-a --replicas=0",
"oc scale dc/ab-example-a --replicas=1; oc scale dc/ab-example-b --replicas=0",
"oc edit dc/ab-example-a",
"oc edit dc/ab-example-b"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/applications/deployments |
Chapter 11. Advanced migration options | Chapter 11. Advanced migration options You can automate your migrations and modify the MigPlan and MigrationController custom resources in order to perform large-scale migrations and to improve performance. 11.1. Terminology Table 11.1. MTC terminology Term Definition Source cluster Cluster from which the applications are migrated. Destination cluster [1] Cluster to which the applications are migrated. Replication repository Object storage used for copying images, volumes, and Kubernetes objects during indirect migration or for Kubernetes objects during direct volume migration or direct image migration. The replication repository must be accessible to all clusters. Host cluster Cluster on which the migration-controller pod and the web console are running. The host cluster is usually the destination cluster but this is not required. The host cluster does not require an exposed registry route for direct image migration. Remote cluster A remote cluster is usually the source cluster but this is not required. A remote cluster requires a Secret custom resource that contains the migration-controller service account token. A remote cluster requires an exposed secure registry route for direct image migration. Indirect migration Images, volumes, and Kubernetes objects are copied from the source cluster to the replication repository and then from the replication repository to the destination cluster. Direct volume migration Persistent volumes are copied directly from the source cluster to the destination cluster. Direct image migration Images are copied directly from the source cluster to the destination cluster. Stage migration Data is copied to the destination cluster without stopping the application. Running a stage migration multiple times reduces the duration of the cutover migration. Cutover migration The application is stopped on the source cluster and its resources are migrated to the destination cluster. State migration Application state is migrated by copying specific persistent volume claims to the destination cluster. Rollback migration Rollback migration rolls back a completed migration. 1 Called the target cluster in the MTC web console. 11.2. Migrating an application from on-premises to a cloud-based cluster You can migrate from a source cluster that is behind a firewall to a cloud-based destination cluster by establishing a network tunnel between the two clusters. The crane tunnel-api command establishes such a tunnel by creating a VPN tunnel on the source cluster and then connecting to a VPN server running on the destination cluster. The VPN server is exposed to the client using a load balancer address on the destination cluster. A service created on the destination cluster exposes the source cluster's API to MTC, which is running on the destination cluster. Prerequisites The system that creates the VPN tunnel must have access and be logged in to both clusters. It must be possible to create a load balancer on the destination cluster. Refer to your cloud provider to ensure this is possible. Have names prepared to assign to namespaces, on both the source cluster and the destination cluster, in which to run the VPN tunnel. These namespaces should not be created in advance. For information about namespace rules, see https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-subdomain-names. When connecting multiple firewall-protected source clusters to the cloud cluster, each source cluster requires its own namespace. 
OpenVPN server is installed on the destination cluster. OpenVPN client is installed on the source cluster. When configuring the source cluster in MTC, the API URL takes the form of https://proxied-cluster.<namespace>.svc.cluster.local:8443 . If you use the API, see Create a MigCluster CR manifest for each remote cluster . If you use the MTC web console, see Migrating your applications using the MTC web console . The MTC web console and Migration Controller must be installed on the target cluster. Procedure Install the crane utility: USD podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-controller-rhel8:v1.8):/crane ./ Log in remotely to a node on the source cluster and a node on the destination cluster. Obtain the cluster context for both clusters after logging in: USD oc config view Establish a tunnel by entering the following command on the command system: USD crane tunnel-api [--namespace <namespace>] \ --destination-context <destination-cluster> \ --source-context <source-cluster> If you do not specify a namespace, the command uses the default value openvpn . For example: USD crane tunnel-api --namespace my_tunnel \ --destination-context openshift-migration/c131-e-us-east-containers-cloud-ibm-com/admin \ --source-context default/192-168-122-171-nip-io:8443/admin Tip See all available parameters for the crane tunnel-api command by entering crane tunnel-api --help . The command generates TSL/SSL Certificates. This process might take several minutes. A message appears when the process completes. The OpenVPN server starts on the destination cluster and the OpenVPN client starts on the source cluster. After a few minutes, the load balancer resolves on the source node. Tip You can view the log for the OpenVPN pods to check the status of this process by entering the following commands with root privileges: # oc get po -n <namespace> Example output NAME READY STATUS RESTARTS AGE <pod_name> 2/2 Running 0 44s # oc logs -f -n <namespace> <pod_name> -c openvpn When the address of the load balancer is resolved, the message Initialization Sequence Completed appears at the end of the log. On the OpenVPN server, which is on a destination control node, verify that the openvpn service and the proxied-cluster service are running: USD oc get service -n <namespace> On the source node, get the service account (SA) token for the migration controller: # oc sa get-token -n openshift-migration migration-controller Open the MTC web console and add the source cluster, using the following values: Cluster name : The source cluster name. URL : proxied-cluster.<namespace>.svc.cluster.local:8443 . If you did not define a value for <namespace> , use openvpn . Service account token : The token of the migration controller service account. Exposed route host to image registry : proxied-cluster.<namespace>.svc.cluster.local:5000 . If you did not define a value for <namespace> , use openvpn . After MTC has successfully validated the connection, you can proceed to create and run a migration plan. The namespace for the source cluster should appear in the list of namespaces. Additional resources For information about creating a MigCluster CR manifest for each remote cluster, see Migrating an application by using the MTC API . For information about adding a cluster using the web console, see Migrating your applications by using the MTC web console 11.3. 
Migrating applications by using the command line You can migrate applications with the MTC API by using the command line interface (CLI) in order to automate the migration. 11.3.1. Migration prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Direct image migration You must ensure that the secure OpenShift image registry of the source cluster is exposed. You must create a route to the exposed registry. Direct volume migration If your clusters use proxies, you must configure an Stunnel TCP proxy. Internal images If your application uses internal images from the openshift namespace, you must ensure that the required versions of the images are present on the target cluster. You can manually update an image stream tag in order to use a deprecated OpenShift Container Platform 3 image on an OpenShift Container Platform 4.17 cluster. Clusters The source cluster must be upgraded to the latest MTC z-stream release. The MTC version must be the same on all clusters. Network The clusters have unrestricted network access to each other and to the replication repository. If you copy the persistent volumes with move , the clusters must have unrestricted network access to the remote volumes. You must enable the following ports on an OpenShift Container Platform 3 cluster: 8443 (API server) 443 (routes) 53 (DNS) You must enable the following ports on an OpenShift Container Platform 4 cluster: 6443 (API server) 443 (routes) 53 (DNS) You must enable port 443 on the replication repository if you are using TLS. Persistent volumes (PVs) The PVs must be valid. The PVs must be bound to persistent volume claims. If you use snapshots to copy the PVs, the following additional prerequisites apply: The cloud provider must support snapshots. The PVs must have the same cloud provider. The PVs must be located in the same geographic region. The PVs must have the same storage class. 11.3.2. Creating a registry route for direct image migration For direct image migration, you must create a route to the exposed OpenShift image registry on all remote clusters. Prerequisites The OpenShift image registry must be exposed to external traffic on all remote clusters. The OpenShift Container Platform 4 registry is exposed by default. The OpenShift Container Platform 3 registry must be exposed manually . Procedure To create a route to an OpenShift Container Platform 3 registry, run the following command: USD oc create route passthrough --service=docker-registry -n default To create a route to an OpenShift Container Platform 4 registry, run the following command: USD oc create route passthrough --service=image-registry -n openshift-image-registry 11.3.3. Proxy configuration For OpenShift Container Platform 4.1 and earlier versions, you must configure proxies in the MigrationController custom resource (CR) manifest after you install the Migration Toolkit for Containers Operator because these versions do not support a cluster-wide proxy object. For OpenShift Container Platform 4.2 to 4.17, the MTC inherits the cluster-wide proxy settings. You can change the proxy parameters if you want to override the cluster-wide proxy settings. 11.3.3.1. Direct volume migration Direct Volume Migration (DVM) was introduced in MTC 1.4.2. DVM supports only one proxy. The source cluster cannot access the route of the target cluster if the target cluster is also behind a proxy. 
If you want to perform a DVM from a source cluster behind a proxy, you must configure a TCP proxy that works at the transport layer and forwards the SSL connections transparently without decrypting and re-encrypting them with their own SSL certificates. A Stunnel proxy is an example of such a proxy. 11.3.3.1.1. TCP proxy setup for DVM You can set up a direct connection between the source and the target cluster through a TCP proxy and configure the stunnel_tcp_proxy variable in the MigrationController CR to use the proxy: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port Direct volume migration (DVM) supports only basic authentication for the proxy. Moreover, DVM works only from behind proxies that can tunnel a TCP connection transparently. HTTP/HTTPS proxies in man-in-the-middle mode do not work. The existing cluster-wide proxies might not support this behavior. As a result, the proxy settings for DVM are intentionally kept different from the usual proxy configuration in MTC. 11.3.3.1.2. Why use a TCP proxy instead of an HTTP/HTTPS proxy? You can enable DVM by running Rsync between the source and the target cluster over an OpenShift route. Traffic is encrypted using Stunnel, a TCP proxy. The Stunnel running on the source cluster initiates a TLS connection with the target Stunnel and transfers data over an encrypted channel. Cluster-wide HTTP/HTTPS proxies in OpenShift are usually configured in man-in-the-middle mode where they negotiate their own TLS session with the outside servers. However, this does not work with Stunnel. Stunnel requires that its TLS session be untouched by the proxy, essentially making the proxy a transparent tunnel which simply forwards the TCP connection as-is. Therefore, you must use a TCP proxy. 11.3.3.1.3. Known issue Migration fails with error Upgrade request required The migration Controller uses the SPDY protocol to execute commands within remote pods. If the remote cluster is behind a proxy or a firewall that does not support the SPDY protocol, the migration controller fails to execute remote commands. The migration fails with the error message Upgrade request required . Workaround: Use a proxy that supports the SPDY protocol. In addition to supporting the SPDY protocol, the proxy or firewall also must pass the Upgrade HTTP header to the API server. The client uses this header to open a websocket connection with the API server. If the Upgrade header is blocked by the proxy or firewall, the migration fails with the error message Upgrade request required . Workaround: Ensure that the proxy forwards the Upgrade header. 11.3.3.2. Tuning network policies for migrations OpenShift supports restricting traffic to or from pods using NetworkPolicy or EgressFirewalls based on the network plugin used by the cluster. If any of the source namespaces involved in a migration use such mechanisms to restrict network traffic to pods, the restrictions might inadvertently stop traffic to Rsync pods during migration. Rsync pods running on both the source and the target clusters must connect to each other over an OpenShift Route. Existing NetworkPolicy or EgressNetworkPolicy objects can be configured to automatically exempt Rsync pods from these traffic restrictions. 11.3.3.2.1. NetworkPolicy configuration 11.3.3.2.1.1. 
Egress traffic from Rsync pods You can use the unique labels of Rsync pods to allow egress traffic to pass from them if the NetworkPolicy configuration in the source or destination namespaces blocks this type of traffic. The following policy allows all egress traffic from Rsync pods in the namespace: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress 11.3.3.2.1.2. Ingress traffic to Rsync pods apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress 11.3.3.2.2. EgressNetworkPolicy configuration The EgressNetworkPolicy object or Egress Firewalls are OpenShift constructs designed to block egress traffic leaving the cluster. Unlike the NetworkPolicy object, the Egress Firewall works at a project level because it applies to all pods in the namespace. Therefore, the unique labels of Rsync pods do not exempt only Rsync pods from the restrictions. However, you can add the CIDR ranges of the source or target cluster to the Allow rule of the policy so that a direct connection can be setup between two clusters. Based on which cluster the Egress Firewall is present in, you can add the CIDR range of the other cluster to allow egress traffic between the two: apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny 11.3.3.2.3. Choosing alternate endpoints for data transfer By default, DVM uses an OpenShift Container Platform route as an endpoint to transfer PV data to destination clusters. You can choose another type of supported endpoint, if cluster topologies allow. For each cluster, you can configure an endpoint by setting the rsync_endpoint_type variable on the appropriate destination cluster in your MigrationController CR: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route] 11.3.3.2.4. Configuring supplemental groups for Rsync pods When your PVCs use a shared storage, you can configure the access to that storage by adding supplemental groups to Rsync pod definitions in order for the pods to allow access: Table 11.2. Supplementary groups for Rsync pods Variable Type Default Description src_supplemental_groups string Not set Comma-separated list of supplemental groups for source Rsync pods target_supplemental_groups string Not set Comma-separated list of supplemental groups for target Rsync pods Example usage The MigrationController CR can be updated to set values for these supplemental groups: spec: src_supplemental_groups: "1000,2000" target_supplemental_groups: "2000,3000" 11.3.3.3. Configuring proxies Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Procedure Get the MigrationController CR manifest: USD oc get migrationcontroller <migration_controller> -n openshift-migration Update the proxy parameters: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration ... 
spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2 1 Stunnel proxy URL for direct volume migration. 2 Comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. This field is ignored if neither the httpProxy nor the httpsProxy field is set. Save the manifest as migration-controller.yaml . Apply the updated manifest: USD oc replace -f migration-controller.yaml -n openshift-migration 11.3.4. Migrating an application by using the MTC API You can migrate an application from the command line by using the Migration Toolkit for Containers (MTC) API. Procedure Create a MigCluster CR manifest for the host cluster: USD cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: name: <host_cluster> namespace: openshift-migration spec: isHostCluster: true EOF Create a Secret object manifest for each remote cluster: USD cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: <cluster_secret> namespace: openshift-config type: Opaque data: saToken: <sa_token> 1 EOF 1 Specify the base64-encoded migration-controller service account (SA) token of the remote cluster. You can obtain the token by running the following command: USD oc sa get-token migration-controller -n openshift-migration | base64 -w 0 Create a MigCluster CR manifest for each remote cluster: USD cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: name: <remote_cluster> 1 namespace: openshift-migration spec: exposedRegistryPath: <exposed_registry_route> 2 insecure: false 3 isHostCluster: false serviceAccountSecretRef: name: <remote_cluster_secret> 4 namespace: openshift-config url: <remote_cluster_url> 5 EOF 1 Specify the Cluster CR of the remote cluster. 2 Optional: For direct image migration, specify the exposed registry route. 3 SSL verification is enabled if false . CA certificates are not required or checked if true . 4 Specify the Secret object of the remote cluster. 5 Specify the URL of the remote cluster. Verify that all clusters are in a Ready state: USD oc describe MigCluster <cluster> Create a Secret object manifest for the replication repository: USD cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: namespace: openshift-config name: <migstorage_creds> type: Opaque data: aws-access-key-id: <key_id_base64> 1 aws-secret-access-key: <secret_key_base64> 2 EOF 1 Specify the key ID in base64 format. 2 Specify the secret key in base64 format. AWS credentials are base64-encoded by default. For other storage providers, you must encode your credentials by running the following command with each key: USD echo -n "<key>" | base64 -w 0 1 1 Specify the key ID or the secret key. Both keys must be base64-encoded. 
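Optionally, before creating the MigStorage CR, you can confirm that the credentials were stored as expected (a quick check that is not part of the documented procedure; the secret name matches the placeholder used above):
$ oc get secret <migstorage_creds> -n openshift-config -o jsonpath='{.data.aws-access-key-id}' | base64 -d
The decoded value should match the key ID that you supplied.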
Create a MigStorage CR manifest for the replication repository: USD cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigStorage metadata: name: <migstorage> namespace: openshift-migration spec: backupStorageConfig: awsBucketName: <bucket> 1 credsSecretRef: name: <storage_secret> 2 namespace: openshift-config backupStorageProvider: <storage_provider> 3 volumeSnapshotConfig: credsSecretRef: name: <storage_secret> 4 namespace: openshift-config volumeSnapshotProvider: <storage_provider> 5 EOF 1 Specify the bucket name. 2 Specify the Secrets CR of the object storage. You must ensure that the credentials stored in the Secrets CR of the object storage are correct. 3 Specify the storage provider. 4 Optional: If you are copying data by using snapshots, specify the Secrets CR of the object storage. You must ensure that the credentials stored in the Secrets CR of the object storage are correct. 5 Optional: If you are copying data by using snapshots, specify the storage provider. Verify that the MigStorage CR is in a Ready state: USD oc describe migstorage <migstorage> Create a MigPlan CR manifest: USD cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: destMigClusterRef: name: <host_cluster> namespace: openshift-migration indirectImageMigration: true 1 indirectVolumeMigration: true 2 migStorageRef: name: <migstorage> 3 namespace: openshift-migration namespaces: - <source_namespace_1> 4 - <source_namespace_2> - <source_namespace_3>:<destination_namespace> 5 srcMigClusterRef: name: <remote_cluster> 6 namespace: openshift-migration EOF 1 Direct image migration is enabled if false . 2 Direct volume migration is enabled if false . 3 Specify the name of the MigStorage CR instance. 4 Specify one or more source namespaces. By default, the destination namespace has the same name. 5 Specify a destination namespace if it is different from the source namespace. 6 Specify the name of the source cluster MigCluster instance. Verify that the MigPlan instance is in a Ready state: USD oc describe migplan <migplan> -n openshift-migration Create a MigMigration CR manifest to start the migration defined in the MigPlan instance: USD cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: <migmigration> namespace: openshift-migration spec: migPlanRef: name: <migplan> 1 namespace: openshift-migration quiescePods: true 2 stage: false 3 rollback: false 4 EOF 1 Specify the MigPlan CR name. 2 The pods on the source cluster are stopped before migration if true . 3 A stage migration, which copies most of the data without stopping the application, is performed if true . 4 A completed migration is rolled back if true . Verify the migration by watching the MigMigration CR progress: USD oc watch migmigration <migmigration> -n openshift-migration The output resembles the following: Example output Name: c8b034c0-6567-11eb-9a4f-0bc004db0fbc Namespace: openshift-migration Labels: migration.openshift.io/migplan-name=django Annotations: openshift.io/touch: e99f9083-6567-11eb-8420-0a580a81020c API Version: migration.openshift.io/v1alpha1 Kind: MigMigration ... 
Spec: Mig Plan Ref: Name: migplan Namespace: openshift-migration Stage: false Status: Conditions: Category: Advisory Last Transition Time: 2021-02-02T15:04:09Z Message: Step: 19/47 Reason: InitialBackupCreated Status: True Type: Running Category: Required Last Transition Time: 2021-02-02T15:03:19Z Message: The migration is ready. Status: True Type: Ready Category: Required Durable: true Last Transition Time: 2021-02-02T15:04:05Z Message: The migration registries are healthy. Status: True Type: RegistriesHealthy Itinerary: Final Observed Digest: 7fae9d21f15979c71ddc7dd075cb97061895caac5b936d92fae967019ab616d5 Phase: InitialBackupCreated Pipeline: Completed: 2021-02-02T15:04:07Z Message: Completed Name: Prepare Started: 2021-02-02T15:03:18Z Message: Waiting for initial Velero backup to complete. Name: Backup Phase: InitialBackupCreated Progress: Backup openshift-migration/c8b034c0-6567-11eb-9a4f-0bc004db0fbc-wpc44: 0 out of estimated total of 0 objects backed up (5s) Started: 2021-02-02T15:04:07Z Message: Not started Name: StageBackup Message: Not started Name: StageRestore Message: Not started Name: DirectImage Message: Not started Name: DirectVolume Message: Not started Name: Restore Message: Not started Name: Cleanup Start Timestamp: 2021-02-02T15:03:18Z Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Running 57s migmigration_controller Step: 2/47 Normal Running 57s migmigration_controller Step: 3/47 Normal Running 57s (x3 over 57s) migmigration_controller Step: 4/47 Normal Running 54s migmigration_controller Step: 5/47 Normal Running 54s migmigration_controller Step: 6/47 Normal Running 52s (x2 over 53s) migmigration_controller Step: 7/47 Normal Running 51s (x2 over 51s) migmigration_controller Step: 8/47 Normal Ready 50s (x12 over 57s) migmigration_controller The migration is ready. Normal Running 50s migmigration_controller Step: 9/47 Normal Running 50s migmigration_controller Step: 10/47 11.3.5. State migration You can perform repeatable, state-only migrations by using Migration Toolkit for Containers (MTC) to migrate persistent volume claims (PVCs) that constitute an application's state. You migrate specified PVCs by excluding other PVCs from the migration plan. You can map the PVCs to ensure that the source and the target PVCs are synchronized. Persistent volume (PV) data is copied to the target cluster. The PV references are not moved, and the application pods continue to run on the source cluster. State migration is specifically designed to be used in conjunction with external CD mechanisms, such as OpenShift Gitops. You can migrate application manifests using GitOps while migrating the state using MTC. If you have a CI/CD pipeline, you can migrate stateless components by deploying them on the target cluster. Then you can migrate stateful components by using MTC. You can perform a state migration between clusters or within the same cluster. Important State migration migrates only the components that constitute an application's state. If you want to migrate an entire namespace, use stage or cutover migration. Prerequisites The state of the application on the source cluster is persisted in PersistentVolumes provisioned through PersistentVolumeClaims . The manifests of the application are available in a central repository that is accessible from both the source and the target clusters. Procedure Migrate persistent volume data from the source to the target cluster. You can perform this step as many times as needed. The source application continues running. 
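In MTC 1.7 and later, this repeatable data copy can be expressed as a stage migration. A minimal sketch, reusing the MigPlan placeholder from the API example earlier in this chapter (the migration name is arbitrary):
apiVersion: migration.openshift.io/v1alpha1
kind: MigMigration
metadata:
  name: <state_stage_migration>
  namespace: openshift-migration
spec:
  migPlanRef:
    name: <migplan>
    namespace: openshift-migration
  stage: true       # copy data without a cutover
  quiescePods: false  # leave the source application running
Because stage is true and quiescePods is false, data is copied while the source pods keep running.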
Quiesce the source application. You can do this by setting the replicas of workload resources to 0 , either directly on the source cluster or by updating the manifests in GitHub and re-syncing the Argo CD application. Clone application manifests to the target cluster. You can use Argo CD to clone the application manifests to the target cluster. Migrate the remaining volume data from the source to the target cluster. Migrate any new data created by the application during the state migration process by performing a final data migration. If the cloned application is in a quiesced state, unquiesce it. Switch the DNS record to the target cluster to re-direct user traffic to the migrated application. Note MTC 1.6 cannot quiesce applications automatically when performing state migration. It can only migrate PV data. Therefore, you must use your CD mechanisms for quiescing or unquiescing applications. MTC 1.7 introduces explicit Stage and Cutover flows. You can use staging to perform initial data transfers as many times as needed. Then you can perform a cutover, in which the source applications are quiesced automatically. Additional resources See Excluding PVCs from migration to select PVCs for state migration. See Mapping PVCs to migrate source PV data to provisioned PVCs on the destination cluster. See Migrating Kubernetes objects to migrate the Kubernetes objects that constitute an application's state. 11.4. Migration hooks You can add up to four migration hooks to a single migration plan, with each hook running at a different phase of the migration. Migration hooks perform tasks such as customizing application quiescence, manually migrating unsupported data types, and updating applications after migration. A migration hook runs on a source or a target cluster at one of the following migration steps: PreBackup : Before resources are backed up on the source cluster. PostBackup : After resources are backed up on the source cluster. PreRestore : Before resources are restored on the target cluster. PostRestore : After resources are restored on the target cluster. You can create a hook by creating an Ansible playbook that runs with the default Ansible image or with a custom hook container. Ansible playbook The Ansible playbook is mounted on a hook container as a config map. The hook container runs as a job, using the cluster, service account, and namespace specified in the MigPlan custom resource. The job continues to run until it reaches the default limit of 6 retries or a successful completion. This continues even if the initial pod is evicted or killed. The default Ansible runtime image is registry.redhat.io/rhmtc/openshift-migration-hook-runner-rhel7:1.8 . This image is based on the Ansible Runner image and includes python-openshift for Ansible Kubernetes resources and an updated oc binary. Custom hook container You can use a custom hook container instead of the default Ansible image. 11.4.1. Writing an Ansible playbook for a migration hook You can write an Ansible playbook to use as a migration hook. The hook is added to a migration plan by using the MTC web console or by specifying values for the spec.hooks parameters in the MigPlan custom resource (CR) manifest. The Ansible playbook is mounted onto a hook container as a config map. The hook container runs as a job, using the cluster, service account, and namespace specified in the MigPlan CR. The hook container uses a specified service account token so that the tasks do not require authentication before they run in the cluster. 11.4.1.1. 
Ansible modules You can use the Ansible shell module to run oc commands. Example shell module - hosts: localhost gather_facts: false tasks: - name: get pod name shell: oc get po --all-namespaces You can use kubernetes.core modules, such as k8s_info , to interact with Kubernetes resources. Example k8s_facts module - hosts: localhost gather_facts: false tasks: - name: Get pod k8s_info: kind: pods api: v1 namespace: openshift-migration name: "{{ lookup( 'env', 'HOSTNAME') }}" register: pods - name: Print pod name debug: msg: "{{ pods.resources[0].metadata.name }}" You can use the fail module to produce a non-zero exit status in cases where a non-zero exit status would not normally be produced, ensuring that the success or failure of a hook is detected. Hooks run as jobs and the success or failure status of a hook is based on the exit status of the job container. Example fail module - hosts: localhost gather_facts: false tasks: - name: Set a boolean set_fact: do_fail: true - name: "fail" fail: msg: "Cause a failure" when: do_fail 11.4.1.2. Environment variables The MigPlan CR name and migration namespaces are passed as environment variables to the hook container. These variables are accessed by using the lookup plugin. Example environment variables - hosts: localhost gather_facts: false tasks: - set_fact: namespaces: "{{ (lookup( 'env', 'MIGRATION_NAMESPACES')).split(',') }}" - debug: msg: "{{ item }}" with_items: "{{ namespaces }}" - debug: msg: "{{ lookup( 'env', 'MIGRATION_PLAN_NAME') }}" 11.5. Migration plan options You can exclude, edit, and map components in the MigPlan custom resource (CR). 11.5.1. Excluding resources You can exclude resources, for example, image streams, persistent volumes (PVs), or subscriptions, from a Migration Toolkit for Containers (MTC) migration plan to reduce the resource load for migration or to migrate images or PVs with a different tool. By default, the MTC excludes service catalog resources and Operator Lifecycle Manager (OLM) resources from migration. These resources are parts of the service catalog API group and the OLM API group, neither of which is supported for migration at this time. Procedure Edit the MigrationController custom resource manifest: USD oc edit migrationcontroller <migration_controller> -n openshift-migration Update the spec section by adding parameters to exclude specific resources. For those resources that do not have their own exclusion parameters, add the additional_excluded_resources parameter: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: disable_image_migration: true 1 disable_pv_migration: true 2 additional_excluded_resources: 3 - resource1 - resource2 ... 1 Add disable_image_migration: true to exclude image streams from the migration. imagestreams is added to the excluded_resources list in main.yml when the MigrationController pod restarts. 2 Add disable_pv_migration: true to exclude PVs from the migration plan. persistentvolumes and persistentvolumeclaims are added to the excluded_resources list in main.yml when the MigrationController pod restarts. Disabling PV migration also disables PV discovery when you create the migration plan. 3 You can add OpenShift Container Platform resources that you want to exclude to the additional_excluded_resources list. Wait two minutes for the MigrationController pod to restart so that the changes are applied. 
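If you prefer to confirm the restart instead of waiting a fixed interval, you can watch the pods in the openshift-migration namespace; a simple, hedged example (the controller pod name in your cluster may differ):
USD oc get pods -n openshift-migration -w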
Verify that the resource is excluded: USD oc get deployment -n openshift-migration migration-controller -o yaml | grep EXCLUDED_RESOURCES -A1 The output contains the excluded resources: Example output name: EXCLUDED_RESOURCES value: resource1,resource2,imagetags,templateinstances,clusterserviceversions,packagemanifests,subscriptions,servicebrokers,servicebindings,serviceclasses,serviceinstances,serviceplans,imagestreams,persistentvolumes,persistentvolumeclaims 11.5.2. Mapping namespaces If you map namespaces in the MigPlan custom resource (CR), you must ensure that the namespaces are not duplicated on the source or the destination clusters because the UID and GID ranges of the namespaces are copied during migration. Two source namespaces mapped to the same destination namespace spec: namespaces: - namespace_2 - namespace_1:namespace_2 If you want the source namespace to be mapped to a namespace of the same name, you do not need to create a mapping. By default, a source namespace and a target namespace have the same name. Incorrect namespace mapping spec: namespaces: - namespace_1:namespace_1 Correct namespace reference spec: namespaces: - namespace_1 11.5.3. Excluding persistent volume claims You select persistent volume claims (PVCs) for state migration by excluding the PVCs that you do not want to migrate. You exclude PVCs by setting the spec.persistentVolumes.pvc.selection.action parameter of the MigPlan custom resource (CR) after the persistent volumes (PVs) have been discovered. Prerequisites MigPlan CR is in a Ready state. Procedure Add the spec.persistentVolumes.pvc.selection.action parameter to the MigPlan CR and set it to skip : apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: ... persistentVolumes: - capacity: 10Gi name: <pv_name> pvc: ... selection: action: skip 11.5.4. Mapping persistent volume claims You can migrate persistent volume (PV) data from the source cluster to persistent volume claims (PVCs) that are already provisioned in the destination cluster in the MigPlan CR by mapping the PVCs. This mapping ensures that the destination PVCs of migrated applications are synchronized with the source PVCs. You map PVCs by updating the spec.persistentVolumes.pvc.name parameter in the MigPlan custom resource (CR) after the PVs have been discovered. Prerequisites MigPlan CR is in a Ready state. Procedure Update the spec.persistentVolumes.pvc.name parameter in the MigPlan CR: apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: ... persistentVolumes: - capacity: 10Gi name: <pv_name> pvc: name: <source_pvc>:<destination_pvc> 1 1 Specify the PVC on the source cluster and the PVC on the destination cluster. If the destination PVC does not exist, it will be created. You can use this mapping to change the PVC name during migration. 11.5.5. Editing persistent volume attributes After you create a MigPlan custom resource (CR), the MigrationController CR discovers the persistent volumes (PVs). The spec.persistentVolumes block and the status.destStorageClasses block are added to the MigPlan CR. You can edit the values in the spec.persistentVolumes.selection block. If you change values outside the spec.persistentVolumes.selection block, the values are overwritten when the MigPlan CR is reconciled by the MigrationController CR. 
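Before editing, you can print the discovered block to see which values are present; a hedged example (the exact output shape depends on your MTC version):
USD oc get migplan <migplan> -n openshift-migration -o jsonpath='{.spec.persistentVolumes}'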
Note The default value for the spec.persistentVolumes.selection.storageClass parameter is determined by the following logic: If the source cluster PV is Gluster or NFS, the default is either cephfs , for accessMode: ReadWriteMany , or cephrbd , for accessMode: ReadWriteOnce . If the PV is neither Gluster nor NFS or if cephfs or cephrbd are not available, the default is a storage class for the same provisioner. If a storage class for the same provisioner is not available, the default is the default storage class of the destination cluster. You can change the storageClass value to the value of any name parameter in the status.destStorageClasses block of the MigPlan CR. If the storageClass value is empty, the PV will have no storage class after migration. This option is appropriate if, for example, you want to move the PV to an NFS volume on the destination cluster. Prerequisites MigPlan CR is in a Ready state. Procedure Edit the spec.persistentVolumes.selection values in the MigPlan CR: apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: pvc-095a6559-b27f-11eb-b27f-021bddcaf6e4 proposedCapacity: 10Gi pvc: accessModes: - ReadWriteMany hasReference: true name: mysql namespace: mysql-persistent selection: action: <copy> 1 copyMethod: <filesystem> 2 verify: true 3 storageClass: <gp2> 4 accessMode: <ReadWriteMany> 5 storageClass: cephfs 1 Allowed values are move , copy , and skip . If only one action is supported, the default value is the supported action. If multiple actions are supported, the default value is copy . 2 Allowed values are snapshot and filesystem . Default value is filesystem . 3 The verify parameter is displayed if you select the verification option for file system copy in the MTC web console. You can set it to false . 4 You can change the default value to the value of any name parameter in the status.destStorageClasses block of the MigPlan CR. If no value is specified, the PV will have no storage class after migration. 5 Allowed values are ReadWriteOnce and ReadWriteMany . If this value is not specified, the default is the access mode of the source cluster PVC. You can only edit the access mode in the MigPlan CR. You cannot edit it by using the MTC web console. Additional resources For details about the move and copy actions, see MTC workflow . For details about the skip action, see Excluding PVCs from migration . For details about the file system and snapshot copy methods, see About data copy methods . 11.5.6. Performing a state migration of Kubernetes objects by using the MTC API After you migrate all the PV data, you can use the Migration Toolkit for Containers (MTC) API to perform a one-time state migration of Kubernetes objects that constitute an application. You do this by configuring MigPlan custom resource (CR) fields to provide a list of Kubernetes resources with an additional label selector to further filter those resources, and then performing a migration by creating a MigMigration CR. The MigPlan resource is closed after the migration. Note Selecting Kubernetes resources is an API-only feature. You must update the MigPlan CR and create a MigMigration CR for it by using the CLI. The MTC web console does not support migrating Kubernetes objects. Note After migration, the closed parameter of the MigPlan CR is set to true . You cannot create another MigMigration CR for this MigPlan CR. 
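If you are unsure whether a plan has already been used for such a migration, you can check the parameter directly; a hedged example, assuming the parameter is exposed under spec as in current MigPlan manifests:
USD oc get migplan <migplan> -n openshift-migration -o jsonpath='{.spec.closed}'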
You add Kubernetes objects to the MigPlan CR by using one of the following options: Adding the Kubernetes objects to the includedResources section. When the includedResources field is specified in the MigPlan CR, the plan takes a list of group-kind as input. Only resources present in the list are included in the migration. Adding the optional labelSelector parameter to filter the includedResources in the MigPlan . When this field is specified, only resources matching the label selector are included in the migration. For example, you can filter a list of Secret and ConfigMap resources by using the label app: frontend as a filter. Procedure Update the MigPlan CR to include Kubernetes resources and, optionally, to filter the included resources by adding the labelSelector parameter: To update the MigPlan CR to include Kubernetes resources: apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: includedResources: - kind: <kind> 1 group: "" - kind: <kind> group: "" 1 Specify the Kubernetes object, for example, Secret or ConfigMap . Optional: To filter the included resources by adding the labelSelector parameter: apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: includedResources: - kind: <kind> 1 group: "" - kind: <kind> group: "" ... labelSelector: matchLabels: <label> 2 1 Specify the Kubernetes object, for example, Secret or ConfigMap . 2 Specify the label of the resources to migrate, for example, app: frontend . Create a MigMigration CR to migrate the selected Kubernetes resources. Verify that the correct MigPlan is referenced in migPlanRef : apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: generateName: <migplan> namespace: openshift-migration spec: migPlanRef: name: <migplan> namespace: openshift-migration stage: false 11.6. Migration controller options You can edit migration plan limits, enable persistent volume resizing, or enable cached Kubernetes clients in the MigrationController custom resource (CR) for large migrations and improved performance. 11.6.1. Increasing limits for large migrations You can increase the limits on migration objects and container resources for large migrations with the Migration Toolkit for Containers (MTC). Important You must test these changes before you perform a migration in a production environment. Procedure Edit the MigrationController custom resource (CR) manifest: USD oc edit migrationcontroller -n openshift-migration Update the following parameters: ... mig_controller_limits_cpu: "1" 1 mig_controller_limits_memory: "10Gi" 2 ... mig_controller_requests_cpu: "100m" 3 mig_controller_requests_memory: "350Mi" 4 ... mig_pv_limit: 100 5 mig_pod_limit: 100 6 mig_namespace_limit: 10 7 ... 1 Specifies the number of CPUs available to the MigrationController CR. 2 Specifies the amount of memory available to the MigrationController CR. 3 Specifies the number of CPU units available for MigrationController CR requests. 100m represents 0.1 CPU units (100 * 1e-3). 4 Specifies the amount of memory available for MigrationController CR requests. 5 Specifies the number of persistent volumes that can be migrated. 6 Specifies the number of pods that can be migrated. 7 Specifies the number of namespaces that can be migrated. Create a migration plan that uses the updated parameters to verify the changes. 
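If you prefer a non-interactive update, the same limit parameters can be applied with a merge patch; a hedged sketch (the values shown are illustrative, not recommendations):
USD oc patch migrationcontroller migration-controller -n openshift-migration --type=merge -p '{"spec":{"mig_pv_limit":100,"mig_pod_limit":100,"mig_namespace_limit":10}}'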
If your migration plan exceeds the MigrationController CR limits, the MTC console displays a warning message when you save the migration plan. 11.6.2. Enabling persistent volume resizing for direct volume migration You can enable persistent volume (PV) resizing for direct volume migration to avoid running out of disk space on the destination cluster. When the disk usage of a PV reaches a configured level, the MigrationController custom resource (CR) compares the requested storage capacity of a persistent volume claim (PVC) to its actual provisioned capacity. Then, it calculates the space required on the destination cluster. A pv_resizing_threshold parameter determines when PV resizing is used. The default threshold is 3% . This means that PV resizing occurs when the disk usage of a PV is more than 97% . You can increase this threshold so that PV resizing occurs at a lower disk usage level. PVC capacity is calculated according to the following criteria: If the requested storage capacity ( spec.resources.requests.storage ) of the PVC is not equal to its actual provisioned capacity ( status.capacity.storage ), the greater value is used. If a PV is provisioned through a PVC and then subsequently changed so that its PV and PVC capacities no longer match, the greater value is used. Prerequisites The PVCs must be attached to one or more running pods so that the MigrationController CR can execute commands. Procedure Log in to the host cluster. Enable PV resizing by patching the MigrationController CR: USD oc patch migrationcontroller migration-controller -p '{"spec":{"enable_dvm_pv_resizing":true}}' \ 1 --type='merge' -n openshift-migration 1 Set the value to false to disable PV resizing. Optional: Update the pv_resizing_threshold parameter to increase the threshold: USD oc patch migrationcontroller migration-controller -p '{"spec":{"pv_resizing_threshold":41}}' \ 1 --type='merge' -n openshift-migration 1 The default value is 3 . When the threshold is exceeded, the following status message is displayed in the MigPlan CR status: status: conditions: ... - category: Warn durable: true lastTransitionTime: "2021-06-17T08:57:01Z" message: 'Capacity of the following volumes will be automatically adjusted to avoid disk capacity issues in the target cluster: [pvc-b800eb7b-cf3b-11eb-a3f7-0eae3e0555f3]' reason: Done status: "False" type: PvCapacityAdjustmentRequired Note For AWS gp2 storage, this message does not appear unless the pv_resizing_threshold is 42% or greater because of the way gp2 calculates volume usage and size. ( BZ#1973148 ) 11.6.3. Enabling cached Kubernetes clients You can enable cached Kubernetes clients in the MigrationController custom resource (CR) for improved performance during migration. The greatest performance benefit is displayed when migrating between clusters in different regions or with significant network latency. Note Delegated tasks, for example, Rsync backup for direct volume migration or Velero backup and restore, however, do not show improved performance with cached clients. Cached clients require extra memory because the MigrationController CR caches all API resources that are required for interacting with MigCluster CRs. Requests that are normally sent to the API server are directed to the cache instead. The cache watches the API server for updates. You can increase the memory limits and requests of the MigrationController CR if OOMKilled errors occur after you enable cached clients. 
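A quick way to check whether the controller was OOMKilled is to inspect the pod's last state; a hedged example (substitute the actual controller pod name):
USD oc describe pod <migration_controller_pod> -n openshift-migration | grep -i oomkilled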
Procedure Enable cached clients by running the following command: USD oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch \ '[{ "op": "replace", "path": "/spec/mig_controller_enable_cache", "value": true}]' Optional: Increase the MigrationController CR memory limits by running the following command: USD oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch \ '[{ "op": "replace", "path": "/spec/mig_controller_limits_memory", "value": <10Gi>}]' Optional: Increase the MigrationController CR memory requests by running the following command: USD oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch \ '[{ "op": "replace", "path": "/spec/mig_controller_requests_memory", "value": <350Mi>}]' | [
"podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-controller-rhel8:v1.8):/crane ./",
"oc config view",
"crane tunnel-api [--namespace <namespace>] --destination-context <destination-cluster> --source-context <source-cluster>",
"crane tunnel-api --namespace my_tunnel --destination-context openshift-migration/c131-e-us-east-containers-cloud-ibm-com/admin --source-context default/192-168-122-171-nip-io:8443/admin",
"oc get po -n <namespace>",
"NAME READY STATUS RESTARTS AGE <pod_name> 2/2 Running 0 44s",
"oc logs -f -n <namespace> <pod_name> -c openvpn",
"oc get service -n <namespace>",
"oc sa get-token -n openshift-migration migration-controller",
"oc create route passthrough --service=docker-registry -n default",
"oc create route passthrough --service=image-registry -n openshift-image-registry",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress",
"apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route]",
"spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"",
"oc get migrationcontroller <migration_controller> -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2",
"oc replace -f migration-controller.yaml -n openshift-migration",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: name: <host_cluster> namespace: openshift-migration spec: isHostCluster: true EOF",
"cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: <cluster_secret> namespace: openshift-config type: Opaque data: saToken: <sa_token> 1 EOF",
"oc sa get-token migration-controller -n openshift-migration | base64 -w 0",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: name: <remote_cluster> 1 namespace: openshift-migration spec: exposedRegistryPath: <exposed_registry_route> 2 insecure: false 3 isHostCluster: false serviceAccountSecretRef: name: <remote_cluster_secret> 4 namespace: openshift-config url: <remote_cluster_url> 5 EOF",
"oc describe MigCluster <cluster>",
"cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: namespace: openshift-config name: <migstorage_creds> type: Opaque data: aws-access-key-id: <key_id_base64> 1 aws-secret-access-key: <secret_key_base64> 2 EOF",
"echo -n \"<key>\" | base64 -w 0 1",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigStorage metadata: name: <migstorage> namespace: openshift-migration spec: backupStorageConfig: awsBucketName: <bucket> 1 credsSecretRef: name: <storage_secret> 2 namespace: openshift-config backupStorageProvider: <storage_provider> 3 volumeSnapshotConfig: credsSecretRef: name: <storage_secret> 4 namespace: openshift-config volumeSnapshotProvider: <storage_provider> 5 EOF",
"oc describe migstorage <migstorage>",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: destMigClusterRef: name: <host_cluster> namespace: openshift-migration indirectImageMigration: true 1 indirectVolumeMigration: true 2 migStorageRef: name: <migstorage> 3 namespace: openshift-migration namespaces: - <source_namespace_1> 4 - <source_namespace_2> - <source_namespace_3>:<destination_namespace> 5 srcMigClusterRef: name: <remote_cluster> 6 namespace: openshift-migration EOF",
"oc describe migplan <migplan> -n openshift-migration",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: <migmigration> namespace: openshift-migration spec: migPlanRef: name: <migplan> 1 namespace: openshift-migration quiescePods: true 2 stage: false 3 rollback: false 4 EOF",
"oc watch migmigration <migmigration> -n openshift-migration",
"Name: c8b034c0-6567-11eb-9a4f-0bc004db0fbc Namespace: openshift-migration Labels: migration.openshift.io/migplan-name=django Annotations: openshift.io/touch: e99f9083-6567-11eb-8420-0a580a81020c API Version: migration.openshift.io/v1alpha1 Kind: MigMigration Spec: Mig Plan Ref: Name: migplan Namespace: openshift-migration Stage: false Status: Conditions: Category: Advisory Last Transition Time: 2021-02-02T15:04:09Z Message: Step: 19/47 Reason: InitialBackupCreated Status: True Type: Running Category: Required Last Transition Time: 2021-02-02T15:03:19Z Message: The migration is ready. Status: True Type: Ready Category: Required Durable: true Last Transition Time: 2021-02-02T15:04:05Z Message: The migration registries are healthy. Status: True Type: RegistriesHealthy Itinerary: Final Observed Digest: 7fae9d21f15979c71ddc7dd075cb97061895caac5b936d92fae967019ab616d5 Phase: InitialBackupCreated Pipeline: Completed: 2021-02-02T15:04:07Z Message: Completed Name: Prepare Started: 2021-02-02T15:03:18Z Message: Waiting for initial Velero backup to complete. Name: Backup Phase: InitialBackupCreated Progress: Backup openshift-migration/c8b034c0-6567-11eb-9a4f-0bc004db0fbc-wpc44: 0 out of estimated total of 0 objects backed up (5s) Started: 2021-02-02T15:04:07Z Message: Not started Name: StageBackup Message: Not started Name: StageRestore Message: Not started Name: DirectImage Message: Not started Name: DirectVolume Message: Not started Name: Restore Message: Not started Name: Cleanup Start Timestamp: 2021-02-02T15:03:18Z Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Running 57s migmigration_controller Step: 2/47 Normal Running 57s migmigration_controller Step: 3/47 Normal Running 57s (x3 over 57s) migmigration_controller Step: 4/47 Normal Running 54s migmigration_controller Step: 5/47 Normal Running 54s migmigration_controller Step: 6/47 Normal Running 52s (x2 over 53s) migmigration_controller Step: 7/47 Normal Running 51s (x2 over 51s) migmigration_controller Step: 8/47 Normal Ready 50s (x12 over 57s) migmigration_controller The migration is ready. Normal Running 50s migmigration_controller Step: 9/47 Normal Running 50s migmigration_controller Step: 10/47",
"- hosts: localhost gather_facts: false tasks: - name: get pod name shell: oc get po --all-namespaces",
"- hosts: localhost gather_facts: false tasks: - name: Get pod k8s_info: kind: pods api: v1 namespace: openshift-migration name: \"{{ lookup( 'env', 'HOSTNAME') }}\" register: pods - name: Print pod name debug: msg: \"{{ pods.resources[0].metadata.name }}\"",
"- hosts: localhost gather_facts: false tasks: - name: Set a boolean set_fact: do_fail: true - name: \"fail\" fail: msg: \"Cause a failure\" when: do_fail",
"- hosts: localhost gather_facts: false tasks: - set_fact: namespaces: \"{{ (lookup( 'env', 'MIGRATION_NAMESPACES')).split(',') }}\" - debug: msg: \"{{ item }}\" with_items: \"{{ namespaces }}\" - debug: msg: \"{{ lookup( 'env', 'MIGRATION_PLAN_NAME') }}\"",
"oc edit migrationcontroller <migration_controller> -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: disable_image_migration: true 1 disable_pv_migration: true 2 additional_excluded_resources: 3 - resource1 - resource2",
"oc get deployment -n openshift-migration migration-controller -o yaml | grep EXCLUDED_RESOURCES -A1",
"name: EXCLUDED_RESOURCES value: resource1,resource2,imagetags,templateinstances,clusterserviceversions,packagemanifests,subscriptions,servicebrokers,servicebindings,serviceclasses,serviceinstances,serviceplans,imagestreams,persistentvolumes,persistentvolumeclaims",
"spec: namespaces: - namespace_2 - namespace_1:namespace_2",
"spec: namespaces: - namespace_1:namespace_1",
"spec: namespaces: - namespace_1",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: <pv_name> pvc: selection: action: skip",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: <pv_name> pvc: name: <source_pvc>:<destination_pvc> 1",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: pvc-095a6559-b27f-11eb-b27f-021bddcaf6e4 proposedCapacity: 10Gi pvc: accessModes: - ReadWriteMany hasReference: true name: mysql namespace: mysql-persistent selection: action: <copy> 1 copyMethod: <filesystem> 2 verify: true 3 storageClass: <gp2> 4 accessMode: <ReadWriteMany> 5 storageClass: cephfs",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: includedResources: - kind: <kind> 1 group: \"\" - kind: <kind> group: \"\"",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: includedResources: - kind: <kind> 1 group: \"\" - kind: <kind> group: \"\" labelSelector: matchLabels: <label> 2",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: generateName: <migplan> namespace: openshift-migration spec: migPlanRef: name: <migplan> namespace: openshift-migration stage: false",
"oc edit migrationcontroller -n openshift-migration",
"mig_controller_limits_cpu: \"1\" 1 mig_controller_limits_memory: \"10Gi\" 2 mig_controller_requests_cpu: \"100m\" 3 mig_controller_requests_memory: \"350Mi\" 4 mig_pv_limit: 100 5 mig_pod_limit: 100 6 mig_namespace_limit: 10 7",
"oc patch migrationcontroller migration-controller -p '{\"spec\":{\"enable_dvm_pv_resizing\":true}}' \\ 1 --type='merge' -n openshift-migration",
"oc patch migrationcontroller migration-controller -p '{\"spec\":{\"pv_resizing_threshold\":41}}' \\ 1 --type='merge' -n openshift-migration",
"status: conditions: - category: Warn durable: true lastTransitionTime: \"2021-06-17T08:57:01Z\" message: 'Capacity of the following volumes will be automatically adjusted to avoid disk capacity issues in the target cluster: [pvc-b800eb7b-cf3b-11eb-a3f7-0eae3e0555f3]' reason: Done status: \"False\" type: PvCapacityAdjustmentRequired",
"oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/mig_controller_enable_cache\", \"value\": true}]'",
"oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/mig_controller_limits_memory\", \"value\": <10Gi>}]'",
"oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/mig_controller_requests_memory\", \"value\": <350Mi>}]'"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/migrating_from_version_3_to_4/advanced-migration-options-3-4 |
Appendix A. List of tickets by component | Appendix A. List of tickets by component Bugzilla and JIRA tickets are listed in this document for reference. The links lead to the release notes in this document that describe the tickets. Component Tickets 389-ds-base Bugzilla:2136610 , Bugzilla:2096795 , Bugzilla:2142639 , Bugzilla:2130276 , Bugzilla:1817505 NetworkManager Bugzilla:2089707 , Bugzilla:2134907 , Bugzilla:2132754 SLOF Bugzilla:1910848 accel-config Bugzilla:1843266 anaconda Bugzilla:1913035 , Bugzilla:2014103 , Bugzilla:1991516 , Bugzilla:2094977 , Bugzilla:2050140 , Bugzilla:1914955 , Bugzilla:1929105 , Bugzilla:2126506 ansible-collection-microsoft-sql Bugzilla:2144820 , Bugzilla:2144821 , Bugzilla:2144852 , Bugzilla:2153428 , Bugzilla:2163696 , Bugzilla:2153427 ansible-freeipa Bugzilla:2127912 apr Bugzilla:1819607 authselect Bugzilla:1892761 bacula Bugzilla:2089399 brltty Bugzilla:2008197 certmonger Bugzilla:2150025 clevis Bugzilla:2159440 , Bugzilla:2159736 cloud-init Bugzilla:1750862 cockpit Bugzilla:2212371 , Bugzilla:1666722 cockpit-appstream Bugzilla:2030836 cockpit-machines Bugzilla:2173584 conntrack-tools Bugzilla:2126736 coreutils Bugzilla:2030661 corosync-qdevice Bugzilla:1784200 crash Bugzilla:1906482 crash-ptdump-command Bugzilla:1838927 createrepo_c Bugzilla:1973588 crypto-policies Bugzilla:1921646 , Bugzilla:2071981 , Bugzilla:1919155 , Bugzilla:1660839 device-mapper-multipath Bugzilla:2022359 , Bugzilla:2011699 distribution Bugzilla:1657927 dnf Bugzilla:2054235 , Bugzilla:2047251 , Bugzilla:2016070 , Bugzilla:1986657 dnf-plugins-core Bugzilla:2139324 edk2 Bugzilla:1741615 , Bugzilla:1935497 fapolicyd Bugzilla:2165645 , Bugzilla:2054741 fence-agents Bugzilla:1775847 firewalld Bugzilla:1871860 gcc Bugzilla:2110582 gdb Bugzilla:1853140 git Bugzilla:2139378 git-lfs Bugzilla:2139382 glassfish-jaxb Bugzilla:2055539 glibc Bugzilla:1871383 , Bugzilla:1159809 gnome-session Bugzilla:2070976 gnome-shell-extensions Bugzilla:2033572 , Bugzilla:2138109 , Bugzilla:1717947 gnome-software Bugzilla:1668760 gnutls Bugzilla:1628553 golang Bugzilla:2174430 , Bugzilla:2132767 , Bugzilla:2132694 , Bugzilla:2132419 grub2 Bugzilla:1583445 grubby Bugzilla:1900829 initscripts Bugzilla:1875485 ipa Bugzilla:2075452 , Bugzilla:1924707 , Bugzilla:2120572 , Bugzilla:2122919 , Bugzilla:1664719 , Bugzilla:1664718 , Bugzilla:2101770 ipmitool Bugzilla:1873614 kernel Bugzilla:2107595 , Bugzilla:1660908 , Bugzilla:1664379 , Bugzilla:2136107 , Bugzilla:2127136 , Bugzilla:2143849 , Bugzilla:1905243 , Bugzilla:2009705 , Bugzilla:2103946 , Bugzilla:2087262 , Bugzilla:2151854 , Bugzilla:2134931 , Bugzilla:2069047 , Bugzilla:2135417 , Bugzilla:1868526 , Bugzilla:1694705 , Bugzilla:1730502 , Bugzilla:1609288 , Bugzilla:1602962 , Bugzilla:1865745 , Bugzilla:1906870 , Bugzilla:1924016 , Bugzilla:1942888 , Bugzilla:1812577 , Bugzilla:1910358 , Bugzilla:1930576 , Bugzilla:1793389 , Bugzilla:1654962 , Bugzilla:1940674 , Bugzilla:2169382 , Bugzilla:1920086 , Bugzilla:1971506 , Bugzilla:2059262 , Bugzilla:2050411 , Bugzilla:2106341 , Bugzilla:2127028 , Bugzilla:2130159 , Bugzilla:2189645 , Bugzilla:1605216 , Bugzilla:1519039 , Bugzilla:1627455 , Bugzilla:1501618 , Bugzilla:1633143 , Bugzilla:1814836 , Bugzilla:1839311 , Bugzilla:1570255 , Bugzilla:1696451 , Bugzilla:1348508 , Bugzilla:1837187 , Bugzilla:1660337 , Bugzilla:2041686 , Bugzilla:1836977 , Bugzilla:1878207 , Bugzilla:1665295 , Bugzilla:1871863 , Bugzilla:1569610 , Bugzilla:1794513 kexec-tools Bugzilla:2111855 kmod Bugzilla:2103605 kmod-kvdo 
Bugzilla:2119819 , Bugzilla:2109047 krb5 Bugzilla:2125182 , Bugzilla:2125318 , Bugzilla:1877991 libdnf Bugzilla:2124483 libffi Bugzilla:2014228 libgnome-keyring Bugzilla:1607766 libguestfs Bugzilla:1554735 libreswan Bugzilla:2128672 , Bugzilla:2176248 , Bugzilla:1989050 libselinux-python-2.8-module Bugzilla:1666328 libsoup Bugzilla:1938011 libvirt Bugzilla:1664592 , Bugzilla:1332758 , Bugzilla:1528684 llvm-toolset Bugzilla:2118568 lvm2 Bugzilla:1496229 , Bugzilla:1768536 mariadb Bugzilla:1942330 mesa Bugzilla:1886147 mod_security Bugzilla:2143207 nfs-utils Bugzilla:2081114 , Bugzilla:1592011 nginx Bugzilla:2112345 nispor Bugzilla:2153166 nodejs Bugzilla:2178087 nss Bugzilla:1817533 , Bugzilla:1645153 nss_nis Bugzilla:1803161 openblas Bugzilla:2115722 opencryptoki Bugzilla:2110315 opencv Bugzilla:1886310 openmpi Bugzilla:1866402 opensc Bugzilla:2176973 , Bugzilla:1947025 , Bugzilla:2097048 openscap Bugzilla:2159290 , Bugzilla:2161499 openssh Bugzilla:2044354 openssl Bugzilla:1810911 oscap-anaconda-addon Bugzilla:2075508 , Bugzilla:1843932 , Bugzilla:1665082 , Bugzilla:2165948 pacemaker Bugzilla:2133497 , Bugzilla:2121852 , Bugzilla:2122806 pam Bugzilla:2068461 pcs Bugzilla:2132582 , Bugzilla:1816852 , Bugzilla:2112263 , Bugzilla:2112267 , Bugzilla:1918527 , Bugzilla:1619620 , Bugzilla:1851335 pki-core Bugzilla:1729215 , Bugzilla:2134093 , Bugzilla:1628987 podman Jira:RHELPLAN-136601 , Jira:RHELPLAN-136608 , Bugzilla:2119200 , Jira:RHELPLAN-136610 postfix Bugzilla:1711885 postgresql Bugzilla:2128241 powertop Bugzilla:2040070 pykickstart Bugzilla:1637872 python3.11 Bugzilla:2137139 python3.11-lxml Bugzilla:2157673 python36-3.6-module Bugzilla:2165702 qemu-kvm Bugzilla:2117149 , Bugzilla:2020133 , Bugzilla:1740002 , Bugzilla:1719687 , Bugzilla:1966475 , Bugzilla:1792683 , Bugzilla:2177957 , Bugzilla:1651994 rear Bugzilla:2130206 , Bugzilla:2172605 , Bugzilla:2131946 , Bugzilla:1925531 , Bugzilla:2083301 redhat-support-tool Bugzilla:2064575 , Bugzilla:1802026 restore Bugzilla:1997366 rhel-system-roles Bugzilla:2119600 , Bugzilla:2130019 , Bugzilla:2143814 , Bugzilla:2079009 , Bugzilla:2130332 , Bugzilla:2130345 , Bugzilla:2133532 , Bugzilla:2133931 , Bugzilla:2134201 , Bugzilla:2133856 , Bugzilla:2143458 , Bugzilla:2137667 , Bugzilla:2143385 , Bugzilla:2144876 , Bugzilla:2144877 , Bugzilla:2130362 , Bugzilla:2129620 , Bugzilla:2165176 , Bugzilla:2149683 , Bugzilla:2126960 , Bugzilla:2127497 , Bugzilla:2153081 , Bugzilla:2167941 , Bugzilla:2153080 , Bugzilla:2168733 , Bugzilla:2162782 , Bugzilla:2123859 , Bugzilla:2186908 , Bugzilla:2021685 , Bugzilla:2006081 rpm Bugzilla:2129345 , Bugzilla:2110787 , Bugzilla:1688849 rsync Bugzilla:2139118 rsyslog Bugzilla:2124934 , Bugzilla:2070496 , Bugzilla:2157658 , Bugzilla:1679512 , Jira:RHELPLAN-10431 rt-tests Bugzilla:2122374 rteval Bugzilla:2082260 rtla Bugzilla:2075203 rust-toolset Bugzilla:2123899 s390utils Bugzilla:2043833 samba Bugzilla:2132051 , Bugzilla:2009213 , Jira:RHELPLAN-13195 , Jira:RHELDOCS-16612 scap-security-guide Bugzilla:2072444 , Bugzilla:2152658 , Bugzilla:2156192 , Bugzilla:2158404 , Bugzilla:2119356 , Bugzilla:2122322 , Bugzilla:2115343 , Bugzilla:2152208 , Bugzilla:2099394 , Bugzilla:2151553 , Bugzilla:2162803 , Bugzilla:2028428 , Bugzilla:2118758 , Bugzilla:2167373 selinux-policy Bugzilla:1972230 , Bugzilla:2088441 , Bugzilla:2154242 , Bugzilla:2134125 , Bugzilla:2090711 , Bugzilla:2101341 , Bugzilla:2121709 , Bugzilla:2122838 , Bugzilla:2124388 , Bugzilla:2125008 , Bugzilla:2143696 , Bugzilla:2148561 , Bugzilla:1461914 sos 
Bugzilla:2164987 , Bugzilla:2134906 , Bugzilla:2011413 spice Bugzilla:1849563 sssd Bugzilla:2144519 , Bugzilla:2087247 , Bugzilla:2065692 , Bugzilla:2056483 , Bugzilla:1947671 subscription-manager Bugzilla:2170082 swig Bugzilla:2139076 synce4l Bugzilla:2019751 tang Bugzilla:2188743 texlive Bugzilla:2150727 tomcat Bugzilla:2160455 tuna Bugzilla:2121518 tuned Bugzilla:2133814 , Bugzilla:2113900 tzdata Bugzilla:2154109 udica Bugzilla:1763210 usbguard Bugzilla:2159409 , Bugzilla:2159411 , Bugzilla:2159413 vdo Bugzilla:1949163 virt-manager Bugzilla:2026985 wayland Bugzilla:1673073 weldr-client Bugzilla:2033192 wsmancli Bugzilla:2105316 xdp-tools Bugzilla:2160069 xorg-x11-server Bugzilla:1698565 other Bugzilla:2177769 , Jira:RHELPLAN-139125 , Jira:RHELPLAN-137505 , Jira:RHELPLAN-139430 , Jira:RHELPLAN-137416 , Jira:RHELPLAN-137411 , Jira:RHELPLAN-137406 , Jira:RHELPLAN-137403 , Jira:RHELPLAN-139448 , Jira:RHELPLAN-151481 , Jira:RHELPLAN-150266 , Jira:RHELPLAN-151121 , Jira:RHELPLAN-149091 , Jira:RHELPLAN-139424 , Jira:RHELPLAN-136489 , Bugzilla:2183445 , Jira:RHELPLAN-59528 , Jira:RHELPLAN-148303 , Bugzilla:2025814 , Bugzilla:2077770 , Bugzilla:1777138 , Bugzilla:1640697 , Bugzilla:1697896 , Bugzilla:1961722 , Bugzilla:1659609 , Bugzilla:1687900 , Bugzilla:1757877 , Bugzilla:1741436 , Jira:RHELPLAN-27987 , Jira:RHELPLAN-34199 , Jira:RHELPLAN-57914 , Jira:RHELPLAN-96940 , Bugzilla:1974622 , Bugzilla:2028361 , Bugzilla:2041997 , Bugzilla:2035158 , Jira:RHELPLAN-109613 , Bugzilla:2126777 , Bugzilla:1690207 , Bugzilla:1559616 , Bugzilla:1889737 , Bugzilla:1906489 , Bugzilla:1769727 , Jira:RHELPLAN-27394 , Jira:RHELPLAN-27737 , Jira:RHELPLAN-148394 , Bugzilla:1642765 , Bugzilla:1646541 , Bugzilla:1647725 , Bugzilla:1932222 , Bugzilla:1686057 , Bugzilla:1748980 , Jira:RHELPLAN-71200 , Jira:RHELPLAN-45858 , Bugzilla:1871025 , Bugzilla:1871953 , Bugzilla:1874892 , Bugzilla:1916296 , Jira:RHELPLAN-100400 , Bugzilla:1926114 , Bugzilla:1904251 , Bugzilla:2011208 , Jira:RHELPLAN-59825 , Bugzilla:1920624 , Jira:RHELPLAN-70700 , Bugzilla:1929173 , Jira:RHELPLAN-85066 , Jira:RHELPLAN-98983 , Bugzilla:2009113 , Bugzilla:1958250 , Bugzilla:2038929 , Bugzilla:2006665 , Bugzilla:2029338 , Bugzilla:2061288 , Bugzilla:2060759 , Bugzilla:2055826 , Bugzilla:2059626 , Jira:RHELPLAN-133171 , Bugzilla:2142499 , Jira:RHELPLAN-145958 , Jira:RHELPLAN-146398 , Jira:RHELPLAN-153267 | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.8_release_notes/list_of_tickets_by_component |
5.3. Requesting and Receiving Certificates | 5.3. Requesting and Receiving Certificates As explained in Section 5.1, "About Enrolling and Renewing Certificates" , once CSRs are generated, they need to be submitted to the CA for issuance. Some of the methods discussed in Section 5.2, "Creating Certificate Signing Requests" submit CSRs to the CA directly, while some would require submission of the CSRs in a separate step, which could either be carried out by the user or pre-signed by an agent. In this section, we are going to discuss the separate submission steps supported by the RHCS CA. Section 5.3.1, "Requesting and Receiving a Certificate through the End-Entities Page" Section 5.5, "Submitting Certificate requests Using CMC" 5.3.1. Requesting and Receiving a Certificate through the End-Entities Page At the CA End Entity portal (i.e. https:// host.domain : port# /ca/ee/ca), end entities can use the HTML enrollment forms presented at each applicable enrollment profile under the Enrollment/Renewal tab to submit their certificate requests (CSRs, see Section 5.2, "Creating Certificate Signing Requests" for how to generate CSRs). This section assumes that you have the CSR in Base64 encoded format, including the marker lines -----BEGIN NEW CERTIFICATE REQUEST----- and -----END NEW CERTIFICATE REQUEST----- . Many of the default enrollment profiles provide a Certificate Request text box where one could paste in the Base64 encoded CSR, along with a Certificate Request Type selection drop down list. In the certificate enrollment form, enter the required information. The standard requirements are as follows: Certificate Request Type . This is either PKCS#10 or CRMF. Certificate requests created through the subsystem administrative console are PKCS #10; those created through the certutil tool and other utilities are usually PKCS #10. Certificate Request . Paste the base-64 encoded blob, including the -----BEGIN NEW CERTIFICATE REQUEST----- and -----END NEW CERTIFICATE REQUEST----- marker lines. Requester Name . This is the common name of the person requesting the certificate. Requester Email . This is the email address of the requester. The agent or CA system will use this address to contact the requester when the certificate is issued. For example, [email protected] . Requester Phone . This is the contact phone number of the requester. The submitted request is queued for agent approval. An agent needs to process and approve the certificate request. Note Some enrollment profiles may allow automatic approval such as by using the LDAP uid/pwd authentication method offered by Red Hat Certificate System. Enrollments through those profiles would not require manual agent approval in the section. See Chapter 10, Authentication for Enrolling Certificates for supported approval methods. In case of manual approval, once the certificate is approved and generated, you can retrieve the certificate. Open the Certificate Manager end-entities page, for example: Click the Retrieval tab. Fill in the request ID number that was created when the certificate request was submitted, and click Submit . The page shows the status of the certificate request. If the status is complete , then there is a link to the certificate. Click the Issued certificate link. The new certificate information is shown in pretty-print format, in base-64 encoded format, and in PKCS #7 format. 
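As noted at the start of this section, requests created with the certutil tool are PKCS #10. A minimal, hedged example of generating such a base-64 encoded request with certutil (the NSS database path, subject, and file name are illustrative):
USD certutil -R -d /path/to/nssdb -s "CN=server.example.com,O=Example Corp" -g 2048 -a -o server.csr
The resulting file can then be pasted into the Certificate Request text box described above.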
The following actions can be taken through this page: To install this certificate on a server or other application, scroll down to the Installing This Certificate in a Server section, which contains the base-64 encoded certificate. Copy the base-64 encoded certificate, including the -----BEGIN CERTIFICATE----- and -----END CERTIFICATE----- marker lines, to a text file. Save the text file, and use it to store a copy of the certificate in the security module of the entity where the private key resides. See Section 15.3.2.1, "Creating Users" . | [
"http s ://server.example.com: 8443/ca/ee/ca"
] | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/requesting-certificates |
Chapter 15. KVM Migration This chapter covers the migration of guest virtual machines from one host physical machine that runs the KVM hypervisor to another. Migrating guests is possible because virtual machines run in a virtualized environment instead of directly on the hardware. 15.1. Migration Definition and Benefits Migration works by sending the state of the guest virtual machine's memory and any virtualized devices to a destination host physical machine. It is recommended to use shared, networked storage to store the guest's images to be migrated. It is also recommended to use libvirt-managed storage pools for shared storage when migrating virtual machines. Migrations can be performed both with live (running) and non-live (shut-down) guests. In a live migration, the guest virtual machine continues to run on the source host machine, while the guest's memory pages are transferred to the destination host machine. During migration, KVM monitors the source for any changes in pages it has already transferred, and begins to transfer these changes when all of the initial pages have been transferred. KVM also estimates transfer speed during migration, so when the estimated time needed to transfer the remaining data falls below a configurable threshold (10ms by default), KVM suspends the original guest virtual machine, transfers the remaining data, and resumes the same guest virtual machine on the destination host physical machine. In contrast, a non-live migration (offline migration) suspends the guest virtual machine and then copies the guest's memory to the destination host machine. The guest is then resumed on the destination host machine and the memory the guest used on the source host machine is freed. The time it takes to complete such a migration depends only on network bandwidth and latency. If the network is experiencing heavy use or low bandwidth, the migration will take much longer. Note that if the original guest virtual machine modifies pages faster than KVM can transfer them to the destination host physical machine, offline migration must be used, as a live migration would never complete. Migration is useful for: Load balancing Guest virtual machines can be moved to host physical machines with lower usage if their host machine becomes overloaded, or if another host machine is under-utilized. Hardware independence When you need to upgrade, add, or remove hardware devices on the host physical machine, you can safely relocate guest virtual machines to other host physical machines. This means that guest virtual machines do not experience any downtime for hardware improvements. Energy saving Virtual machines can be redistributed to other host physical machines, and the unloaded host systems can thus be powered off to save energy and cut costs in low usage periods. Geographic migration Virtual machines can be moved to another location for lower latency or when required for other reasons. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/chap-KVM_live_migration
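For reference, a live migration as described in this chapter is typically started with the virsh migrate command; a minimal, hedged sketch (the guest name and destination URI are illustrative):
USD virsh migrate --live --verbose guest1 qemu+ssh://destination.example.com/system
For a shut-down guest, the equivalent offline migration adds the --offline and --persistent options.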
Chapter 64. CMIS Component | Chapter 64. CMIS Component Available as of Camel version 2.11 The cmis component uses the Apache Chemistry client API and allows you to add/read nodes to/from a CMIS compliant content repositories. 64.1. URI Format cmis://cmisServerUrl[?options] You can append query options to the URI in the following format, ?options=value&option2=value&... 64.2. CMIS Options The CMIS component supports 2 options, which are listed below. Name Description Default Type sessionFacadeFactory (common) To use a custom CMISSessionFacadeFactory to create the CMISSessionFacade instances CMISSessionFacade Factory resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The CMIS endpoint is configured using URI syntax: with the following path and query parameters: 64.2.1. Path Parameters (1 parameters): Name Description Default Type cmsUrl Required URL to the cmis repository String 64.2.2. Query Parameters (13 parameters): Name Description Default Type pageSize (common) Number of nodes to retrieve per page 100 int readContent (common) If set to true, the content of document node will be retrieved in addition to the properties false boolean readCount (common) Max number of nodes to read int repositoryId (common) The Id of the repository to use. If not specified the first available repository is used String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean query (consumer) The cmis query to execute against the repository. If not specified, the consumer will retrieve every node from the content repository by iterating the content tree recursively String exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern queryMode (producer) If true, will execute the cmis query from the message body and return result, otherwise will create a node in the cmis repository false boolean sessionFacadeFactory (advanced) To use a custom CMISSessionFacadeFactory to create the CMISSessionFacade instances CMISSessionFacade Factory synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean password (security) Password for the cmis repository String username (security) Username for the cmis repository String 64.3. Spring Boot Auto-Configuration The component supports 3 options, which are listed below. Name Description Default Type camel.component.cmis.enabled Enable cmis component true Boolean camel.component.cmis.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. 
true Boolean camel.component.cmis.session-facade-factory To use a custom CMISSessionFacadeFactory to create the CMISSessionFacade instances. The option is a org.apache.camel.component.cmis.CMISSessionFacadeFactory type. String 64.4. Usage 64.4.1. Message headers evaluated by the producer Header Default Value Description CamelCMISFolderPath / The current folder to use during the execution. If not specified will use the root folder CamelCMISRetrieveContent false In queryMode this header will force the producer to retrieve the content of document nodes. CamelCMISReadSize 0 Max number of nodes to read. cmis:path null If CamelCMISFolderPath is not set, will try to find out the path of the node from this cmis property and it is name cmis:name null If CamelCMISFolderPath is not set, will try to find out the path of the node from this cmis property and it is path cmis:objectTypeId null The type of the node cmis:contentStreamMimeType null The mimetype to set for a document 64.4.2. Message headers set during querying Producer operation Header Type Description CamelCMISResultCount Integer Number of nodes returned from the query. The message body will contain a list of maps, where each entry in the map is cmis property and its value. If CamelCMISRetrieveContent header is set to true, one additional entry in the map with key CamelCMISContent will contain InputStream of the document type of nodes. 64.5. Dependencies Maven users will need to add the following dependency to their pom.xml. pom.xml <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-cmis</artifactId> <version>USD{camel-version}</version> </dependency> where USD{camel-version } must be replaced by the actual version of Camel (2.11 or higher). 64.6. See Also Configuring Camel Component Endpoint Getting Started | [
"cmis://cmisServerUrl[?options]",
"cmis:cmsUrl",
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-cmis</artifactId> <version>USD{camel-version}</version> </dependency>"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/cmis-component |
Chapter 3. What does the subscriptions service track? | Chapter 3. What does the subscriptions service track? The subscriptions service currently tracks and reports usage information for Red Hat Enterprise Linux, some Red Hat OpenShift products, and some Red Hat Ansible products. The subscriptions service identifies subscriptions through their stock-keeping units, or SKUs. Only a subset of Red Hat SKUs are tracked by the subscriptions service. In the usage reporting for a product, the tracked SKUs in your account contribute to the maximum capacity information, also known as the subscription threshold, for that product. For the SKUs that are not tracked, the subscriptions service maintains an explicit deny list within the source code. To learn more about the SKUs that are not tracked, you can view this deny list in the code repository. 3.1. Red Hat Enterprise Linux The subscriptions service tracks RHEL Annual subscription usage on physical systems, virtual systems, hypervisors, and public cloud. For a limited subset of subscriptions, currently Red Hat Enterprise Linux Extended Life Cycle Support Add-On on Amazon Web Services (AWS), it tracks RHEL pay-as-you-go On-Demand subscription usage for instances running in public cloud providers. If your RHEL installations predate certificate-based subscription management, the subscriptions service will not track that inventory. 3.1.1. RHEL with a traditional Annual subscription The subscriptions service tracks RHEL usage in sockets, as follows: Tracks physical RHEL usage in CPU sockets, where usage is counted by socket pairs. Tracks virtualized RHEL by the installed socket count for standard guest subscriptions with no detectable hypervisor management, where one virtual machine equals one socket. Tracks hypervisor RHEL usage in CPU sockets, with the socket-pair method, for virtual data center (VDC) subscriptions and similar virtualized environments. RHEL based hypervisors are counted both for the copy of RHEL that is used to run the hypervisor and the copy of RHEL for the virtual guests. Hypervisors that are not RHEL based are counted for the copy of RHEL for the virtual guests. Tracks public cloud RHEL instance usage in sockets, where one instance equals one socket. Additionally, tracks Red Hat Satellite to enable the visibility of RHEL that is bundled with Satellite. 3.1.2. RHEL with a pay-as-you-go On-Demand subscription The subscriptions service tracks metered RHEL in vCPU hours, as follows: Tracks pay-as-you-go On-Demand instance usage in virtual CPU hours (vCPU hours), a measurement of availability for computational activity on one virtual core (as defined by the subscription terms), for a total of one hour, measured to the granularity of the meter that is used. For RHEL pay-as-you-go On-Demand subscription usage, availability for computational activity is the availability of the RHEL instance over time. Note Currently, Red Hat Enterprise Linux for Third Party Linux Migration with Extended Life Cycle Support Add-on is the only RHEL pay-as-you-go On-Demand subscription offering that is tracked by the subscriptions service. The subscriptions service ultimately aggregates all instance vCPU hour data in the account into a monthly total, the unit of time that is used by the billing service for the cloud provider marketplace. 3.2. Red Hat OpenShift Generally, the subscriptions service tracks Red Hat OpenShift usage, such as the usage of Red Hat OpenShift Container Platform, as cluster size on physical and virtual systems. 
The cluster size is the sum of all subscribed nodes. A subscribed node is a compute or worker node that runs workloads, as opposed to a control plane or infrastructure node that manages the cluster. Beyond the general rule for cluster-based tracking by cluster size, this tracking is dependent on several factors: The Red Hat OpenShift product The type of subscription that was purchased for that product The version of that product The unit of measurement for the product, as defined by the subscription terms, that determines how cluster size and overall usage is calculated The structure of nodes, including any labels used to assign node roles and the configuration of scheduling to control pod placement on nodes However, there are exceptions for some other Red Hat OpenShift products and add-ons that track consumption of resources related to different types of workloads, such as data transfer and data storage for workload activities, or instance availability for consumption of control plane resources. 3.2.1. Impact of various factors on Red Hat OpenShift tracking The subscriptions service tracks and reports usage for fully managed and self-managed Red Hat OpenShift products in both physical and virtualized environments. Due to changes in the reporting models between Red Hat OpenShift major versions 3 and 4, usage data for version 3 is reported at the node level, while usage data for version 4 is reported and aggregated at the cluster level. The following information is more applicable to the version 4 reporting model, with data aggregated at the cluster level. Much of the work for the counting of Red Hat OpenShift usage takes place in the monitoring stack tools and OpenShift Cluster Manager. These tools then send core count or vCPU count data, as applicable, to the subscriptions service for usage reporting. The core and vCPU data is based on the subscribed cluster size, which is derived from the cluster nodes that are processing workloads. For fully managed Red Hat OpenShift products, such as Red Hat OpenShift Dedicated or Red Hat OpenShift AI, usage counting is generally time-based, measured in units such as core hours or vCPU hours. The infrastructure of the Red Hat managed environment is more consistently available to Red Hat, including the monitoring stack tools and OpenShift Cluster Manager. Data for subscribed nodes, the nodes that can accept workloads, is readily discoverable, as is data about cores, vCPUs and other facts that contribute to usage data for the subscriptions service. For self-managed Red Hat OpenShift products, such as Red Hat OpenShift Container Platform Annual and Red Hat OpenShift Container Platform On-Demand, usage counting is generally based on cores. The infrastructure of a customer-designed environment is less predictable, and in some cases facts pertinent to usage calculations might be less accessible, especially in a virtualized, x86-based environment. Because some of these facts might be less accessible, the usage counting process contains assumptions about simultaneous multithreading, also known as hyperthreading, that are applied when analyzing and reporting usage data for virtualized Red Hat OpenShift clusters for x86 architectures. These assumptions are necessary because some vendors offer hypervisors that do not expose data about simultaneous multithreading to the guests. 
Ongoing analysis and customer feedback have resulted in incremental improvements, both to the subscriptions service and to the associated data pipeline, that have enhanced the accuracy of usage counting for hyperthreading use cases. The foundational assumption currently used in the subscriptions service reporting is that simultaneous multithreading occurs at a factor of 2 threads per core. Internal research has shown that this factor is the most common configuration, applicable to a significant majority of customers. Therefore, assuming 2 threads per core follows common multithreading best practices and errs in favor of the small percentage of customers, approximately 10%, who are not using multithreading. This decision is the most equitable for all customers when deriving the number of cores from the observed number of threads. Note A limited amount of self-managed Red Hat OpenShift offerings are available as socket-based subscriptions. For those socket-based subscriptions, the hypervisor reports the number of sockets to the operating system, usually Red Hat Enterprise Linux CoreOS, and that socket count is sent to the subscriptions service for usage tracking. The subscriptions service tracks and reports socket-based subscriptions with the socket-pair method that is used for RHEL. 3.2.2. Core-based usage counting workflow for self-managed Red Hat OpenShift products For self-managed Red Hat OpenShift products such as Red Hat OpenShift Container Platform Annual and Red Hat OpenShift Container Platform On-Demand, the counting process initiated by the monitoring stack tools and OpenShift Cluster Manager works as follows: For a cluster, node types and node labels are examined to determine which nodes are subscribed nodes. A subscribed node is a node that can accept workloads. Only subscribed nodes contribute to usage counting for the subscriptions service. The chip architecture for the node is examined to determine if the architecture is x86-based. If the architecture is x86-based, then simultaneous multithreading, also known as hyperthreading, must be considered during the usage counting. If the chip architecture is not x86-based, the monitoring stack counts usage according to the cores associated with the subscribed nodes and sends that core count to the subscriptions service. If the chip architecture is x86-based, the monitoring stack counts usage according to the number of threads on the subscribed nodes. Threads equate to vCPUs, according to the Red Hat definition of vCPUs. This counting method applies whether the multithreading data can accurately be detected, the multithreading data is ambiguous or missing, or multithreading data is specifically set to a value of false on the node. Based on a global assumption of multithreading at a factor of 2, the number of threads is divided by 2 to determine the number of cores. The core count is then sent to the subscriptions service. 3.2.3. Understanding the subscribed cluster size compared to the total cluster size For Red Hat OpenShift, the subscriptions service does not focus merely on the total size of the cluster and the nodes within it. The subscriptions service focuses on the subscribed portion of clusters, that is, the cluster nodes that are processing workloads. Therefore, the subscriptions service reporting is for the subscribed cluster size , not the entire size of the cluster. 3.2.4. 
Determining the subscribed cluster size To determine the subscribed cluster size, the data collection tools and the subscriptions service examine both the node type and the presence of node labels. The subscriptions service uses this data to determine which nodes can accept workloads. The sum of all noninfrastructure nodes plus master nodes that are schedulable is considered available for workload use. The nodes that are available for workload use are counted as subscribed nodes, contribute to the subscribed cluster size, and appear in the usage reporting for the subscriptions service. The following information provides additional details about how node labels affect the countability of those nodes and in turn affect subscribed cluster size. Analysis of both internal and customer environments shows that these labels and label combinations represent the majority of customer configurations. Table 3.1. How nodes contribute to the subscribed cluster size Node label Usage counted Exceptions worker yes Unless there is a combination of the worker label with an infra label worker + infra no See Note custom label yes Unless there is a combination of the custom label with the master, infra, or control plane label custom label + master, infra, control plane (any combination) no master + infra + control plane (any combination) no Unless there is a master label present and the node is marked as schedulable schedulable master + infra, control plane (any combination) yes Note A known issue with the Red Hat OpenShift monitoring stack tools can result in unexpected core counts for Red Hat OpenShift Container Platform versions earlier than 4.12. For those versions, the number of worker nodes can be artificially elevated. For OpenShift Container Platform versions earlier than 4.12, the Machine Config Operator does not support a dual assignment of infra and worker roles on a node. The counting of worker nodes is correct in OpenShift Container Platform according to the principles of counting subscribed nodes, and this count will display correctly in the OpenShift Container Platform web console. However, when the monitoring stack tools analyze this data and send it to the subscriptions service and other services in the Hybrid Cloud Console, the Machine Config Operator ignores the dual roles and sets the role on the node to worker. Therefore, worker node counts will be elevated in the subscriptions service and in OpenShift Cluster Manager. 3.2.5. Red Hat OpenShift Container Platform with a traditional Annual subscription The subscriptions service tracks Red Hat OpenShift Container Platform usage in CPU cores or sockets for clusters and aggregates this data into an account view, as refined by the following version support: RHOCP 4.1 and later with Red Hat Enterprise Linux CoreOS based nodes or a mixed environment of Red Hat Enterprise Linux CoreOS and RHEL based nodes RHOCP 3.11 For RHOCP subscription usage, there was a change in reporting models between the major 3 and 4 versions. Version 3 usage is considered at the node level and version 4 usage is considered at the cluster level. The difference in reporting models for the RHOCP major versions also results in some differences in how the subscriptions service and the associated services in the Hybrid Cloud Console calculate usage. For RHOCP version 4, the subscriptions service follows the rules for examining node types and node labels to calculate the subscribed cluster size as described in Determining the subscribed cluster size . 
The subscriptions service recognizes and ignores the parts of the cluster that perform overhead tasks and do not accept workloads. The subscriptions service recognizes and tracks only the parts of the cluster that do accept workloads. However, for RHOCP version 3.11, the version 3 era reporting model cannot distinguish the parts of the cluster that perform overhead tasks and do not accept workloads, so the reporting model cannot find the subscribed and nonsubscribed nodes. Therefore, for RHOCP version 3.11, you can assume that approximately 15% of the subscription data reported by the subscriptions service is overhead for the nonsubscribed nodes that perform infrastructure-related tasks. This percentage is based on analysis of cluster overhead in RHOCP version 3 installations. In this particular case, usage results that show up to 15% over capacity are likely to still be in compliance. 3.2.6. Red Hat OpenShift Container Platform or Red Hat OpenShift Dedicated with a pay-as-you-go On-Demand subscription RHOCP or OpenShift Dedicated 4.7 and later The subscriptions service tracks RHOCP or OpenShift Dedicated 4.7 and later usage from a pay-as-you-go On-Demand subscription in core hours, a measurement of cluster size in CPU cores over a range of time. For an OpenShift Dedicated On-Demand subscription, consumption of control plane resources by the availability of the service instance is tracked in instance hours. The subscriptions service ultimately aggregates all cluster core hour and instance hour data in the account into a monthly total, the unit of time that is used by the billing service for Red Hat Marketplace. As described in the information about RHOCP 4.1 and later, The subscriptions service recognizes and tracks only the parts of the cluster that contain compute nodes, also commonly called worker nodes. 3.2.7. Red Hat OpenShift Service on AWS Hosted Control Planes with a pre-paid plus On-Demand subscription The subscriptions service tracks Red Hat OpenShift Service on AWS Hosted Control Planes (ROSA Hosted Control Planes) usage from a pre-paid plus On-Demand subscription in vCPU hours and in control plane hours. A vCPU hour is a measurement of availability for computational activity on one virtual core (as defined by the subscription terms) for a total of one hour, measured to the granularity of the meter that is used. For ROSA Hosted Control Planes, availability for computational activity is the availability of the vCPUs for the ROSA Hosted Control Planes subscribed clusters over time. A subscribed cluster is comprised of subscribed nodes, which are the noninfrastructure nodes plus schedulable master nodes that are available for workload use, if applicable. Note that for ROSA Hosted Control Planes, schedulable master nodes are not applicable, unlike other products that also use this measurement. The vCPUs that are available to run the workloads for a subscribed cluster contribute to the vCPU hour count. A control plane hour is a measurement of the availability of the control plane. With ROSA Hosted Control Planes, each cluster has a dedicated control plane that is isolated in a ROSA Hosted Control Planes service account that is owned by Red Hat. 3.2.8. Red Hat OpenShift AI with a pay-as-you-go On-Demand subscription The subscriptions service tracks Red Hat OpenShift AI (RHOAI) in vCPU hours, a measurement of availability for computational activity on one virtual core (as defined by the subscription terms), for a total of one hour, measured to the granularity of the meter that is used. 
For RHOAI pay-as-you-go On-Demand subscription usage, the availability for computational activity is the availability of the cluster over time. The subscriptions service ultimately aggregates all cluster vCPU hour data in the account into a monthly total, the unit of time that is used by the billing service for the cloud provider marketplace. 3.2.9. Red Hat Advanced Cluster Security for Kubernetes with a pay-as-you-go On-Demand subscription The subscriptions service tracks Red Hat Advanced Cluster Security for Kubernetes (RHACS) in vCPU hours, a measurement of availability for computational activity on one virtual core (as defined by the subscription terms), for a total of one hour, measured to the granularity of the meter that is used. For RHACS pay-as-you-go On-Demand subscription usage, the availability for computational activity is the availability of the cluster over time. The subscriptions service aggregates all cluster vCPU hour data and then sums the data for each cluster where RHACS is running into a monthly total, the unit of time that is used by the billing service for the cloud provider marketplace. 3.3. Red Hat Ansible The subscriptions service tracks usage of Red Hat Ansible products according to the consumption of resources related to different types of workloads, such as the number of managed execution nodes that run playbooks or instance availability for the control plane. 3.3.1. Red Hat Ansible Automation Platform, as a managed service The subscriptions service tracks usage of the managed service offering of Red Hat Ansible Automation Platform in managed nodes and infrastructure hours. A managed node is a measurement of the number of unique managed nodes that are used within the monthly billing cycle, where the usage is tracked by the invoking of an Ansible task against that node. An infrastructure hour is a measurement of the availability of the Ansible Automation Platform infrastructure. Each deployment of Ansible Automation Platform has a dedicated control plane that is isolated in a service account that is owned and managed by Red Hat. Additional resources For more information about the purpose of the subscriptions service deny list, see the What subscriptions (SKUs) are included in Subscription Usage? article. For more information about the contents of the subscriptions service deny list, including the specific SKUs in that list, see the deny list source code in GitHub. | null | https://docs.redhat.com/en/documentation/subscription_central/1-latest/html/getting_started_with_the_subscriptions_service/ref-what-does-subscriptionwatch-track_assembly-about-subscriptionwatch-ctxt |
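As a rough illustration of the core-based counting workflow for self-managed Red Hat OpenShift described above (not the subscriptions service's actual implementation), you can approximate the subscribed core count of an x86 cluster with the oc client; the node name below is a placeholder and the arithmetic simply applies the global 2-threads-per-core assumption:

```
# List nodes with their role labels to see which ones accept workloads
oc get nodes -L node-role.kubernetes.io/worker,node-role.kubernetes.io/infra

# Read the vCPU (thread) capacity of one subscribed node
oc get node <subscribed-node-name> -o jsonpath='{.status.capacity.cpu}{"\n"}'

# Apply the 2-threads-per-core assumption to derive cores from threads
echo $(( 16 / 2 ))   # a node reporting 16 vCPUs counts as 8 cores
```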
Appendix A. Component Versions | Appendix A. Component Versions This appendix provides a list of key components and their versions in the Red Hat Enterprise Linux 7.6 release. Table A.1. Component Versions Component Version kernel 3.10.0-957 kernel-alt 4.14.0-115 QLogic qla2xxx driver 10.00.00.06.07.6-k QLogic qla4xxx driver 5.04.00.00.07.02-k0 Emulex lpfc driver 0:12.0.0.5 iSCSI initiator utils ( iscsi-initiator-utils ) 6.2.0.874-10 DM-Multipath ( device-mapper-multipath ) 0.4.9-123 LVM ( lvm2 ) 2.02.180-8 qemu-kvm [a] 1.5.3-160 qemu-kvm-ma [b] 2.12.0-18 [a] The qemu-kvm packages provide KVM virtualization on AMD64 and Intel 64 systems. [b] The qemu-kvm-ma packages provide KVM virtualization on IBM POWER8, IBM POWER9, and IBM Z. Note that KVM virtualization on IBM POWER9 and IBM Z also requires using the kernel-alt packages. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.6_release_notes/component_versions |
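If you want to compare a running RHEL 7.6 system against the component versions listed above, the following commands are one way to query the installed packages and loaded driver modules; the package and module names are taken from the table and some may not be installed on your system:

```
rpm -q kernel kernel-alt device-mapper-multipath lvm2 qemu-kvm iscsi-initiator-utils
modinfo qla2xxx qla4xxx lpfc | grep -i '^version'
```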
A.17. libguestfs Troubleshooting | A.17. libguestfs Troubleshooting A test tool is available to check that libguestfs is working. After installing libguestfs, run the libguestfs-test-tool command (root access is not required) to test for normal operation: This tool prints a large amount of text while it exercises libguestfs. If the test is successful, the following text appears near the end of the output: | [
"libguestfs-test-tool",
"===== TEST FINISHED OK ====="
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-troubleshooting-libguestfs_troubleshooting |
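For example, to run the test and keep only the tail of its verbose output, where the success marker appears, something like the following works (no root access required):

```
libguestfs-test-tool 2>&1 | tail -n 5
```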
Chapter 2. Certification prerequisites | Chapter 2. Certification prerequisites Note A strong working knowledge of Red Hat Enterprise Linux and Red Hat OpenStack is required. Red Hat Certified Engineer and Red Hat OpenStack Platform Certified Engineer accreditations are recommended before participating. 2.1. Partner eligibility criteria Ensure that you meet the following requirements before applying for a Red Hat bare-metal hardware certification: You are part of the Red Hat Hardware Certification program . You are in a support relationship with Red Hat by means of the TSANet network or a custom support agreement. 2.2. Certification targets The certification targets provide details and requirements about the components and products relevant to the certification. Specific information for each of the certification components is provided when applicable. 2.2.1. Server Ensure that the server has the following certifications: Red Hat Enterprise Linux System Red Hat OpenStack Platform Compute Node Each certification is keyed to the specific Cloud Platform product version and its associated ironic revision. You can certify your server for RHOSP if your hardware is compatible with the ironic drivers for that platform. The server must have a baseboard management controller (BMC) installed. 2.2.2. Red Hat Cloud Platform Products Bare-metal certification Through this program you can certify BMCs and bare-metal servers on Red Hat OpenStack Platform 17.1. 2.2.3. Baseboard management controllers (BMC) A BMC is a specialized microcontroller on a server's motherboard that manages the interface between systems management software and physical hardware. The bare metal service in Red Hat Platforms provisions systems in a cluster by using the BMC to control power and network booting and to automate node deployment and termination. A BMC can be certified as a component and then leveraged across multiple server systems. As in other Red Hat Hardware Certification programs, Red Hat leverages partners' internal quality testing to streamline the certification process without adding risk to customer environments. Red Hat recommends that partners who use component leveraging in bare-metal hardware certifications conduct their testing with the specific server system, BMC, and Red Hat cloud platform product to validate each combination. However, you do not need to submit individual certification results to Red Hat for every combination. 2.2.4. Bare Metal Drivers IPI component certification BMCs must use the supported Red Hat OpenStack Platform Bare Metal Drivers provided in the corresponding Red Hat Cloud platform product. You cannot certify a BMC that requires an ironic driver that is not included in the Red Hat product. | null | https://docs.redhat.com/en/documentation/red_hat_hardware_certification/2025/html/red_hat_openstack_platform_hardware_bare_metal_certification_policy_guide/assembly-prerequisites_rhosp-bm-pol-introduction |
22.5. UTC, Timezones, and DST | 22.5. UTC, Timezones, and DST As NTP is entirely in UTC (Universal Time, Coordinated), Timezones and DST (Daylight Saving Time) are applied locally by the system. The file /etc/localtime is a copy of, or symlink to, a zone information file from /usr/share/zoneinfo . The RTC may be in localtime or in UTC, as specified by the 3rd line of /etc/adjtime , which will be one of LOCAL or UTC to indicate how the RTC clock has been set. Users can easily change this setting using the check box System Clock Uses UTC in the system-config-date graphical configuration tool. See Chapter 2, Date and Time Configuration for information on how to use that tool. Running the RTC in UTC is recommended to avoid various problems when daylight saving time is changed. The operation of ntpd is explained in more detail in the man page ntpd(8) . The resources section lists useful sources of information. See Section 22.19, "Additional Resources" . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s1-utc_timezones_and_dst |
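To check how your system is currently configured, you can inspect the files mentioned above directly; a quick sketch:

```
# The third line of /etc/adjtime is either UTC or LOCAL
sed -n '3p' /etc/adjtime

# Show which zone information file /etc/localtime is (or was copied from)
ls -l /etc/localtime
```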
Chapter 88. Mail Microsoft Oauth | Chapter 88. Mail Microsoft Oauth Since Camel 3.18.4 . The Mail Microsoft OAuth2 component provides an implementation of org.apache.camel.component.mail.MailAuthenticator to authenticate IMAP/POP/SMTP connections and access to email via Spring's Mail support and the underlying JavaMail system. 88.1. Dependencies Add the following dependency to your pom.xml for this component: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-mail-microsoft-oauth</artifactId> </dependency> Importing camel-mail-microsoft-oauth automatically imports the camel-mail component. 88.2. Microsoft Exchange Online OAuth2 Mail Authenticator IMAP sample To use OAuth, an application must be registered with Azure Active Directory. Follow the instructions to register a new application. Procedure Enable the application to access Exchange mailboxes via the client credentials flow. For more information, see Authenticate an IMAP, POP or SMTP connection using OAuth . Once everything is set up, declare an instance of org.apache.camel.component.mail.MicrosoftExchangeOnlineOAuth2MailAuthenticator and register it in the registry. For example, in a Spring Boot application: @BindToRegistry("auth") public MicrosoftExchangeOnlineOAuth2MailAuthenticator exchangeAuthenticator(){ return new MicrosoftExchangeOnlineOAuth2MailAuthenticator(tenantId, clientId, clientSecret, "[email protected]"); } Then reference it in the Camel URI as follows: from("imaps://outlook.office365.com:993" + "?authenticator=#auth" + "&mail.imaps.auth.mechanisms=XOAUTH2" + "&debugMode=true" + "&delete=false") | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-mail-microsoft-oauth</artifactId> </dependency>",
"@BindToRegistry(\"auth\") public MicrosoftExchangeOnlineOAuth2MailAuthenticator exchangeAuthenticator(){ return new MicrosoftExchangeOnlineOAuth2MailAuthenticator(tenantId, clientId, clientSecret, \"[email protected]\"); }",
"from(\"imaps://outlook.office365.com:993\" + \"?authenticator=#auth\" + \"&mail.imaps.auth.mechanisms=XOAUTH2\" + \"&debugMode=true\" + \"&delete=false\")"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-mail-microsoft-oauth-component-starter |
Chapter 4. Distribution of content in RHEL 8 | Chapter 4. Distribution of content in RHEL 8 4.1. Installation Red Hat Enterprise Linux 8 is installed using ISO images. Two types of ISO image are available for the AMD64, Intel 64-bit, 64-bit ARM, IBM Power Systems, and IBM Z architectures: Binary DVD ISO: A full installation image that contains the BaseOS and AppStream repositories and allows you to complete the installation without additional repositories. Note The Binary DVD ISO image is larger than 4.7 GB, and as a result, it might not fit on a single-layer DVD. A dual-layer DVD or USB key is recommended when using the Binary DVD ISO image to create bootable installation media. You can also use the Image Builder tool to create customized RHEL images. For more information about Image Builder, see the Composing a customized RHEL system image document. Boot ISO: A minimal boot ISO image that is used to boot into the installation program. This option requires access to the BaseOS and AppStream repositories to install software packages. The repositories are part of the Binary DVD ISO image. See the Interactively installing RHEL from installation media document for instructions on downloading ISO images, creating installation media, and completing a RHEL installation. For automated Kickstart installations and other advanced topics, see the Automatically installing RHEL document. 4.2. Repositories Red Hat Enterprise Linux 8 is distributed through two main repositories: BaseOS AppStream Both repositories are required for a basic RHEL installation, and are available with all RHEL subscriptions. Content in the BaseOS repository is intended to provide the core set of the underlying OS functionality that provides the foundation for all installations. This content is available in the RPM format and is subject to support terms similar to those in releases of RHEL. For a list of packages distributed through BaseOS, see the Package manifest . Content in the Application Stream repository includes additional user space applications, runtime languages, and databases in support of the varied workloads and use cases. Application Streams are available in the familiar RPM format, as an extension to the RPM format called modules , or as Software Collections. For a list of packages available in AppStream, see the Package manifest . In addition, the CodeReady Linux Builder repository is available with all RHEL subscriptions. It provides additional packages for use by developers. Packages included in the CodeReady Linux Builder repository are unsupported. For more information about RHEL 8 repositories, see the Package manifest . 4.3. Application Streams Red Hat Enterprise Linux 8 introduces the concept of Application Streams. Multiple versions of user space components are now delivered and updated more frequently than the core operating system packages. This provides greater flexibility to customize Red Hat Enterprise Linux without impacting the underlying stability of the platform or specific deployments. Components made available as Application Streams can be packaged as modules or RPM packages and are delivered through the AppStream repository in RHEL 8. Each Application Stream component has a given life cycle, either the same as RHEL 8 or shorter. For details, see Red Hat Enterprise Linux Life Cycle . Modules are collections of packages representing a logical unit: an application, a language stack, a database, or a set of tools. These packages are built, tested, and released together. 
Module streams represent versions of the Application Stream components. For example, several streams (versions) of the PostgreSQL database server are available in the postgresql module, with postgresql:10 as the default stream. Only one stream of a given module can be installed on the system at a time. Different versions can be used in separate containers. Detailed module commands are described in the Installing, managing, and removing user-space components document. For a list of modules available in AppStream, see the Package manifest . 4.4. Package management with YUM/DNF On Red Hat Enterprise Linux 8, software installation is handled by the YUM tool, which is based on the DNF technology. The yum term is deliberately retained for consistency with previous major versions of RHEL. However, if you type dnf instead of yum , the command works as expected because yum is an alias to dnf for compatibility. For more details, see the following documentation: Installing, managing, and removing user-space components Considerations in adopting RHEL 8 | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.1_release_notes/distribution-of-content-in-rhel-8 |
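For example, using the postgresql module mentioned above, listing and installing a specific stream looks roughly like this (stream availability depends on your RHEL 8 minor release):

```
yum module list postgresql
sudo yum module install postgresql:10
```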
Chapter 3. Playing back recorded sessions | Chapter 3. Playing back recorded sessions There are two methods for replaying recorded sessions: the tlog-play tool and the RHEL 8 web console, also referred to as Cockpit . 3.1. Playback with tlog-play You can use the tlog-play tool to play back session recordings in a terminal. The tlog-play tool is a playback program for terminal input and output recorded with the tlog-rec tool. It reproduces the recording in the terminal it runs in, but cannot change that terminal's size. For this reason, the playback terminal needs to match the recorded terminal size for proper playback. The tlog-play tool loads its parameters from the /etc/tlog/tlog-play.conf configuration file. You can override those parameters with the command-line options described in the tlog-play manual pages. 3.2. Playback with the web console The RHEL 8 web console has a dedicated interface for managing recorded sessions. You can choose the session you want to review directly from the Session Recording page, where your recorded sessions are listed. Example 3.1. Example list of recorded sessions The web console player supports window resizing. 3.3. Playing back recorded sessions with tlog-play You can play back session recordings from exported log files or from the systemd Journal. Playing back from a file You can play a session back from a file both during and after recording: Playing back from the Journal Generally, you can select Journal log entries for playback using Journal matches and timestamp limits, with the -M or --journal-match , -S or --journal-since , and -U or --journal-until options. In practice, however, playback from the Journal is usually done with a single match against the TLOG_REC Journal field. The TLOG_REC field contains a copy of the rec field from the logged JSON data, which is a host-unique ID of the recording. You can take the ID either from the TLOG_REC field value directly, or from the rec field in the JSON data of the MESSAGE field. Both fields are part of the log messages produced by the tlog-rec-session tool. Procedure You can play back the whole recording as follows: You can find further instructions and documentation in the tlog-play manual pages. | [
"tlog-play --reader=file --file-path=tlog.log",
"tlog-play -r journal -M TLOG_REC=<your-unique-host-id>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/recording_sessions/playing-back-a-recorded-session-getting-started-with-session-recording |
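As a sketch of the Journal workflow, assuming the recording messages are tagged with the tlog-rec-session syslog identifier, you can locate a recording ID and replay it as follows; the ID value is a placeholder:

```
# Show Journal entries with all fields, including TLOG_REC
journalctl -o verbose -t tlog-rec-session | grep 'TLOG_REC=' | tail -n 1

# Replay the recording that matches that ID
tlog-play -r journal -M TLOG_REC=<recording-id>
```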
Chapter 7. Booting the installation media | Chapter 7. Booting the installation media You can boot the Red Hat Enterprise Linux installation using a USB or DVD. You can register RHEL using the Red Hat Content Delivery Network (CDN). CDN is a geographically distributed series of web servers. These servers provide, for example, packages and updates to RHEL hosts with a valid subscription. During the installation, registering and installing RHEL from the CDN offers the following benefits: Utilizing the latest packages for an up-to-date system immediately after installation, and integrated support for connecting to Red Hat Insights and enabling System Purpose. Prerequisite You have created bootable installation media (USB or DVD). Procedure Power off the system on which you are installing Red Hat Enterprise Linux. Disconnect any drives from the system. Power on the system. Insert the bootable installation media (USB, DVD, or CD). Power off the system but do not remove the boot media. Power on the system. You might need to press a specific key or combination of keys to boot from the media or configure the Basic Input/Output System (BIOS) of your system to boot from the media. For more information, see the documentation that came with your system. The Red Hat Enterprise Linux boot window opens and displays information about a variety of available boot options. Use the arrow keys on your keyboard to select the boot option that you require, and press Enter to confirm your selection. The Welcome to Red Hat Enterprise Linux window opens and you can install Red Hat Enterprise Linux using the graphical user interface. The installation program automatically begins if no action is performed in the boot window within 60 seconds. Optional: Edit the available boot options: UEFI-based systems: Press E to enter edit mode. Change the predefined command line to add or remove boot options. Press Enter to confirm your choice. BIOS-based systems: Press the Tab key on your keyboard to enter edit mode. Change the predefined command line to add or remove boot options. Press Enter to confirm your choice. Additional Resources Customizing the system in the installer Boot options reference | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/interactively_installing_rhel_from_installation_media/booting-the-installer-from-local-media_rhel-installer |
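If you still need to create the bootable USB medium listed in the prerequisite, a common approach is to write the ISO image directly to the device; the ISO file name and /dev/sdX below are placeholders, and dd destroys all existing data on the target device:

```
lsblk                                   # identify the USB device first
sudo dd if=rhel-8.x-x86_64-dvd.iso of=/dev/sdX bs=4M status=progress conv=fsync
```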
probe::workqueue.insert | probe::workqueue.insert Name probe::workqueue.insert - Queuing work on a workqueue Synopsis workqueue.insert Values wq_thread task_struct of the workqueue thread work_func pointer to handler function work work_struct* being queued | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-workqueue-insert |
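A minimal SystemTap one-liner that uses this probe and its values might look like the following (it requires the kernel debuginfo packages, and the output format is purely illustrative):

```
stap -e 'probe workqueue.insert {
  printf("%s queued work %p (handler %p)\n",
         task_execname(wq_thread), work, work_func)
}'
```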
7.6. Removing Lost Physical Volumes from a Volume Group | 7.6. Removing Lost Physical Volumes from a Volume Group If you lose a physical volume, you can activate the remaining physical volumes in the volume group with the --partial argument of the vgchange command. You can remove all the logical volumes that used that physical volume from the volume group with the --removemissing argument of the vgreduce command. It is recommended that you run the vgreduce command with the --test argument first to verify what you will be destroying. Like most LVM operations, the vgreduce command is reversible in a sense if you immediately use the vgcfgrestore command to restore the volume group metadata to its previous state. For example, if you used the --removemissing argument of the vgreduce command without the --test argument and find that you have removed logical volumes you wanted to keep, you can still replace the physical volume and use another vgcfgrestore command to return the volume group to its previous state. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/lost_pv_remove_from_vg |
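Putting those commands together for a volume group, here using the placeholder name myvg, the sequence described above looks like this:

```
vgchange --activate y --partial myvg        # activate the remaining PVs
vgreduce --removemissing --test myvg        # preview what would be removed
vgreduce --removemissing myvg               # actually remove the missing PV
vgcfgrestore myvg                           # roll back the metadata if needed
```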
Chapter 18. PersistentClaimStorage schema reference | Chapter 18. PersistentClaimStorage schema reference Used in: JbodStorage , KafkaClusterSpec , KafkaNodePoolSpec , ZookeeperClusterSpec The type property is a discriminator that distinguishes use of the PersistentClaimStorage type from EphemeralStorage . It must have the value persistent-claim for the type PersistentClaimStorage . Property Property type Description type string Must be persistent-claim . size string When type=persistent-claim , defines the size of the persistent volume claim, such as 100Gi. Mandatory when type=persistent-claim . selector map Specifies a specific persistent volume to use. It contains key:value pairs representing labels for selecting such a volume. deleteClaim boolean Specifies if the persistent volume claim has to be deleted when the cluster is un-deployed. class string The storage class to use for dynamic volume allocation. id integer Storage identification number. It is mandatory only for storage volumes defined in a storage of type 'jbod'. overrides PersistentClaimStorageOverride array Overrides for individual brokers. The overrides field allows to specify a different configuration for different brokers. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-persistentclaimstorage-reference |
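As an illustrative fragment only (the storage class value is a placeholder; the other properties follow the schema above), a persistent-claim storage definition written to a file from the shell could look like this:

```
cat <<'EOF' > kafka-storage-snippet.yaml
# storage section of a Kafka or KafkaNodePool resource
storage:
  type: persistent-claim
  size: 100Gi
  class: standard        # placeholder storage class name
  deleteClaim: false
EOF
```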
Chapter 1. Introduction to the Identity Service (keystone) | Chapter 1. Introduction to the Identity Service (keystone) As a cloud administrator, you can manage projects, users, and roles. Projects are organizational units containing a collection of resources. You can assign users to roles within projects. Roles define the actions that those users can perform on the resources within a given project. Users can be assigned roles in multiple projects. Each Red Hat OpenStack (RHOSP) deployment must include at least one user assigned to a role within a project. As a cloud administrator, you can: Add, update, and delete projects and users. Assign users to one or more roles, and change or remove these assignments. Manage projects and users independently from each other. You can also configure user authentication with the Identity service (keystone)to control access to services and endpoints. The Identity service provides token-based authentication and can integrate with LDAP and Active Directory, so you can manage users and identities externally and synchronize the user data with the Identity service. 1.1. Resource credential files When you install Red Hat OpenStack Platform director, a resource credentials (RC) file is automatically generated: Source the stackrc file to export authentication details into your shell environment. This allows you to run commands against the local Red Hat OpenStack Platform director API. The name of the RC file generated during the installation of the overcloud is the name of the deployed stack suffixed with 'rc'. If you do not provide a custom name for your stack, then the stack is labeled overcloud . An RC file is created called overcloudrc : The overcloud RC file is referred to as overcloudrc in the documentation, regardless of the actual name of your stack. Source the overcloudrc file to export authentication details into your shell environment. This allows you to run commands against the control plane API of your overcloud cluster. The automatically generated overcloudrc file will authenticate you as the admin user to the admin project. This authentication is valuable for domain administrative tasks, such as creating provider networks or projects. 1.2. OpenStack regions A region is a division of an OpenStack deployment. Each region has its own full OpenStack deployment, including its own API endpoints, networks and compute resources. Different regions share one set of Identity service (keystone) and Dashboard service (horizon) services to provide access control and a web interface. Red Hat OpenStack Platform is deployed with a single region. By default, your overcloud region is named regionOne . You can change the default region name in Red Hat OpenStack Platform. Procedure Under parameter_defaults , define the KeystoneRegion parameter: Replace <sample_region> with a region name of your choice. Note You cannot modify the region name after you deploy the overcloud. | [
"Clear any old environment that may conflict. for key in USD( set | awk -F= '/^OS_/ {print USD1}' ); do unset \"USD{key}\" ; done export OS_CLOUD=undercloud Add OS_CLOUDNAME to PS1 if [ -z \"USD{CLOUDPROMPT_ENABLED:-}\" ]; then export PS1=USD{PS1:-\"\"} export PS1=\\USD{OS_CLOUD:+\"(\\USDOS_CLOUD)\"}\\ USDPS1 export CLOUDPROMPT_ENABLED=1 fi export PYTHONWARNINGS=\"ignore:Certificate has no, ignore:A true SSLContext object is not available\"",
"Clear any old environment that may conflict. for key in USD( set | awk '{FS=\"=\"} /^OS_/ {print USD1}' ); do unset USDkey ; done export OS_USERNAME=admin export OS_PROJECT_NAME=admin export OS_USER_DOMAIN_NAME=Default export OS_PROJECT_DOMAIN_NAME=Default export OS_NO_CACHE=True export OS_CLOUDNAME=overcloud export no_proxy=10.0.0.145,192.168.24.27 export PYTHONWARNINGS='ignore:Certificate has no, ignore:A true SSLContext object is not available' export OS_AUTH_TYPE=password export OS_PASSWORD=mpWt4y0Qhc9oTdACisp4wgo7F export OS_AUTH_URL=http://10.0.0.145:5000 export OS_IDENTITY_API_VERSION=3 export OS_COMPUTE_API_VERSION=2.latest export OS_IMAGE_API_VERSION=2 export OS_VOLUME_API_VERSION=3 export OS_REGION_NAME=regionOne Add OS_CLOUDNAME to PS1 if [ -z \"USD{CLOUDPROMPT_ENABLED:-}\" ]; then export PS1=USD{PS1:-\"\"} export PS1=\\USD{OS_CLOUDNAME:+\"(\\USDOS_CLOUDNAME)\"}\\ USDPS1 export CLOUDPROMPT_ENABLED=1 fi",
"parameter_defaults: KeystoneRegion: '<sample_region>'"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/managing_openstack_identity_resources/assembly_introduction-to-the-identity-service |
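For example, after sourcing the overcloud RC file, creating a project, a user, and a role assignment follows the pattern below; the project and user names are placeholders:

```
source ~/overcloudrc
openstack project create --description "Example project" demo-project
openstack user create --project demo-project --password-prompt demo-user
openstack role add --project demo-project --user demo-user member
```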
Chapter 3. Upgrading metering | Chapter 3. Upgrading metering Important Metering is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. You can upgrade metering to 4.7 by updating the Metering Operator subscription. 3.1. Prerequisites The cluster is updated to 4.7. The Metering Operator is installed from OperatorHub. Note You must upgrade the Metering Operator to 4.7 manually. Metering does not upgrade automatically if you selected the "Automatic" Approval Strategy in a installation. The MeteringConfig custom resource is configured. The metering stack is installed. Ensure that metering status is healthy by checking that all pods are ready. Important Potential data loss can occur if you modify your metering storage configuration after installing or upgrading metering. Procedure Click Operators Installed Operators from the web console. Select the openshift-metering project. Click Metering Operator . Click Subscription Channel . In the Change Subscription Update Channel window, select 4.7 and click Save . Note Wait several seconds to allow the subscription to update before proceeding to the step. Click Operators Installed Operators . The Metering Operator is shown as 4.7. For example: Verification You can verify the metering upgrade by performing any of the following checks: Check the Metering Operator cluster service version (CSV) for the new metering version. This can be done through either the web console or CLI. Procedure (UI) Navigate to Operators Installed Operators in the metering namespace. Click Metering Operator . Click Subscription for Subscription Details . Check the Installed Version for the upgraded metering version. The Starting Version shows the metering version prior to upgrading. Procedure (CLI) Check the Metering Operator CSV: USD oc get csv | grep metering Example output for metering upgrade from 4.6 to 4.7 NAME DISPLAY VERSION REPLACES PHASE metering-operator.4.7.0-202007012112.p0 Metering 4.7.0-202007012112.p0 metering-operator.4.6.0-202007012112.p0 Succeeded Check that all required pods in the openshift-metering namespace are created. This can be done through either the web console or CLI. Note Many pods rely on other components to function before they themselves can be considered ready. Some pods may restart if other pods take too long to start. This is to be expected during the Metering Operator upgrade. Procedure (UI) Navigate to Workloads Pods in the metering namespace and verify that pods are being created. This can take several minutes after upgrading the metering stack. Procedure (CLI) Check that all required pods in the openshift-metering namespace are created: USD oc -n openshift-metering get pods Example output NAME READY STATUS RESTARTS AGE hive-metastore-0 2/2 Running 0 3m28s hive-server-0 3/3 Running 0 3m28s metering-operator-68dd64cfb6-2k7d9 2/2 Running 0 5m17s presto-coordinator-0 2/2 Running 0 3m9s reporting-operator-5588964bf8-x2tkn 2/2 Running 0 2m40s Verify that the ReportDataSource resources are importing new data, indicated by a valid timestamp in the NEWEST METRIC column. This might take several minutes. 
Filter out the "-raw" ReportDataSource resources, which do not import data: USD oc get reportdatasources -n openshift-metering | grep -v raw Timestamps in the NEWEST METRIC column indicate that ReportDataSource resources are beginning to import new data. Example output NAME EARLIEST METRIC NEWEST METRIC IMPORT START IMPORT END LAST IMPORT TIME AGE node-allocatable-cpu-cores 2020-05-18T21:10:00Z 2020-05-19T19:52:00Z 2020-05-18T19:11:00Z 2020-05-19T19:52:00Z 2020-05-19T19:56:44Z 23h node-allocatable-memory-bytes 2020-05-18T21:10:00Z 2020-05-19T19:52:00Z 2020-05-18T19:11:00Z 2020-05-19T19:52:00Z 2020-05-19T19:52:07Z 23h node-capacity-cpu-cores 2020-05-18T21:10:00Z 2020-05-19T19:52:00Z 2020-05-18T19:11:00Z 2020-05-19T19:52:00Z 2020-05-19T19:56:52Z 23h node-capacity-memory-bytes 2020-05-18T21:10:00Z 2020-05-19T19:57:00Z 2020-05-18T19:10:00Z 2020-05-19T19:57:00Z 2020-05-19T19:57:03Z 23h persistentvolumeclaim-capacity-bytes 2020-05-18T21:09:00Z 2020-05-19T19:52:00Z 2020-05-18T19:11:00Z 2020-05-19T19:52:00Z 2020-05-19T19:56:46Z 23h persistentvolumeclaim-phase 2020-05-18T21:10:00Z 2020-05-19T19:52:00Z 2020-05-18T19:11:00Z 2020-05-19T19:52:00Z 2020-05-19T19:52:36Z 23h persistentvolumeclaim-request-bytes 2020-05-18T21:10:00Z 2020-05-19T19:57:00Z 2020-05-18T19:10:00Z 2020-05-19T19:57:00Z 2020-05-19T19:57:03Z 23h persistentvolumeclaim-usage-bytes 2020-05-18T21:09:00Z 2020-05-19T19:52:00Z 2020-05-18T19:11:00Z 2020-05-19T19:52:00Z 2020-05-19T19:52:02Z 23h pod-limit-cpu-cores 2020-05-18T21:10:00Z 2020-05-19T19:57:00Z 2020-05-18T19:10:00Z 2020-05-19T19:57:00Z 2020-05-19T19:57:02Z 23h pod-limit-memory-bytes 2020-05-18T21:10:00Z 2020-05-19T19:58:00Z 2020-05-18T19:11:00Z 2020-05-19T19:58:00Z 2020-05-19T19:59:06Z 23h pod-persistentvolumeclaim-request-info 2020-05-18T21:10:00Z 2020-05-19T19:52:00Z 2020-05-18T19:11:00Z 2020-05-19T19:52:00Z 2020-05-19T19:52:07Z 23h pod-request-cpu-cores 2020-05-18T21:10:00Z 2020-05-19T19:58:00Z 2020-05-18T19:11:00Z 2020-05-19T19:58:00Z 2020-05-19T19:58:57Z 23h pod-request-memory-bytes 2020-05-18T21:10:00Z 2020-05-19T19:52:00Z 2020-05-18T19:11:00Z 2020-05-19T19:52:00Z 2020-05-19T19:55:32Z 23h pod-usage-cpu-cores 2020-05-18T21:09:00Z 2020-05-19T19:52:00Z 2020-05-18T19:11:00Z 2020-05-19T19:52:00Z 2020-05-19T19:54:55Z 23h pod-usage-memory-bytes 2020-05-18T21:08:00Z 2020-05-19T19:52:00Z 2020-05-18T19:11:00Z 2020-05-19T19:52:00Z 2020-05-19T19:55:00Z 23h report-ns-pvc-usage 5h36m report-ns-pvc-usage-hourly After all pods are ready and you have verified that new data is being imported, metering continues to collect data and report on your cluster. Review a previously scheduled report or create a run-once metering report to confirm the metering upgrade. | [
"Metering 4.7.0-202007012112.p0 provided by Red Hat, Inc",
"oc get csv | grep metering",
"NAME DISPLAY VERSION REPLACES PHASE metering-operator.4.7.0-202007012112.p0 Metering 4.7.0-202007012112.p0 metering-operator.4.6.0-202007012112.p0 Succeeded",
"oc -n openshift-metering get pods",
"NAME READY STATUS RESTARTS AGE hive-metastore-0 2/2 Running 0 3m28s hive-server-0 3/3 Running 0 3m28s metering-operator-68dd64cfb6-2k7d9 2/2 Running 0 5m17s presto-coordinator-0 2/2 Running 0 3m9s reporting-operator-5588964bf8-x2tkn 2/2 Running 0 2m40s",
"oc get reportdatasources -n openshift-metering | grep -v raw",
"NAME EARLIEST METRIC NEWEST METRIC IMPORT START IMPORT END LAST IMPORT TIME AGE node-allocatable-cpu-cores 2020-05-18T21:10:00Z 2020-05-19T19:52:00Z 2020-05-18T19:11:00Z 2020-05-19T19:52:00Z 2020-05-19T19:56:44Z 23h node-allocatable-memory-bytes 2020-05-18T21:10:00Z 2020-05-19T19:52:00Z 2020-05-18T19:11:00Z 2020-05-19T19:52:00Z 2020-05-19T19:52:07Z 23h node-capacity-cpu-cores 2020-05-18T21:10:00Z 2020-05-19T19:52:00Z 2020-05-18T19:11:00Z 2020-05-19T19:52:00Z 2020-05-19T19:56:52Z 23h node-capacity-memory-bytes 2020-05-18T21:10:00Z 2020-05-19T19:57:00Z 2020-05-18T19:10:00Z 2020-05-19T19:57:00Z 2020-05-19T19:57:03Z 23h persistentvolumeclaim-capacity-bytes 2020-05-18T21:09:00Z 2020-05-19T19:52:00Z 2020-05-18T19:11:00Z 2020-05-19T19:52:00Z 2020-05-19T19:56:46Z 23h persistentvolumeclaim-phase 2020-05-18T21:10:00Z 2020-05-19T19:52:00Z 2020-05-18T19:11:00Z 2020-05-19T19:52:00Z 2020-05-19T19:52:36Z 23h persistentvolumeclaim-request-bytes 2020-05-18T21:10:00Z 2020-05-19T19:57:00Z 2020-05-18T19:10:00Z 2020-05-19T19:57:00Z 2020-05-19T19:57:03Z 23h persistentvolumeclaim-usage-bytes 2020-05-18T21:09:00Z 2020-05-19T19:52:00Z 2020-05-18T19:11:00Z 2020-05-19T19:52:00Z 2020-05-19T19:52:02Z 23h pod-limit-cpu-cores 2020-05-18T21:10:00Z 2020-05-19T19:57:00Z 2020-05-18T19:10:00Z 2020-05-19T19:57:00Z 2020-05-19T19:57:02Z 23h pod-limit-memory-bytes 2020-05-18T21:10:00Z 2020-05-19T19:58:00Z 2020-05-18T19:11:00Z 2020-05-19T19:58:00Z 2020-05-19T19:59:06Z 23h pod-persistentvolumeclaim-request-info 2020-05-18T21:10:00Z 2020-05-19T19:52:00Z 2020-05-18T19:11:00Z 2020-05-19T19:52:00Z 2020-05-19T19:52:07Z 23h pod-request-cpu-cores 2020-05-18T21:10:00Z 2020-05-19T19:58:00Z 2020-05-18T19:11:00Z 2020-05-19T19:58:00Z 2020-05-19T19:58:57Z 23h pod-request-memory-bytes 2020-05-18T21:10:00Z 2020-05-19T19:52:00Z 2020-05-18T19:11:00Z 2020-05-19T19:52:00Z 2020-05-19T19:55:32Z 23h pod-usage-cpu-cores 2020-05-18T21:09:00Z 2020-05-19T19:52:00Z 2020-05-18T19:11:00Z 2020-05-19T19:52:00Z 2020-05-19T19:54:55Z 23h pod-usage-memory-bytes 2020-05-18T21:08:00Z 2020-05-19T19:52:00Z 2020-05-18T19:11:00Z 2020-05-19T19:52:00Z 2020-05-19T19:55:00Z 23h report-ns-pvc-usage 5h36m report-ns-pvc-usage-hourly"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/metering/upgrading-metering |
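If you prefer the CLI to the web console for the channel change, one possible approach is to patch the Subscription object directly; the subscription name is a placeholder that you should look up first:

```
oc -n openshift-metering get subscriptions
oc -n openshift-metering patch subscription <metering-subscription-name> \
  --type merge -p '{"spec":{"channel":"4.7"}}'
```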
Chapter 14. Enabling SSL/TLS on overcloud public endpoints | Chapter 14. Enabling SSL/TLS on overcloud public endpoints By default, the overcloud uses unencrypted endpoints for the overcloud services. To enable SSL/TLS in your overcloud, Red Hat recommends that you use a certificate authority (CA) solution. When you use a certificate authority (CA) solution, you have production ready solutions such as a certificate renewals, certificate revocation lists (CRLs), and industry accepted cryptography. For information on using Red Hat Identity Manager (IdM) as a CA, see Implementing TLS-e with Ansible . You can use the following manual process to enable SSL/TLS for Public API endpoints only, the Internal and Admin APIs remain unencrypted. You must also manually update SSL/TLS certificates if you do not use a CA. For more information, see Manually updating SSL/TLS certificates . Prerequisites Network isolation to define the endpoints for the Public API. The openssl-perl package is installed. You have an SSL/TLS certificate. For more information see Configuring custom SSL/TLS certificates . 14.1. Initializing the signing host The signing host is the host that generates and signs new certificates with a certificate authority. If you have never created SSL certificates on the chosen signing host, you might need to initialize the host so that it can sign new certificates. Procedure The /etc/pki/CA/index.txt file contains records of all signed certificates. Ensure that the filesystem path and index.txt file are present: The /etc/pki/CA/serial file identifies the serial number to use for the certificate to sign. Check if this file exists. If the file does not exist, create a new file with a new starting value: 14.2. Creating a certificate authority Normally you sign your SSL/TLS certificates with an external certificate authority. In some situations, you might want to use your own certificate authority. For example, you might want to have an internal-only certificate authority. Procedure Generate a key and certificate pair to act as the certificate authority: The openssl req command requests certain details about your authority. Enter these details at the prompt. These commands create a certificate authority file called ca.crt.pem . Set the certificate location as the value for the PublicTLSCAFile parameter in the enable-tls.yaml file. When you set the certificate location as the value for the PublicTLSCAFile parameter, you ensure that the CA certificate path is added to the clouds.yaml authentication file. 14.3. Adding the certificate authority to clients For any external clients aiming to communicate using SSL/TLS, copy the certificate authority file to each client that requires access to your Red Hat OpenStack Platform environment. Procedure Copy the certificate authority to the client system: After you copy the certificate authority file to each client, run the following command on each client to add the certificate to the certificate authority trust bundle: 14.4. Creating an SSL/TLS key Enabling SSL/TLS on an OpenStack environment requires an SSL/TLS key to generate your certificates. Procedure Run the following command to generate the SSL/TLS key ( server.key.pem ): 14.5. Creating an SSL/TLS certificate signing request Complete the following steps to create a certificate signing request. Procedure Copy the default OpenSSL configuration file: Edit the new openssl.cnf file and configure the SSL parameters that you want to use for director. 
An example of the types of parameters to modify include: Set the commonName_default to one of the following entries: If you are using an IP address to access director over SSL/TLS, use the undercloud_public_host parameter in the undercloud.conf file. If you are using a fully qualified domain name to access director over SSL/TLS, use the domain name. Edit the alt_names section to include the following entries: IP - A list of IP addresses that clients use to access director over SSL. DNS - A list of domain names that clients use to access director over SSL. Also include the Public API IP address as a DNS entry at the end of the alt_names section. Note For more information about openssl.cnf , run the man openssl.cnf command. Run the following command to generate a certificate signing request ( server.csr.pem ): Ensure that you include your OpenStack SSL/TLS key with the -key option. This command generates a server.csr.pem file, which is the certificate signing request. Use this file to create your OpenStack SSL/TLS certificate. 14.6. Creating the SSL/TLS certificate To generate the SSL/TLS certificate for your OpenStack environment, the following files must be present: openssl.cnf The customized configuration file that specifies the v3 extensions. server.csr.pem The certificate signing request to generate and sign the certificate with a certificate authority. ca.crt.pem The certificate authority, which signs the certificate. ca.key.pem The certificate authority private key. Procedure Create the newcerts directory if it does not already exist: Run the following command to create a certificate for your undercloud or overcloud: This command uses the following options: -config Use a custom configuration file, which is the openssl.cnf file with v3 extensions. -extensions v3_req Enabled v3 extensions. -days Defines how long in days until the certificate expires. -in ' The certificate signing request. -out The resulting signed certificate. -cert The certificate authority file. -keyfile The certificate authority private key. This command creates a new certificate named server.crt.pem . Use this certificate in conjunction with your OpenStack SSL/TLS key 14.7. Enabling SSL/TLS To enable SSL/TLS in your overcloud, you must create an environment file that contains parameters for your SSL/TLS certiciates and private key. Procedure Copy the enable-tls.yaml environment file from the heat template collection: Edit this file and make the following changes for these parameters: SSLCertificate Copy the contents of the certificate file ( server.crt.pem ) into the SSLCertificate parameter: Important The certificate contents require the same indentation level for all new lines. SSLIntermediateCertificate If you have an intermediate certificate, copy the contents of the intermediate certificate into the SSLIntermediateCertificate parameter: Important The certificate contents require the same indentation level for all new lines. SSLKey Copy the contents of the private key ( server.key.pem ) into the SSLKey parameter: Important The private key contents require the same indentation level for all new lines. 14.8. Injecting a root certificate If the certificate signer is not in the default trust store on the overcloud image, you must inject the certificate authority into the overcloud image. 
Procedure Copy the inject-trust-anchor-hiera.yaml environment file from the heat template collection: Edit this file and make the following changes for these parameters: CAMap Lists each certificate authority content (CA) to inject into the overcloud. The overcloud requires the CA files used to sign the certificates for both the undercloud and the overcloud. Copy the contents of the root certificate authority file ( ca.crt.pem ) into an entry. For example, your CAMap parameter might look like the following: Important The certificate authority contents require the same indentation level for all new lines. You can also inject additional CAs with the CAMap parameter. 14.9. Configuring DNS endpoints If you use a DNS hostname to access the overcloud through SSL/TLS, copy the /usr/share/openstack-tripleo-heat-templates/environments/predictable-placement/custom-domain.yaml file into the /home/stack/templates directory. Note It is not possible to redeploy with a TLS-everywhere architecture if this environment file is not included in the initial deployment. Configure the host and domain names for all fields, adding parameters for custom networks if needed: CloudDomain the DNS domain for hosts. CloudName The DNS hostname of the overcloud endpoints. CloudNameCtlplane The DNS name of the provisioning network endpoint. CloudNameInternal The DNS name of the Internal API endpoint. CloudNameStorage The DNS name of the storage endpoint. CloudNameStorageManagement The DNS name of the storage management endpoint. DnsServers A list of DNS servers that you want to use. The configured DNS servers must contain an entry for the configured CloudName that matches the IP address of the Public API. Procedure Add a list of DNS servers to use under parameter defaults, in either a new or existing environment file: Tip You can use the CloudName{network.name} definition to set the DNS name for an API endpoint on a composable network that uses a virtual IP. For more information, see Adding a composable network . 14.10. Adding environment files during overcloud creation Use the -e option with the deployment command openstack overcloud deploy to include environment files in the deployment process. Add the environment files from this section in the following order: The environment file to enable SSL/TLS ( enable-tls.yaml ) The environment file to set the DNS hostname ( custom-domain.yaml ) The environment file to inject the root certificate authority ( inject-trust-anchor-hiera.yaml ) The environment file to set the public endpoint mapping: If you use a DNS name for accessing the public endpoints, use /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-dns.yaml If you use a IP address for accessing the public endpoints, use /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-ip.yaml Procedure Use the following deployment command snippet as an example of how to include your SSL/TLS environment files: 14.11. Manually Updating SSL/TLS Certificates Complete the following steps if you are using your own SSL/TLS certificates that are not auto-generated from the TLS everywhere (TLS-e) process. Procedure Edit your heat templates with the following content: Edit the enable-tls.yaml file and update the SSLCertificate , SSLKey , and SSLIntermediateCertificate parameters. If your certificate authority has changed, edit the inject-trust-anchor-hiera.yaml file and update the CAMap parameter. Rerun the deployment command: | [
"sudo mkdir -p /etc/pki/CA sudo touch /etc/pki/CA/index.txt",
"echo '1000' | sudo tee /etc/pki/CA/serial",
"openssl genrsa -out ca.key.pem 4096 openssl req -key ca.key.pem -new -x509 -days 7300 -extensions v3_ca -out ca.crt.pem",
"parameter_defaults: PublicTLSCAFile: /etc/pki/ca-trust/source/anchors/cacert.pem",
"sudo cp ca.crt.pem /etc/pki/ca-trust/source/anchors/",
"sudo update-ca-trust extract",
"openssl genrsa -out server.key.pem 2048",
"cp /etc/pki/tls/openssl.cnf .",
"[req] distinguished_name = req_distinguished_name req_extensions = v3_req [req_distinguished_name] countryName = Country Name (2 letter code) countryName_default = AU stateOrProvinceName = State or Province Name (full name) stateOrProvinceName_default = Queensland localityName = Locality Name (eg, city) localityName_default = Brisbane organizationalUnitName = Organizational Unit Name (eg, section) organizationalUnitName_default = Red Hat commonName = Common Name commonName_default = 192.168.0.1 commonName_max = 64 Extensions to add to a certificate request basicConstraints = CA:FALSE keyUsage = nonRepudiation, digitalSignature, keyEncipherment subjectAltName = @alt_names [alt_names] IP.1 = 192.168.0.1 DNS.1 = instack.localdomain DNS.2 = vip.localdomain DNS.3 = 192.168.0.1",
"openssl req -config openssl.cnf -key server.key.pem -new -out server.csr.pem",
"sudo mkdir -p /etc/pki/CA/newcerts",
"sudo openssl ca -config openssl.cnf -extensions v3_req -days 3650 -in server.csr.pem -out server.crt.pem -cert ca.crt.pem -keyfile ca.key.pem",
"cp -r /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml ~/templates/.",
"parameter_defaults: SSLCertificate: | -----BEGIN CERTIFICATE----- MIIDgzCCAmugAwIBAgIJAKk46qw6ncJaMA0GCSqGS sFW3S2roS4X0Af/kSSD8mlBBTFTCMBAj6rtLBKLaQ -----END CERTIFICATE-----",
"parameter_defaults: SSLIntermediateCertificate: | -----BEGIN CERTIFICATE----- sFW3S2roS4X0Af/kSSD8mlBBTFTCMBAj6rtLBKLaQbIxEpIzrgvpBCwUAMFgxCzAJB MIIDgzCCAmugAwIBAgIJAKk46qw6ncJaMA0GCSqGSIb3DQE -----END CERTIFICATE-----",
"parameter_defaults: SSLKey: | -----BEGIN RSA PRIVATE KEY----- MIIEowIBAAKCAQEAqVw8lnQ9RbeI1EdLN5PJP0lVO ctlKn3rAAdyumi4JDjESAXHIKFjJNOLrBmpQyES4X -----END RSA PRIVATE KEY-----",
"cp -r /usr/share/openstack-tripleo-heat-templates/environments/ssl/inject-trust-anchor-hiera.yaml ~/templates/.",
"parameter_defaults: CAMap: undercloud-ca: content: | -----BEGIN CERTIFICATE----- MIIDlTCCAn2gAwIBAgIJAOnPtx2hHEhrMA0GCS BAYTAlVTMQswCQYDVQQIDAJOQzEQMA4GA1UEBw UmVkIEhhdDELMAkGA1UECwwCUUUxFDASBgNVBA -----END CERTIFICATE----- overcloud-ca: content: | -----BEGIN CERTIFICATE----- MIIDBzCCAe+gAwIBAgIJAIc75A7FD++DMA0GCS BAMMD3d3dy5leGFtcGxlLmNvbTAeFw0xOTAxMz Um54yGCARyp3LpkxvyfMXX1DokpS1uKi7s6CkF -----END CERTIFICATE-----",
"parameter_defaults: DnsServers: [\"10.0.0.254\"] .",
"openstack overcloud deploy --templates [...] -e /home/stack/templates/enable-tls.yaml -e ~/templates/custom-domain.yaml -e ~/templates/inject-trust-anchor-hiera.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-dns.yaml",
"openstack overcloud deploy --templates [...] -e /home/stack/templates/enable-tls.yaml -e ~/templates/custom-domain.yaml -e ~/templates/inject-trust-anchor-hiera.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-dns.yaml"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/advanced_overcloud_customization/assembly_enabling-ssl-tls-on-overcloud-public-endpoints |
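An optional check, not part of the documented procedure: before copying the signed certificate into enable-tls.yaml, you can confirm locally that server.crt.pem chains to your certificate authority and carries the alt_names entries you configured. The commands below are generic openssl invocations and assume the ca.crt.pem and server.crt.pem file names used in this chapter.

openssl verify -CAfile ca.crt.pem server.crt.pem
openssl x509 -in server.crt.pem -noout -subject -dates
openssl x509 -in server.crt.pem -noout -text | grep -A1 'Subject Alternative Name'

If the first command does not report OK, re-check which CA signed the certificate before enabling SSL/TLS in the overcloud.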
5.10. Additional Resources | 5.10. Additional Resources This section includes various resources that can be used to learn more about storage technologies and the Red Hat Enterprise Linux-specific subject matter discussed in this chapter. 5.10.1. Installed Documentation The following resources are installed in the course of a typical Red Hat Enterprise Linux installation, and can help you learn more about the subject matter discussed in this chapter. exports(5) man page -- Learn about the NFS configuration file format. fstab(5) man page -- Learn about the file system information configuration file format. swapoff(8) man page -- Learn how to disable swap partitions. df(1) man page -- Learn how to display disk space usage on mounted file systems. fdisk(8) man page -- Learn about this partition table maintenance utility program. mkfs(8) , mke2fs(8) man pages -- Learn about these file system creation utility programs. badblocks(8) man page -- Learn how to test a device for bad blocks. quotacheck(8) man page -- Learn how to verify block and inode usage for users and groups and optionally creates disk quota files. edquota(8) man page -- Learn about this disk quota maintenance utility program. repquota(8) man page -- Learn about this disk quota reporting utility program. raidtab(5) man page -- Learn about the software RAID configuration file format. mdadm(8) man page -- Learn about this software RAID array management utility program. lvm(8) man page -- Learn about Logical Volume Management. devlabel(8) man page -- Learn about persistent storage device access. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s1-storage-addres |
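As a brief illustration of the reporting utilities listed above, the following invocations show typical usage; the device name is a placeholder, and repquota only produces output on file systems where disk quotas are already enabled.

df -h
repquota -a
badblocks -sv /dev/hda5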
Compiling your Red Hat build of Quarkus applications to native executables | Compiling your Red Hat build of Quarkus applications to native executables Red Hat build of Quarkus 3.8 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.8/html/compiling_your_red_hat_build_of_quarkus_applications_to_native_executables/index |
Using the AMQ JavaScript Client | Using the AMQ JavaScript Client Red Hat AMQ 2020.Q4 For Use with AMQ Clients 2.8 | null | https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_the_amq_javascript_client/index |
Web console | Web console OpenShift Container Platform 4.13 Getting started with the web console in OpenShift Container Platform Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/web_console/index |
13.4. Registering a Job Module | 13.4. Registering a Job Module Custom job plug-ins can be registered through the Certificate Manager Console. Registering a new module involves specifying the name of the module and the full name of the Java™ class that implements the module. To register a new job module: Create the custom job class. For this example, the custom job plug-in is called MyJob.java . Compile the new class. Create a directory in the CA's WEB-INF web directory to hold the custom classes, so that the CA can access them. Copy the new plug-in files into the new classes directory, and set the owner to the Certificate System system user ( pkiuser ). Register the plug-in. Log into the Certificate Manager Console. In the Configuration tab, select Job Scheduler in the left navigation tree. Select Jobs . The Job Instance tab opens, which lists any currently configured jobs. Select the Job Plugin Registration tab. Click Register to add the new module. In the Register Job Scheduler Plugin Implementation window, supply the following information: Plugin name. Type a name for the plug-in module. Class name. Type the full name of the class for this module; this is the path to the implementing Java™ class. If this class is part of a package, include the package name. For example, to register a class named customJob that is in a package named com.customplugins , type com.customplugins.customJob . Click OK . Note It is also possible to delete job modules, but this is not recommended. If it is necessary to delete a module, open the Job Plugin Registration tab as when registering a new module, select the module to delete, and click Delete . When prompted, confirm the deletion. Note pkiconsole is being deprecated. | [
"javac -d . -classpath USDCLASSPATH MyJob.java",
"mkdir /var/lib/pki/ instance_name /ca/webapps/ca/WEB-INF/classes",
"cp -pr com /var/lib/pki/ instance_name /ca/webapps/ca/WEB-INF/classes chown -R pkiuser:pkiuser /var/lib/pki/ instance_name /ca/webapps/ca/WEB-INF/classes",
"pkiconsole https://server.example.com:8443/ca"
] | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/registering_or_deleting_a_job_module |
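After copying a custom plug-in into the WEB-INF/classes directory as described above, you can optionally confirm the ownership of the copied package and restart the CA instance so that the new class is loaded. The instance_name placeholder matches the commands above; the pki-tomcatd unit name is an assumption based on the usual Certificate System service naming and may differ in your deployment.

ls -lR /var/lib/pki/ instance_name /ca/webapps/ca/WEB-INF/classes/com
systemctl restart pki-tomcatd@ instance_name .service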
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/22.0/html/server_guide/making-open-source-more-inclusive |
Chapter 1. Red Hat Certified, validated, and Ansible Galaxy content in automation hub | Chapter 1. Red Hat Certified, validated, and Ansible Galaxy content in automation hub Ansible Certified Content Collections are included in your subscription to Red Hat Ansible Automation Platform. Using Ansible automation hub, you can access and curate a unique set of collections from all forms of Ansible content. Red Hat Ansible content contains two types of content: Ansible Certified Content Collections Ansible validated content collections You can use both Ansible Certified Content Collections or Ansible validated content collections to build your automation library. For more information on the differences between Ansible Certified Content Collections and Ansible validated content collections, see the Knowledgebase article Ansible Certified Content Collections and Ansible validated content , or Ansible validated content in this guide. You can update these collections manually by downloading their packages. You can use Ansible automation hub to distribute the relevant Red Hat Ansible Certified Content Collections to your users by creating a requirements file or a synclist. Use a requirements file to install collections to your automation hub, as synclists can only be managed by users with platform administrator privileges. Before you can use a requirements file to install content, you must: Obtain an automation hub API token Use the API token to configure a remote repository in your local hub Then, Create a requirements file . 1.1. Configuring Ansible automation hub remote repositories to synchronize content Use remote configurations to configure your private automation hub to synchronize with Ansible Certified Content Collections hosted on console.redhat.com or with your collections in Ansible Galaxy. Important To synchronize content, you can now upload a manually-created requirements file from the rh-certified remote. Remotes are configurations that allow you to synchronize content to your custom repositories from an external collection source. As of the 2.4 release you can still synchronize content, but synclists are deprecated, and will be removed in a future version. Each remote configuration located in Automation Content Remotes provides information for both the community and rh-certified repository about when the repository was last updated . You can add new content to Ansible automation hub at any time using the Edit and Sync features included on the Automation Content Repositories page. What's the difference between Ansible Galaxy and Ansible automation hub? Collections published to Ansible Galaxy are the latest content published by the Ansible community and have no joint support claims associated with them. Ansible Galaxy is the recommended frontend directory for the Ansible community to access content. Collections published to Ansible automation hub are targeted to joint customers of Red Hat and selected partners. Customers need an Ansible subscription to access and download collections on Ansible automation hub. A certified collection means that Red Hat and partners have a strategic relationship in place and are ready to support joint customers, and that the collections may have had additional testing and validation done against them. How do I request a namespace on Ansible Galaxy? To request a namespace through an Ansible Galaxy GitHub issue, follow these steps: Send an email to [email protected] Include the GitHub username used to sign up on Ansible Galaxy. 
You must have logged in at least once for the system to validate. After users are added as administrators of the namespace, you can use the self-serve process to add more administrators. Are there any restrictions for Ansible Galaxy namespace naming? Collection namespaces must follow Python module name convention. This means collections should have short, all lowercase names. You can use underscores in the collection name if it improves readability. 1.1.1. Token management in automation hub Before you can interact with automation hub by uploading or downloading collections, you must create an API token. The automation hub API token authenticates your ansible-galaxy client to the Red Hat automation hub server. Note automation hub does not support basic authentication or authenticating through service accounts. You must authenticate using token management. Your method for creating the API token differs according to the type of automation hub that you are using: Automation hub uses offline token management. See Creating the offline token in automation hub . Private automation hub uses API token management. See Creating the API token in private automation hub . If you are using Keycloak to authenticate your private automation hub, follow the procedure for Creating the offline token in automation hub . 1.1.1.1. Creating the offline token in automation hub In automation hub, you can create an offline token by using Token management . The offline token is a secret token used to protect your content. Procedure Navigate to Ansible Automation Platform on the Red Hat Hybrid Cloud Console . From the navigation panel, select Automation Hub Connect to Hub . Under Offline token , click Load Token . Click the Copy to clipboard icon to copy the offline token. Paste the API token into a file and store in a secure location. Important The offline token is a secret token used to protect your content. Store your token in a secure location. The offline token is now available for configuring automation hub as your default collections server or for uploading collections by using the ansible-galaxy command line tool. Note Your offline token expires after 30 days of inactivity. For more on obtaining a new offline token, see Keeping your offline token active . 1.1.1.2. Creating the API token in private automation hub In private automation hub, you can create an API token using API token management. The API token is a secret token used to protect your content. Prerequisites Valid subscription credentials for Red Hat Ansible Automation Platform. Procedure Log in to your private automation hub. From the navigation panel, select Automation Content API token . Click Load Token . To copy the API token, click the Copy to clipboard icon. Paste the API token into a file and store in a secure location. Important The API token is a secret token used to protect your content. Store your API token in a secure location. The API token is now available for configuring automation hub as your default collections server or uploading collections using the ansible-galaxy command line tool. Note The API token does not expire. 1.1.1.3. Keeping your offline token active Offline tokens expire after 30 days of inactivity. You can keep your offline token from expiring by periodically refreshing your offline token. Keeping an offline token active is useful when an application performs an action on behalf of the user; for example, this allows the application to perform a routine data backup when the user is offline.
Note If your offline token expires, you must obtain a new one . Procedure Run the following command to prevent your token from expiring: curl https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token -d grant_type=refresh_token -d client_id="cloud-services" -d refresh_token="{{ user_token }}" --fail --silent --show-error --output /dev/null 1.1.2. Configuring the rh-certified remote repository and synchronizing Red Hat Ansible Certified Content Collection You can edit the rh-certified remote repository to synchronize collections from automation hub hosted on console.redhat.com to your private automation hub. By default, your private automation hub rh-certified repository includes the URL for the entire group of Ansible Certified Content Collections. To use only those collections specified by your organization, a private automation hub administrator can upload manually-created requirements files from the rh-certified remote. If you have collections A , B , and C in your requirements file, and a new collection X is added to console.redhat.com that you want to use, you must add X to your requirements file for private automation hub to synchronize it. Prerequisites You have valid Modify Ansible repo content permissions. For more information on permissions, see Access management and authentication . You have retrieved the Sync URL and API Token from the automation hub hosted service on console.redhat.com. You have configured access to port 443. This is required for synchronizing certified collections. For more information, see the automation hub table in the Network ports and protocols chapter of Planning your installation. Procedure Log in to your Ansible Automation Platform. From the navigation panel, select Automation Content Remotes . In the rh-certified remote repository, click Edit remote . In the URL field, paste the Sync URL . In the Token field, paste the token you acquired from console.redhat.com. Click Save remote . You can now synchronize collections between your organization synclist on console.redhat.com and your private automation hub. From the navigation panel, select Automation Content Repositories . to rh-certified click the More Actions icon ... and select Sync repository . On the modal that appears, you can toggle the following options: Mirror : Select if you want your repository content to mirror the remote repository's content. Optimize : Select if you want to sync only when no changes are reported by the remote server. Click Sync to complete the sync. Verification The Sync status column updates to notify you whether the Red Hat Certified Content Collections synchronization is successful. Navigate to Automation Content Collections to confirm that your collections content has synchronized successfully. 1.1.3. Configuring the community remote repository and syncing Ansible Galaxy collections You can edit the community remote repository to synchronize chosen collections from Ansible Galaxy to your private automation hub. By default, your private automation hub community repository directs to galaxy.ansible.com/api/ . Prerequisites You have Modify Ansible repo content permissions. For more information on permissions, see Access management and authentication . You have a requirements.yml file that identifies those collections to synchronize from Ansible Galaxy as in the following example: Requirements.yml example Procedure Log in to Ansible Automation Platform. From the navigation panel, select Automation Content Remotes . 
In the Details tab in the Community remote, click Edit remote . In the YAML requirements field, paste the contents of your requirements.yml file. Click Save remote . You can now synchronize collections identified in your requirements.yml file from Ansible Galaxy to your private automation hub. From the navigation panel, select Automation Content Repositories . to the community repository, click the More Actions icon ... and select Sync repository to sync collections between Ansible Galaxy and Ansible automation hub. On the modal that appears, you can toggle the following options: Mirror : Select if you want your repository content to mirror the remote repository's content. Optimize : Select if you want to sync only when no changes are reported by the remote server. Click Sync to complete the sync. Verification The Sync status column updates to notify you whether the Ansible Galaxy collections synchronization to your Ansible automation hub is successful. Navigate to Automation Content Collections and select Community to confirm successful synchronization. 1.1.4. Configuring proxy settings If your private automation hub is behind a network proxy, you can configure proxy settings on the remote to sync content located outside of your local network. Prerequisites You have valid Modify Ansible repo content permissions. For more information on permissions, see Access management and authentication You have a proxy URL and credentials from your local network administrator. Procedure Log in to Ansible Automation Platform. From the navigation panel, select Automation Content Remotes . In either the rh-certified or Community remote, click the More Actions icon ... and select Edit remote . Expand the Show advanced options drop-down menu. Enter your proxy URL, proxy username, and proxy password in the appropriate fields. Click Save remote . 1.1.5. Creating a requirements file Use a requirements file to add collections to your automation hub. Requirements files are in YAML format and list the collections that you want to install in your automation hub. After you create your requirements.yml file listing the collections you want to install, you will then run the install command to add the collections to your hub instance. A standard requirements.yml file contains the following parameters: name : the name of the collection formatted as <namespace>.<collection_name> version : the collection version number Procedure Create your requirements file. In YAML format, collection information in your requirements file should look like this: collections: name: namespace.collection_name version: 1.0.0 After you have created your requirements file listing information for each collection that you want to install, navigate to the directory where the file is located and run the following command: USD ansible-galaxy collection install -r requirements.yml 1.1.5.1. Installing an individual collection from the command line To install an individual collection to your automation hub, run the following command: USD ansible-galaxy collection install namespace.collection_name 1.2. Synchronizing Ansible Content Collections in automation hub Important To synchronize content, you can now upload a manually-created requirements file from the rh-certified remote. Remotes are configurations that enable you to synchronize content to your custom repositories from an external collection source. As of the 2.4 release you can still synchronize content, but synclists are deprecated, and will be removed in a future version. 1.2.1. 
Explanation of Red Hat Ansible Certified Content Collections synclists A synclist is a curated group of Red Hat Certified Collections assembled by your organization administrator. It synchronizes with your local Ansible automation hub. Use synclists to manage only the content that you want and exclude unnecessary collections. Design and manage your synclist from the content available as part of Red Hat content on console.redhat.com Each synclist has its own unique repository URL that you can designate as a remote source for content in automation hub. You securely access each synclist by using an API token. 1.2.2. Creating a synclist of Red Hat Ansible Certified Content Collections You can create a synclist of curated Red Hat Ansible Certified Content in Ansible automation hub on console.redhat.com. Your synclist repository is located on the automation hub navigation panel under Automation Content Repositories , which is updated whenever you manage content within Ansible Certified Content Collections. All Ansible Certified Content Collections are included by default in your initial organization synclist. Prerequisites You have a valid Ansible Automation Platform subscription. You have organization administrator permissions for console.redhat.com. The following domain names are part of either the firewall or the proxy's allowlist. They are required for successful connection and download of collections from automation hub or Galaxy server: galaxy.ansible.com cloud.redhat.com console.redhat.com sso.redhat.com Ansible automation hub resources are stored in Amazon Simple Storage. The following domain names must be in the allow list: automation-hub-prd.s3.us-east-2.amazonaws.com ansible-galaxy.s3.amazonaws.com SSL inspection is disabled either when using self signed certificates or for the Red Hat domains. Procedure Log in to console.redhat.com . Navigate to Automation Hub Collections . Set the Sync toggle switch on each collection to exclude or include it on your synclist. Note You will only see the Sync toggle switch if you have administrator permissions. To initiate the remote repository synchronization, navigate to your Ansible Automation Platform and select Automation Content Repositories . In the row containing the repository you want to sync, click the More Actions icon ... and select Sync repository to initiate the remote repository synchronization to your private automation hub. Optional: If your remote repository is already configured, update the collections content that you made available to local users by manually synchronizing Red Hat Ansible Certified Content Collections to your private automation hub. 1.3. Collections and content signing in private automation hub As an automation administrator for your organization, you can configure private automation hub for signing and publishing Ansible content collections from different groups within your organization. For additional security, automation creators can configure Ansible-Galaxy CLI to verify these collections to ensure that they have not been changed after they were uploaded to automation hub. 1.3.1. Configuring content signing on private automation hub To successfully sign and publish Ansible Certified Content Collections, you must configure private automation hub for signing. Prerequisites Your GnuPG key pairs have been securely set up and managed by your organization. Your public-private key pair has proper access for configuring content signing on private automation hub. 
Procedure Create a signing script that accepts only a filename. Note This script acts as the signing service and must generate an ascii-armored detached gpg signature for that file using the key specified through the PULP_SIGNING_KEY_FINGERPRINT environment variable. The script prints out a JSON structure with the following format. {"file": "filename", "signature": "filename.asc"} All the file names are relative paths inside the current working directory. The file name must remain the same for the detached signature. Example: The following script produces signatures for content: #!/usr/bin/env bash FILE_PATH=USD1 SIGNATURE_PATH="USD1.asc" ADMIN_ID="USDPULP_SIGNING_KEY_FINGERPRINT" PASSWORD="password" # Create a detached signature gpg --quiet --batch --pinentry-mode loopback --yes --passphrase \ USDPASSWORD --homedir ~/.gnupg/ --detach-sign --default-key USDADMIN_ID \ --armor --output USDSIGNATURE_PATH USDFILE_PATH # Check the exit status STATUS=USD? if [ USDSTATUS -eq 0 ]; then echo {\"file\": \"USDFILE_PATH\", \"signature\": \"USDSIGNATURE_PATH\"} else exit USDSTATUS fi After you deploy a private automation hub with signing enabled to your Ansible Automation Platform cluster, new UI additions are displayed in collections. Review the Ansible Automation Platform installer inventory file for options that begin with automationhub_* . [all:vars] . . . automationhub_create_default_collection_signing_service = True automationhub_auto_sign_collections = True automationhub_require_content_approval = True automationhub_collection_signing_service_key = /abs/path/to/galaxy_signing_service.gpg automationhub_collection_signing_service_script = /abs/path/to/collection_signing.sh The two new keys ( automationhub_auto_sign_collections and automationhub_require_content_approval ) indicate that the collections must be signed and approved after they are uploaded to private automation hub. 1.3.2. Using content signing services in private automation hub After you have configured content signing on your private automation hub, you can manually sign a new collection or replace an existing signature with a new one. When users download a specific collection, this signature indicates that the collection is for them and has not been modified after certification. You can use content signing on private automation hub in the following scenarios: Your system does not have automatic signing configured and you must use a manual signing process to sign collections. The current signatures on the automatically configured collections are corrupted and need new signatures. You need additional signatures for previously signed content. You want to rotate signatures on your collections. Procedure Log in to Ansible Automation Platform. From the navigation panel, select Automation Content Collection Approvals . The Approval dashboard opens and displays a list of collections. Click the thumbs up icon to the collection you want to approve. On the modal that appears, check the box confirming that you want to approve the collection, and click Approve and sign collections . Verification Navigate to Automation Content Collections to verify that the collections you signed and approved are displayed. 1.3.3. Downloading signature public keys After you sign and approve collections, download the signature public keys from the Ansible Automation Platform UI. You must download the public key before you add it to the local system keyring. Procedure Log in to Ansible Automation Platform. 
From the navigation panel, select Automation Content Signature Keys . The Signature Keys dashboard displays a list of multiple keys: collections and container images. To verify collections, download the key prefixed with collections- . To verify container images, download the key prefixed with container- . Choose one of the following methods to download your public key: Click the Download Key icon to download the public key. Click the Copy to clipboard to the public key you want to copy. Use the public key that you copied to verify the content collection that you are installing. 1.3.4. Configuring Ansible-Galaxy CLI to verify collections You can configure Ansible-Galaxy CLI to verify collections. This ensures that downloaded collections are approved by your organization and have not been changed after they were uploaded to automation hub. If a collection has been signed by automation hub, the server provides ASCII armored, GPG-detached signatures to verify the authenticity of MANIFEST.json before using it to verify the collection's contents. You must opt into signature verification by configuring a keyring for ansible-galaxy or providing the path with the --keyring option. Prerequisites Signed collections are available in automation hub to verify signature. Certified collections can be signed by approved roles within your organization. Public key for verification has been added to the local system keyring. Procedure To import a public key into a non-default keyring for use with ansible-galaxy , run the following command. gpg --import --no-default-keyring --keyring ~/.ansible/pubring.kbx my-public-key.asc Note In addition to any signatures provided by automation hub, signature sources can also be provided in the requirements file and on the command line. Signature sources should be URIs. To verify the collection name provided on the CLI with an additional signature, run the following command: ansible-galaxy collection install namespace.collection --signature https://examplehost.com/detached_signature.asc --signature file:///path/to/local/detached_signature.asc --keyring ~/.ansible/pubring.kbx You can use this option multiple times to provide multiple signatures. Confirm that the collections in a requirements file list any additional signature sources following the collection's signatures key, as in the following example. # requirements.yml collections: - name: ns.coll version: 1.0.0 signatures: - https://examplehost.com/detached_signature.asc - file:///path/to/local/detached_signature.asc ansible-galaxy collection verify -r requirements.yml --keyring ~/.ansible/pubring.kbx When you install a collection from automation hub, the signatures provided by the server are saved along with the installed collections to verify the collection's authenticity. (Optional) If you need to verify the internal consistency of your collection again without querying the Ansible Galaxy server, run the same command you used previously using the --offline option. Are there any recommendations for collection naming? Create a collection with company_name.product format. This format means that multiple products can have different collections under the company namespace. How do I get a namespace on automation hub? By default namespaces used on Ansible Galaxy are also used on automation hub by the Ansible partner team. For any queries and clarifications contact [email protected] . 1.4. 
Ansible validated content Red Hat Ansible Automation Platform includes Ansible validated content, which complements existing Red Hat Ansible Certified Content. Ansible validated content provides an expert-led path for performing operational tasks on a variety of platforms from both Red Hat and our trusted partners. 1.4.1. Configuring validated collections with the installer When you download and run the RPM bundle installer, certified and validated collections are automatically uploaded. Certified collections are uploaded into the rh-certified repository. Validated collections are uploaded into the validated repository. You can change the default configuration by using two variables: automationhub_seed_collections is a boolean that defines whether or not preloading is enabled. automationhub_collection_seed_repository is a variable that enables you to specify the type of content to upload when automationhub_seed_collections is set to true . Possible values are certified or validated . If this variable is missing, both content sets will be uploaded. Note Changing the default configuration may require further platform configuration changes for other content you may use. | [
"curl https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token -d grant_type=refresh_token -d client_id=\"cloud-services\" -d refresh_token=\"{{ user_token }}\" --fail --silent --show-error --output /dev/null",
"collections: # Install a collection from Ansible Galaxy. - name: community.aws version: 5.2.0 source: https://galaxy.ansible.com",
"collections: name: namespace.collection_name version: 1.0.0",
"ansible-galaxy collection install -r requirements.yml",
"ansible-galaxy collection install namespace.collection_name",
"{\"file\": \"filename\", \"signature\": \"filename.asc\"}",
"#!/usr/bin/env bash FILE_PATH=USD1 SIGNATURE_PATH=\"USD1.asc\" ADMIN_ID=\"USDPULP_SIGNING_KEY_FINGERPRINT\" PASSWORD=\"password\" Create a detached signature gpg --quiet --batch --pinentry-mode loopback --yes --passphrase USDPASSWORD --homedir ~/.gnupg/ --detach-sign --default-key USDADMIN_ID --armor --output USDSIGNATURE_PATH USDFILE_PATH Check the exit status STATUS=USD? if [ USDSTATUS -eq 0 ]; then echo {\\\"file\\\": \\\"USDFILE_PATH\\\", \\\"signature\\\": \\\"USDSIGNATURE_PATH\\\"} else exit USDSTATUS fi",
"[all:vars] . . . automationhub_create_default_collection_signing_service = True automationhub_auto_sign_collections = True automationhub_require_content_approval = True automationhub_collection_signing_service_key = /abs/path/to/galaxy_signing_service.gpg automationhub_collection_signing_service_script = /abs/path/to/collection_signing.sh",
"gpg --import --no-default-keyring --keyring ~/.ansible/pubring.kbx my-public-key.asc",
"ansible-galaxy collection install namespace.collection --signature https://examplehost.com/detached_signature.asc --signature file:///path/to/local/detached_signature.asc --keyring ~/.ansible/pubring.kbx",
"requirements.yml collections: - name: ns.coll version: 1.0.0 signatures: - https://examplehost.com/detached_signature.asc - file:///path/to/local/detached_signature.asc ansible-galaxy collection verify -r requirements.yml --keyring ~/.ansible/pubring.kbx"
] | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/managing_automation_content/managing-cert-valid-content |
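As an optional follow-up to the install and verification steps above, you can confirm from the command line which collections are present locally and re-check a signed collection without contacting the server. The namespace.collection_name value is a placeholder, and the keyring path matches the ~/.ansible/pubring.kbx example used earlier.

ansible-galaxy collection list
ansible-galaxy collection verify namespace.collection_name --offline --keyring ~/.ansible/pubring.kbx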
Chapter 22. Creating nested virtual machines | Chapter 22. Creating nested virtual machines You can use nested virtual machines (VMs) if you require a different host operating system than what your local host is running. This eliminates the need for additional physical hardware. Warning In most environments, nested virtualization is only available as a Technology Preview in RHEL 9. For detailed descriptions of the supported and unsupported environments, see Support limitations for nested virtualization . 22.1. What is nested virtualization? With nested virtualization, you can run virtual machines (VMs) within other VMs. A standard VM that runs on a physical host can also act as a second hypervisor and create its own VMs. Nested virtualization terminology Level 0 ( L0 ) A physical host, a bare-metal machine. Level 1 ( L1 ) A standard VM, running on an L0 physical host, that can act as an additional virtual host. Level 2 ( L2 ) A nested VM running on an L1 virtual host. Important: The second level of virtualization severely limits the performance of an L2 VM. For this reason, nested virtualization is primarily intended for development and testing scenarios, such as: Debugging hypervisors in a constrained environment Testing larger virtual deployments on a limited amount of physical resources Warning In most environments, nested virtualization is only available as a Technology Preview in RHEL 9. For detailed descriptions of the supported and unsupported environments, see Support limitations for nested virtualization . Additional resources Support limitations for nested virtualization 22.2. Support limitations for nested virtualization In most environments, nested virtualization is only available as a Technology Preview in RHEL 9. However, you can use a Windows virtual machine (VM) with the Windows Subsystem for Linux (WSL2) to create a virtual Linux environment inside the Windows VM. This use case is fully supported on RHEL 9 under specific conditions. To learn more about the relevant terminology for nested virtualization, see What is nested virtualization? Supported environments To create a supported deployment of nested virtualization, create an L1 Windows VM on a RHEL 9 L0 host and use WSL2 to create a virtual Linux environment inside the L1 Windows VM. Currently, this is the only supported nested environment. Important The L0 host must be an Intel or AMD system. Other architectures, such as ARM or IBM Z, are currently not supported. You must use only the following operating system versions: On the L0 host : On the L1 VMs : RHEL 9.2 and later Windows Server 2019 with WSL2 Windows Server 2022 with WSL2 Windows 10 with WSL2 Windows 11 with WSL2 See Microsoft documentation for instructions on installing WSL2 and choosing supported Linux distributions. To create a supported nested environment, use one of the following procedures: Creating a nested virtual machine on Intel Creating a nested virtual machine on AMD Technology Preview environments These nested environments are available only as a Technology Preview and are not supported. Important The L0 host must be an Intel, AMD, or IBM Z system. Nested virtualization currently does not work on other architectures, such as ARM. 
You must use only the following operating system versions: On the L0 host : On the L1 VMs : On the L2 VMs : RHEL 9.2 and later RHEL 8.8 and later RHEL 8.8 and later RHEL 9.2 and later RHEL 9.2 and later Windows Server 2016 with Hyper-V Windows Server 2019 Windows Server 2019 with Hyper-V Windows Server 2022 Windows Server 2022 with Hyper-V Windows 10 with Hyper-V Windows 11 with Hyper-V Note Creating RHEL L1 VMs is not tested when used in other Red Hat virtualization offerings. These include: Red Hat Virtualization Red Hat OpenStack Platform OpenShift Virtualization To create a Technology Preview nested environment, use one of the following procedures: Creating a nested virtual machine on Intel Creating a nested virtual machine on AMD Creating a nested virtual machine on IBM Z Hypervisor limitations Currently, Red Hat tests nesting only on RHEL-KVM. When RHEL is used as the L0 hypervisor, you can use RHEL or Windows as the L1 hypervisor. When using an L1 RHEL VM on a non-KVM L0 hypervisor, such as VMware ESXi or Amazon Web Services (AWS), creating L2 VMs in the RHEL guest operating system has not been tested and might not work. Feature limitations Use of L2 VMs as hypervisors and creating L3 guests has not been properly tested and is not expected to work. Migrating VMs currently does not work on AMD systems if nested virtualization has been enabled on the L0 host. On an IBM Z system, huge-page backing storage and nested virtualization cannot be used at the same time. Some features available on the L0 host might be unavailable for the L1 hypervisor. Additional resources What is Windows Subsystem for Linux? Creating a nested virtual machine on Intel Creating a nested virtual machine on AMD Creating a nested virtual machine on IBM Z 22.3. Creating a nested virtual machine on Intel Follow the steps below to enable and configure nested virtualization on an Intel host. Warning In most environments, nested virtualization is only available as a Technology Preview in RHEL 9. For detailed descriptions of the supported and unsupported environments, see Support limitations for nested virtualization . Prerequisites An L0 RHEL 9 host running an L1 virtual machine (VM). The hypervisor CPU must support nested virtualization. To verify, use the cat /proc/cpuinfo command on the L0 hypervisor. If the output of the command includes the vmx and ept flags, creating L2 VMs is possible. This is generally the case on Intel Xeon v3 cores and later. Ensure that nested virtualization is enabled on the L0 host: If the command returns 1 or Y , the feature is enabled. Skip the remaining prerequisite steps, and continue with the Procedure section. If the command returns 0 or N but your system supports nested virtualization, use the following steps to enable the feature. Unload the kvm_intel module: Activate the nesting feature: The nesting feature is now enabled, but only until the reboot of the L0 host. To enable it permanently, add the following line to the /etc/modprobe.d/kvm.conf file: Procedure Configure your L1 VM for nested virtualization. Open the XML configuration of the VM. The following example opens the configuration of the Intel-L1 VM: Configure the VM to use host-passthrough CPU mode by editing the <cpu> element: If you require the VM to use a specific CPU model, configure the VM to use custom CPU mode. Inside the <cpu> element, add a <feature policy='require' name='vmx'/> element and a <model> element with the CPU model specified inside. For example: Create an L2 VM within the L1 VM. 
To do this, follow the same procedure as when creating the L1 VM . 22.4. Creating a nested virtual machine on AMD Follow the steps below to enable and configure nested virtualization on an AMD host. Warning In most environments, nested virtualization is only available as a Technology Preview in RHEL 9. For detailed descriptions of the supported and unsupported environments, see Support limitations for nested virtualization . Prerequisites An L0 RHEL 9 host running an L1 virtual machine (VM). The hypervisor CPU must support nested virtualization. To verify, use the cat /proc/cpuinfo command on the L0 hypervisor. If the output of the command includes the svm and npt flags, creating L2 VMs is possible. This is generally the case on AMD EPYC cores and later. Ensure that nested virtualization is enabled on the L0 host: If the command returns 1 or Y , the feature is enabled. Skip the remaining prerequisite steps, and continue with the Procedure section. If the command returns 0 or N , use the following steps to enable the feature. Stop all running VMs on the L0 host. Unload the kvm_amd module: Activate the nesting feature: The nesting feature is now enabled, but only until the reboot of the L0 host. To enable it permanently, add the following to the /etc/modprobe.d/kvm.conf file: Procedure Configure your L1 VM for nested virtualization. Open the XML configuration of the VM. The following example opens the configuration of the AMD-L1 VM: Configure the VM to use host-passthrough CPU mode by editing the <cpu> element: If you require the VM to use a specific CPU model, configure the VM to use custom CPU mode. Inside the <cpu> element, add a <feature policy='require' name='svm'/> element and a <model> element with the CPU model specified inside. For example: Create an L2 VM within the L1 VM. To do this, follow the same procedure as when creating the L1 VM . 22.5. Creating a nested virtual machine on IBM Z Follow the steps below to enable and configure nested virtualization on an IBM Z host. Note IBM Z does not really provide a bare-metal L0 host. Instead, user systems are set up on a logical partition (LPAR), which is already a virtualized system, so it is often referred to as L1 . However, for better alignment with other architectures in this guide, the following steps refer to IBM Z as if it provides an L0 host. To learn more about nested virtualization, see: What is nested virtualization? Warning In most environments, nested virtualization is only available as a Technology Preview in RHEL 9. For detailed descriptions of the supported and unsupported environments, see Support limitations for nested virtualization . Prerequisites An L0 RHEL 9 host running an L1 virtual machine (VM). The hypervisor CPU must support nested virtualization. To verify this is the case, use the cat /proc/cpuinfo command on the L0 hypervisor. If the output of the command includes the sie flag, creating L2 VMs is possible. Ensure that nested virtualization is enabled on the L0 host: If the command returns 1 or Y , the feature is enabled. Skip the remaining prerequisite steps, and continue with the Procedure section. If the command returns 0 or N , use the following steps to enable the feature. Stop all running VMs on the L0 host. Unload the kvm module: Activate the nesting feature: The nesting feature is now enabled, but only until the reboot of the L0 host. To enable it permanently, add the following line to the /etc/modprobe.d/kvm.conf file: Procedure Create an L2 VM within the L1 VM. 
To do this, follow the same procedure as when creating the L1 VM . | [
"modprobe kvm hpage=1 nested=1 modprobe: ERROR: could not insert 'kvm': Invalid argument dmesg |tail -1 [90226.508366] kvm-s390: A KVM host that supports nesting cannot back its KVM guests with huge pages",
"cat /sys/module/kvm_intel/parameters/nested",
"modprobe -r kvm_intel",
"modprobe kvm_intel nested=1",
"options kvm_intel nested=1",
"virsh edit Intel-L1",
"<cpu mode='host-passthrough' />",
"<cpu mode ='custom' match ='exact' check='partial'> <model fallback='allow'> Haswell-noTSX </model> <feature policy='require' name='vmx'/> </cpu>",
"cat /sys/module/kvm_amd/parameters/nested",
"modprobe -r kvm_amd",
"modprobe kvm_amd nested=1",
"options kvm_amd nested=1",
"virsh edit AMD-L1",
"<cpu mode='host-passthrough' />",
"<cpu mode=\"custom\" match=\"exact\" check=\"none\"> <model fallback=\"allow\"> EPYC-IBPB </model> <feature policy=\"require\" name=\"svm\"/> </cpu>",
"cat /sys/module/kvm/parameters/nested",
"modprobe -r kvm",
"modprobe kvm nested=1",
"options kvm nested=1"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_virtualization/creating-nested-virtual-machines_configuring-and-managing-virtualization |
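Before creating an L2 VM, it can help to confirm from inside the L1 guest that the virtualization extensions are actually exposed. These checks are not part of the documented procedure; virt-host-validate is shipped with the libvirt client tools and might need to be installed in the guest first.

grep -E -o 'vmx|svm' /proc/cpuinfo | sort -u
virt-host-validate qemu

If neither vmx nor svm appears, revisit the CPU mode configuration of the L1 VM before attempting to install a nested hypervisor.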
Installing | Installing Red Hat Enterprise Linux AI 1.3 Installation documentation on various platforms Red Hat RHEL AI Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.3/html/installing/index |
Migration Guide | Migration Guide Red Hat build of Keycloak 26.0 Red Hat Customer Content Services | [
"<datasource jndi-name=\"java:jboss/datasources/KeycloakDS\" pool-name=\"KeycloakDS\" enabled=\"true\" use-java-context=\"true\" statistics-enabled=\"true\"> <connection-url>jdbc:postgresql://mypostgres:5432/mydb?currentSchema=myschema</connection-url> <driver>postgresql</driver> <pool> <min-pool-size>5</min-pool-size> <max-pool-size>50</max-pool-size> </pool> <security> <user-name>myuser</user-name> <password>myuser</password> </security> </datasource>",
"kc.sh start --db postgres --db-url-host mypostgres --db-url-port 5432 --db-url-database mydb --db-schema myschema --db-pool-min-size 5 --db-pool-max-size 50 --db-username myser --db-password myuser",
"<tls> <key-stores> <key-store name=\"applicationKS\"> <credential-reference clear-text=\"password\"/> <implementation type=\"JKS\"/> <file path=\"/path/to/application.keystore\"/> </key-store> </key-stores> <key-managers> <key-manager name=\"applicationKM\" key-store=\"applicationKS\"> <credential-reference clear-text=\"password\"/> </key-manager> </key-managers> <server-ssl-contexts> <server-ssl-context name=\"applicationSSC\" key-manager=\"applicationKM\"/> </server-ssl-contexts> </tls>",
"kc.sh start --https-key-store-file /path/to/application.keystore --https-key-store-password password",
"kc.sh start --https-certificate-file /path/to/certfile.pem --https-certificate-key-file /path/to/keyfile.pem",
"<subsystem xmlns=\"urn:jboss:domain:infinispan:13.0\"> <cache-container name=\"keycloak\" marshaller=\"JBOSS\" modules=\"org.keycloak.keycloak-model-infinispan\"> <local-cache name=\"realms\"> <heap-memory size=\"10000\"/> </local-cache> <local-cache name=\"users\"> <heap-memory size=\"10000\"/> </local-cache> <local-cache name=\"sessions\"/> <local-cache name=\"authenticationSessions\"/> <local-cache name=\"offlineSessions\"/> </cache-container> </subsystem>",
"kc.sh start --cache-config-file my-cache-file.xml",
"<jgroups> <stack name=\"my-encrypt-udp\" extends=\"udp\"> ... </stack> </jgroups> <cache-container name=\"keycloak\"> <transport stack=\"tcp\"/> ... </cache-container>",
"kc.sh start --cache-config-file my-cache-file.xml --cache-stack my-encrypt-udp",
"<spi name=\"hostname\"> <default-provider>default</default-provider> <provider name=\"default\" enabled=\"true\"> <properties> <property name=\"frontendUrl\" value=\"myFrontendUrl\"/> <property name=\"forceBackendUrlToFrontendUrl\" value=\"true\"/> </properties> </provider> </spi>",
"kc.sh start --hostname myFrontendUrl",
"kc.sh start --proxy-headers xforwarded",
"<spi name=\"truststore\"> <provider name=\"file\" enabled=\"true\"> <properties> <property name=\"file\" value=\"path/to/myTrustStore.jks\"/> <property name=\"password\" value=\"password\"/> <property name=\"hostname-verification-policy\" value=\"WILDCARD\"/> </properties> </provider> </spi>",
"keytool -importkeystore -srckeystore path/to/myTrustStore.jks -destkeystore path/to/myTrustStore.p12 -srcstoretype jks -deststoretype pkcs12 -srcstorepass password -deststorepass temp-password",
"openssl pkcs12 -in path/to/myTrustStore.p12 -out path/to/myTrustStore.pem -nodes -passin pass:temp-password",
"openssl pkcs12 -export -in path/to/myTrustStore.pem -out path/to/myUnencryptedTrustStore.p12 -nokeys -passout pass:",
"kc.sh start --truststore-paths path/to/myUnencryptedTrustStore.p12 --tls-hostname-verifier WILDCARD",
"<spi name=\"vault\"> <provider name=\"elytron-cs-keystore\" enabled=\"true\"> <properties> <property name=\"location\" value=\"path/to/keystore.p12\"/> <property name=\"secret\" value=\"password\"/> </properties> </provider> </spi>",
"kc.sh start --vault keystore --vault-file /path/to/keystore.p12 --vault-pass password",
"export JAVA_OPTS_APPEND=-XX:+HeapDumpOnOutOfMemoryError kc.sh start",
"<spi name=\"<spi-id>\"> <provider name=\"<provider-id>\" enabled=\"true\"> <properties> <property name=\"<property>\" value=\"<value>\"/> </properties> </provider> </spi>",
"spi-<spi-id>-<provider-id>-<property>=<value>",
"kc.sh start --spi-connections-jpa-legacy-migration-strategy manual",
"kc.sh start --spi-connections-jpa-legacy-migration-export <path>/<file.sql>",
"apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: instances: 1 db: vendor: postgres host: postgres-db usernameSecret: name: keycloak-db-secret key: username passwordSecret: name: keycloak-db-secret key: password http: tlsSecret: example-tls-secret hostname: hostname: test.keycloak.org additionalOptions: - name: spi-connections-http-client-default-connection-pool-size value: 20",
"./kc.sh start --db=postgres --db-url-host=postgres-db --db-username=user --db-password=pass --https-certificate-file=mycertfile --https-certificate-key-file=myprivatekey --hostname=test.keycloak.org --spi-connections-http-client-default-connection-pool-size=20",
"apiVersion: v1 kind: Secret metadata: name: keycloak-db-secret namespace: keycloak labels: app: sso stringData: POSTGRES_DATABASE: kc-db-name POSTGRES_EXTERNAL_ADDRESS: my-postgres-hostname POSTGRES_EXTERNAL_PORT: 5432 POSTGRES_USERNAME: user POSTGRES_PASSWORD: pass type: Opaque",
"apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: db: vendor: postgres host: my-postgres-hostname port: 5432 usernameSecret: name: keycloak-db-secret key: username passwordSecret: name: keycloak-db-secret key: password apiVersion: v1 kind: Secret metadata: name: keycloak-db-secret stringData: username: \"user\" password: \"pass\" type: Opaque",
"apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: http: tlsSecret: example-tls-secret",
"apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: ingress: enabled: false",
"apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: image: quay.io/my-company/my-keycloak:latest",
"apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: unsupported: podTemplate: metadata: labels: foo: \"bar\" spec: containers: - volumeMounts: - name: test-volume mountPath: /mnt/test volumes: - name: test-volume secret: secretName: test-secret",
"apiVersion: keycloak.org/v1alpha1 kind: KeycloakRealm metadata: name: example-keycloakrealm spec: instanceSelector: matchLabels: app: sso realm: id: \"basic\" realm: \"basic\" enabled: True displayName: \"Basic Realm\"",
"apiVersion: k8s.keycloak.org/v2alpha1 kind: KeycloakRealmImport metadata: name: example-keycloakrealm spec: keycloakCRName: example-kc realm: id: \"basic\" realm: \"basic\" enabled: True displayName: \"Basic Realm\"",
"apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: rhbk spec: instances: 1 db: vendor: postgres host: postgres-db usernameSecret: name: keycloak-db-secret key: username passwordSecret: name: keycloak-db-secret key: password http: tlsSecret: sso-x509-https-secret",
"apiVersion: apps.openshift.io/v1 kind: DeploymentConfig metadata: name: rhsso spec: replicas: 1 template: spec: volumes: - name: sso-x509-https-volume secret: secretName: sso-x509-https-secret defaultMode: 420 containers: volumeMounts: - name: sso-x509-https-volume readOnly: true env: - name: DB_SERVICE_PREFIX_MAPPING value: postgres-db=DB - name: DB_USERNAME value: username - name: DB_PASSWORD value: password",
"additionalOptions: name: proxy value: reencrypt",
"401 Unauthorized WWW-Authenticate: Bearer realm=\"myrealm\"",
"400 Bad Request WWW-Authenticate: Bearer realm=\"myrealm\", error=\"invalid_request\", error_description=\"...\"",
"403 Forbidden WWW-Authenticate: Bearer realm=\"myrealm\", error=\"insufficient_scope\", error_description=\"Missing openid scope\"",
"500 Internal Server Error",
"401 Unauthorized WWW-Authenticate: Bearer realm=\"myrealm\", error=\"invalid_token\", error_description=\"...\"",
"{ \"realm\": \"quickstart\", \"auth-server-url\": \"http://localhost:8180\", \"ssl-required\": \"external\", \"resource\": \"jakarta-servlet-authz-client\", \"credentials\": { \"secret\": \"secret\" } }",
"<dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-policy-enforcer</artifactId> <version>USD{Red Hat build of Keycloak .version}</version> </dependency>",
"@Context org.jboss.resteasy.spi.HttpRequest request; @Context org.jboss.resteasy.spi.HttpResponse response;",
"KeycloakSession session = // obtain the session, which is usually available when creating a custom provider from a factory KeycloakContext context = session.getContext(); HttpRequest request = context.getHttpRequest(); HttpResponse response = context.getHttpResponse();",
"KeycloakSession session = // obtain the session KeycloakContext context = session.getContext(); MyContextualObject myContextualObject = context.getContextObject(MyContextualObject.class);",
"@Deprecated List<GroupModel> getGroups(RealmModel realm);",
"Stream<GroupModel> getGroupsStream(RealmModel realm);",
"@Deprecated UserModel getUserById(String id, RealmModel realm);",
"UserModel getUserById(RealmModel realm, String id)",
"session .userLocalStorage() ;",
"session .users() ;",
"session .userLocalStorage() ;",
"((LegacyDatastoreProvider) session.getProvider(DatastoreProvider.class)) .userLocalStorage() ;",
"realm .getClientStorageProvidersStream() ...;",
"((LegacyRealmModel) realm) .getClientStorageProvidersStream() ...;",
"public class MyClass extends RealmModel { /* might not compile due to @Override annotations for methods no longer present in the interface RealmModel. / / ... */ }",
"public class MyClass extends LegacyRealmModel { /* ... */ }",
"session**.userCache()**.evict(realm, user);",
"UserStorageUitl.userCache(session);",
"UserCache cache = session.getProvider(UserCache.class); if (cache != null) cache.evict(realm)();",
"session.invalidate(InvalidationHandler.ObjectType.REALM, realm.getId());",
"session.userCredentialManager() .createCredential (realm, user, credentialModel)",
"user.credentialManager() .createStoredCredential (credentialModel)",
"public class MyUserStorageProvider implements UserLookupProvider, ... { /* ... */ protected UserModel createAdapter(RealmModel realm, String username) { return new AbstractUserAdapter(session, realm, model) { @Override public String getUsername() { return username; } }; } }",
"public class MyUserStorageProvider implements UserLookupProvider, ... { /* ... */ protected UserModel createAdapter(RealmModel realm, String username) { return new AbstractUserAdapter(session, realm, model) { @Override public String getUsername() { return username; } @Override public SubjectCredentialManager credentialManager() { return new LegacyUserCredentialManager(session, realm, this); } }; } }",
"<dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-admin-client-jakarta</artifactId> <version>18.0.0.redhat-00001</version> </dependency>",
"<dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-admin-client</artifactId> <version>22.0.0.redhat-00001</version> </dependency>",
"<dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-admin-client</artifactId> <version>18.0.0.redhat-00001</version> </dependency>",
"<dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-admin-client-jee</artifactId> <version>22.0.0.redhat-00001</version> </dependency>",
"kc.sh start --spi-user-profile-declarative-user-profile-max-email-local-part-length=100"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html-single/migration_guide/migrating-providers |
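A note on verifying the admin client migration: the dependency snippets above change only the artifact ID, and the Java API of the admin client itself stays the same. The following minimal sketch shows a typical use of the migrated keycloak-admin-client artifact; the server URL, realm names, and credentials are placeholder assumptions for illustration, not values taken from this guide.

import org.keycloak.OAuth2Constants;
import org.keycloak.admin.client.Keycloak;
import org.keycloak.admin.client.KeycloakBuilder;

public class AdminClientSketch {
    public static void main(String[] args) {
        // Placeholder endpoint and credentials; adjust for your deployment.
        Keycloak keycloak = KeycloakBuilder.builder()
                .serverUrl("https://keycloak.example.com")
                .realm("master")
                .grantType(OAuth2Constants.PASSWORD)
                .clientId("admin-cli")
                .username("admin")
                .password("admin-password")
                .build();

        // List the users of an assumed realm to confirm the client still works after the migration.
        keycloak.realm("basic").users().list()
                .forEach(user -> System.out.println(user.getUsername()));

        keycloak.close();
    }
}

Because admin-cli is a public client by default, no client secret is needed for this password grant; if your deployment uses a confidential client instead, add .clientSecret() to the builder.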
Chapter 19. Mounting file systems on demand | Chapter 19. Mounting file systems on demand As a system administrator, you can configure file systems, such as NFS, to mount automatically on demand. 19.1. The autofs service The autofs service can mount and unmount file systems automatically (on-demand), therefore saving system resources. It can be used to mount file systems such as NFS, AFS, SMBFS, CIFS, and local file systems. One drawback of permanent mounting using the /etc/fstab configuration is that, regardless of how infrequently a user accesses the mounted file system, the system must dedicate resources to keep the mounted file system in place. This might affect system performance when, for example, the system is maintaining NFS mounts to many systems at one time. An alternative to /etc/fstab is to use the kernel-based autofs service. It consists of the following components: A kernel module that implements a file system, and A user-space service that performs all of the other functions. Additional resources autofs(8) man page on your system 19.2. The autofs configuration files This section describes the usage and syntax of configuration files used by the autofs service. The master map file The autofs service uses /etc/auto.master (master map) as its default primary configuration file. This can be changed to use another supported network source and name using the autofs configuration in the /etc/autofs.conf configuration file in conjunction with the Name Service Switch (NSS) mechanism. All on-demand mount points must be configured in the master map. Mount point, host name, exported directory, and options can all be specified in a set of files (or other supported network sources) rather than configuring them manually for each host. The master map file lists mount points controlled by autofs , and their corresponding configuration files or network sources known as automount maps. The format of the master map is as follows: The variables used in this format are: mount-point The autofs mount point; for example, /mnt/data . map-file The map source file, which contains a list of mount points and the file system location from which those mount points should be mounted. options If supplied, these apply to all entries in the given map, if they do not themselves have options specified. Example 19.1. The /etc/auto.master file The following is a sample line from /etc/auto.master file: Map files Map files configure the properties of individual on-demand mount points. The automounter creates the directories if they do not exist. If the directories exist before the automounter was started, the automounter will not remove them when it exits. If a timeout is specified, the directory is automatically unmounted if the directory is not accessed for the timeout period. The general format of maps is similar to the master map. However, the options field appears between the mount point and the location instead of at the end of the entry as in the master map: The variables used in this format are: mount-point This refers to the autofs mount point. This can be a single directory name for an indirect mount or the full path of the mount point for direct mounts. Each direct and indirect map entry key ( mount-point ) can be followed by a space separated list of offset directories (subdirectory names each beginning with / ) making them what is known as a multi-mount entry. 
options When supplied, these options are appended to the master map entry options, if any, or used instead of the master map options if the configuration entry append_options is set to no . location This refers to the file system location such as a local file system path (preceded with the Sun map format escape character : for map names beginning with / ), an NFS file system or other valid file system location. Example 19.2. A map file The following is a sample from a map file; for example, /etc/auto.misc : The first column in the map file indicates the autofs mount point: sales and payroll from the server called personnel . The second column indicates the options for the autofs mount. The third column indicates the source of the mount. Following the given configuration, the autofs mount points will be /home/payroll and /home/sales . The -fstype= option is often omitted and is not needed if the file system is NFS, including mounts for NFSv4 if the system default is NFSv4 for NFS mounts. Using the given configuration, if a process requires access to an autofs unmounted directory such as /home/payroll/2006/July.sxc , the autofs service automatically mounts the directory. The amd map format The autofs service recognizes map configuration in the amd format as well. This is useful if you want to reuse existing automounter configuration written for the am-utils service, which has been removed from Red Hat Enterprise Linux. However, Red Hat recommends using the simpler autofs format described in the sections. Additional resources autofs(5) , autofs.conf(5) , and auto.master(5) man pages on your system /usr/share/doc/autofs/README.amd-maps file 19.3. Configuring autofs mount points Configure on-demand mount points by using the autofs service. Prerequisites Install the autofs package: Start and enable the autofs service: Procedure Create a map file for the on-demand mount point, located at /etc/auto. identifier . Replace identifier with a name that identifies the mount point. In the map file, enter the mount point, options, and location fields as described in The autofs configuration files section. Register the map file in the master map file, as described in The autofs configuration files section. Allow the service to re-read the configuration, so it can manage the newly configured autofs mount: Try accessing content in the on-demand directory: 19.4. Automounting NFS server user home directories with autofs service Configure the autofs service to mount user home directories automatically. Prerequisites The autofs package is installed. The autofs service is enabled and running. Procedure Specify the mount point and location of the map file by editing the /etc/auto.master file on a server on which you need to mount user home directories. To do so, add the following line into the /etc/auto.master file: Create a map file with the name of /etc/auto.home on a server on which you need to mount user home directories, and edit the file with the following parameters: You can skip fstype parameter, as it is nfs by default. For more information, see autofs(5) man page on your system. Reload the autofs service: 19.5. Overriding or augmenting autofs site configuration files It is sometimes useful to override site defaults for a specific mount point on a client system. Example 19.3. 
Initial conditions For example, consider the following conditions: Automounter maps are stored in NIS and the /etc/nsswitch.conf file has the following directive: The auto.master file contains: The NIS auto.master map file contains: The NIS auto.home map contains: The autofs configuration option BROWSE_MODE is set to yes : The file map /etc/auto.home does not exist. Procedure This section describes the examples of mounting home directories from a different server and augmenting auto.home with only selected entries. Example 19.4. Mounting home directories from a different server Given the preceding conditions, let's assume that the client system needs to override the NIS map auto.home and mount home directories from a different server. In this case, the client needs to use the following /etc/auto.master map: The /etc/auto.home map contains the entry: Because the automounter only processes the first occurrence of a mount point, the /home directory contains the content of /etc/auto.home instead of the NIS auto.home map. Example 19.5. Augmenting auto.home with only selected entries Alternatively, to augment the site-wide auto.home map with just a few entries: Create an /etc/auto.home file map, and in it put the new entries. At the end, include the NIS auto.home map. Then the /etc/auto.home file map looks similar to: With these NIS auto.home map conditions, listing the content of the /home directory outputs: This last example works as expected because autofs does not include the contents of a file map of the same name as the one it is reading. As such, autofs moves on to the map source in the nsswitch configuration. 19.6. Using LDAP to store automounter maps Configure autofs to store automounter maps in LDAP configuration rather than in autofs map files. Prerequisites LDAP client libraries must be installed on all systems configured to retrieve automounter maps from LDAP. On Red Hat Enterprise Linux, the openldap package should be installed automatically as a dependency of the autofs package. Procedure To configure LDAP access, modify the /etc/openldap/ldap.conf file. Ensure that the BASE , URI , and schema options are set appropriately for your site. The most recently established schema for storing automount maps in LDAP is described by the rfc2307bis draft. To use this schema, set it in the /etc/autofs.conf configuration file by removing the comment characters from the schema definition. For example: Example 19.6. Setting autofs configuration Ensure that all other schema entries are commented in the configuration. The automountKey attribute of the rfc2307bis schema replaces the cn attribute of the rfc2307 schema. Following is an example of an LDAP Data Interchange Format (LDIF) configuration: Example 19.7. LDIF Configuration Additional resources The rfc2307bis draft 19.7. Using systemd.automount to mount a file system on demand with /etc/fstab Mount a file system on demand using the automount systemd units when mount point is defined in /etc/fstab . You have to add an automount unit for each mount and enable it. Procedure Add desired fstab entry as documented in Persistently mounting file systems . For example: Add x-systemd.automount to the options field of entry created in the step. 
Load newly created units so that your system registers the new configuration: Start the automount unit: Verification Check that mount-point.automount is running: Check that automounted directory has desired content: Additional resources systemd.automount(5) and systemd.mount(5) man pages on your system Managing systemd 19.8. Using systemd.automount to mount a file system on-demand with a mount unit Mount a file system on-demand using the automount systemd units when mount point is defined by a mount unit. You have to add an automount unit for each mount and enable it. Procedure Create a mount unit. For example: Create a unit file with the same name as the mount unit, but with extension .automount . Open the file and create an [Automount] section. Set the Where= option to the mount path: Load newly created units so that your system registers the new configuration: Enable and start the automount unit instead: Verification Check that mount-point.automount is running: Check that automounted directory has desired content: Additional resources systemd.automount(5) and systemd.mount(5) man pages on your system Managing systemd | [
"mount-point map-name options",
"/mnt/data /etc/auto.data",
"mount-point options location",
"payroll -fstype=nfs4 personnel:/exports/payroll sales -fstype=xfs :/dev/hda4",
"yum install autofs",
"systemctl enable --now autofs",
"systemctl reload autofs.service",
"ls automounted-directory",
"/home /etc/auto.home",
"* -fstype=nfs,rw,sync host.example.com :/home/&",
"systemctl reload autofs",
"automount: files nis",
"+auto.master",
"/home auto.home",
"beth fileserver.example.com:/export/home/beth joe fileserver.example.com:/export/home/joe * fileserver.example.com:/export/home/&",
"BROWSE_MODE=\"yes\"",
"/home \\u00ad/etc/auto.home +auto.master",
"* host.example.com:/export/home/&",
"mydir someserver:/export/mydir +auto.home",
"ls /home beth joe mydir",
"DEFAULT_MAP_OBJECT_CLASS=\"automountMap\" DEFAULT_ENTRY_OBJECT_CLASS=\"automount\" DEFAULT_MAP_ATTRIBUTE=\"automountMapName\" DEFAULT_ENTRY_ATTRIBUTE=\"automountKey\" DEFAULT_VALUE_ATTRIBUTE=\"automountInformation\"",
"auto.master, example.com dn: automountMapName=auto.master,dc=example,dc=com objectClass: top objectClass: automountMap automountMapName: auto.master /home, auto.master, example.com dn: automountMapName=auto.master,dc=example,dc=com objectClass: automount automountKey: /home automountInformation: auto.home auto.home, example.com dn: automountMapName=auto.home,dc=example,dc=com objectClass: automountMap automountMapName: auto.home foo, auto.home, example.com dn: automountKey=foo,automountMapName=auto.home,dc=example,dc=com objectClass: automount automountKey: foo automountInformation: filer.example.com:/export/foo /, auto.home, example.com dn: automountKey=/,automountMapName=auto.home,dc=example,dc=com objectClass: automount automountKey: / automountInformation: filer.example.com:/export/&",
"/dev/disk/by-id/da875760-edb9-4b82-99dc-5f4b1ff2e5f4 /mount/point xfs defaults 0 0",
"systemctl daemon-reload",
"systemctl start mount-point.automount",
"systemctl status mount-point.automount",
"ls /mount/point",
"mount-point.mount [Mount] What= /dev/disk/by-uuid/f5755511-a714-44c1-a123-cfde0e4ac688 Where= /mount/point Type= xfs",
"[Automount] Where= /mount/point [Install] WantedBy=multi-user.target",
"systemctl daemon-reload",
"systemctl enable --now mount-point.automount",
"systemctl status mount-point.automount",
"ls /mount/point"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_file_systems/mounting-file-systems-on-demand_managing-file-systems |
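To tie together the master map, map file, and reload steps described in this chapter, the following minimal sketch configures a single on-demand NFS mount. The server name and export path are assumptions chosen for the example; substitute your own values.

# /etc/auto.master -- register an indirect mount point and its map file
/mnt/data  /etc/auto.data

# /etc/auto.data -- map file entries: key, options, location (server and export are assumed)
backup  -fstype=nfs4,ro  fileserver.example.com:/srv/backup

# Reload autofs and trigger the mount by accessing the key
systemctl reload autofs.service
ls /mnt/data/backup

With this configuration, /mnt/data/backup is mounted the first time it is accessed and unmounted again after the configured timeout expires.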
Chapter 77. JacksonXML | Chapter 77. JacksonXML Jackson XML is a Data Format which uses the Jackson library with the XMLMapper extension to unmarshal an XML payload into Java objects or to marshal Java objects into an XML payload. NOTE: If you are familiar with Jackson, this XML data format behaves in the same way as its JSON counterpart, and thus can be used with classes annotated for JSON serialization/deserialization. This extension also mimics JAXB's "Code first" approach . This data format relies on Woodstox (especially for features like pretty printing), a fast and efficient XML processor. from("activemq:My.Queue"). unmarshal().jacksonxml(). to("mqseries:Another.Queue"); 77.1. JacksonXML Options The JacksonXML dataformat supports 15 options, which are listed below. Name Default Java Type Description xmlMapper String Lookup and use the existing XmlMapper with the given id. prettyPrint false Boolean To enable pretty printing output nicely formatted. Is by default false. unmarshalType String Class name of the java type to use when unmarshalling. jsonView String When marshalling a POJO to JSON you might want to exclude certain fields from the JSON output. With Jackson you can use JSON views to accomplish this. This option is to refer to the class which has JsonView annotations. include String If you want to marshal a pojo to JSON, and the pojo has some fields with null values. And you want to skip these null values, you can set this option to NON_NULL. allowJmsType Boolean Used for JMS users to allow the JMSType header from the JMS spec to specify a FQN classname to use to unmarshal to. collectionType String Refers to a custom collection type to lookup in the registry to use. This option should rarely be used, but allows to use different collection types than java.util.Collection based as default. useList Boolean To unmarshal to a List of Map or a List of Pojo. enableJaxbAnnotationModule Boolean Whether to enable the JAXB annotations module when using jackson. When enabled then JAXB annotations can be used by Jackson. moduleClassNames String To use custom Jackson modules com.fasterxml.jackson.databind.Module specified as a String with FQN class names. Multiple classes can be separated by comma. moduleRefs String To use custom Jackson modules referred from the Camel registry. Multiple modules can be separated by comma. enableFeatures String Set of features to enable on the Jackson com.fasterxml.jackson.databind.ObjectMapper. The features should be a name that matches a enum from com.fasterxml.jackson.databind.SerializationFeature, com.fasterxml.jackson.databind.DeserializationFeature, or com.fasterxml.jackson.databind.MapperFeature Multiple features can be separated by comma. disableFeatures String Set of features to disable on the Jackson com.fasterxml.jackson.databind.ObjectMapper. The features should be a name that matches a enum from com.fasterxml.jackson.databind.SerializationFeature, com.fasterxml.jackson.databind.DeserializationFeature, or com.fasterxml.jackson.databind.MapperFeature Multiple features can be separated by comma. allowUnmarshallType Boolean If enabled then Jackson is allowed to attempt to use the CamelJacksonUnmarshalType header during the unmarshalling. This should only be enabled when desired to be used. contentTypeHeader Boolean Whether the data format should set the Content-Type header with the type from the data format. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSON. 77.1.1. 
Using Jackson XML in Spring DSL When using Data Format in Spring DSL you need to declare the data formats first. This is done in the DataFormats XML tag. <dataFormats> <!-- here we define an XML data format with the id jack and that it should use the TestPojo as the class type when doing unmarshal. The unmarshalType is optional, if not provided Camel will use a Map as the type --> <jacksonxml id="jack" unmarshalType="org.apache.camel.component.jacksonxml.TestPojo"/> </dataFormats> And then you can refer to this id in the route: <route> <from uri="direct:back"/> <unmarshal><custom ref="jack"/></unmarshal> <to uri="mock:reverse"/> </route> 77.1.2. Excluding POJO fields from marshalling When marshalling a POJO to XML you might want to exclude certain fields from the XML output. With Jackson you can use JSON views to accomplish this. First create one or more marker classes. Use the marker classes with the @JsonView annotation to include/exclude certain fields. The annotation also works on getters. Finally use the Camel JacksonXMLDataFormat to marshal the above POJO to XML. Note that the weight field is missing in the resulting XML: <pojo age="30" weight="70"/> 77.2. Include/Exclude fields using the jsonView attribute with JacksonXMLDataFormat As an example of using this attribute, instead of: JacksonXMLDataFormat ageViewFormat = new JacksonXMLDataFormat(TestPojoView.class, Views.Age.class); from("direct:inPojoAgeView"). marshal(ageViewFormat); you can directly specify your JSON view inside the Java DSL as: from("direct:inPojoAgeView"). marshal().jacksonxml(TestPojoView.class, Views.Age.class); And the same in XML DSL: <from uri="direct:inPojoAgeView"/> <marshal> <jacksonxml unmarshalType="org.apache.camel.component.jacksonxml.TestPojoView" jsonView="org.apache.camel.component.jacksonxml.Views$Age"/> </marshal> 77.3. Setting serialization include option If you want to marshal a pojo to XML, and the pojo has some fields with null values that you want to skip, you can set an annotation on the pojo: @JsonInclude(Include.NON_NULL) public class MyPojo { ... } But this requires you to include that annotation in your pojo source code. You can also configure the Camel JacksonXMLDataFormat to set the include option, as shown below: JacksonXMLDataFormat format = new JacksonXMLDataFormat(); format.setInclude("NON_NULL"); Or from XML DSL you configure this as: <dataFormats> <jacksonxml id="jacksonxml" include="NON_NULL"/> </dataFormats> 77.4. Unmarshalling from XML to POJO with dynamic class name If you use Jackson to unmarshal XML to a POJO, you can now specify a header in the message that indicates which class name to unmarshal to. The header has the key CamelJacksonUnmarshalType. If that header is present in the message, Jackson uses it as the FQN of the POJO class to unmarshal the XML payload to. For JMS users, the JMSType header from the JMS spec can also indicate the class name to unmarshal to. To enable support for JMSType, turn it on in the Jackson data format as shown: JacksonDataFormat format = new JacksonDataFormat(); format.setAllowJmsType(true); Or from XML DSL you configure this as: <dataFormats> <jacksonxml id="jacksonxml" allowJmsType="true"/> </dataFormats> 77.5. Unmarshalling from XML to List<Map> or List<pojo> If you are using Jackson to unmarshal XML to a list of map/pojo, you can now specify this by setting useList="true" or use the org.apache.camel.component.jacksonxml.ListJacksonXMLDataFormat .
For example with Java you can do as shown below: JacksonXMLDataFormat format = new ListJacksonXMLDataFormat(); // or JacksonXMLDataFormat format = new JacksonXMLDataFormat(); format.useList(); // and you can specify the pojo class type also format.setUnmarshalType(MyPojo.class); And if you use XML DSL then you configure to use list using useList attribute as shown below: <dataFormats> <jacksonxml id="jack" useList="true"/> </dataFormats> And you can specify the pojo type also: <dataFormats> <jacksonxml id="jack" useList="true" unmarshalType="com.foo.MyPojo"/> </dataFormats> 77.6. Using custom Jackson modules You can use custom Jackson modules by specifying the class names of those using the moduleClassNames option as shown below. <dataFormats> <jacksonxml id="jack" useList="true" unmarshalType="com.foo.MyPojo" moduleClassNames="com.foo.MyModule,com.foo.MyOtherModule"/> </dataFormats> When using moduleClassNames, the custom Jackson modules are not configured, but are created using the default constructor and used as-is. If a custom module needs any custom configuration, then an instance of the module can be created and configured, and then use moduleRefs to refer to the module as shown below: <bean id="myJacksonModule" class="com.foo.MyModule"> ... // configure the module as you want </bean> <dataFormats> <jacksonxml id="jacksonxml" useList="true" unmarshalType="com.foo.MyPojo" moduleRefs="myJacksonModule"/> </dataFormats> 77.7. Enabling or disabling features using Jackson Jackson has a number of features you can enable or disable, which its ObjectMapper uses. For example, to disable failing on unknown properties when unmarshalling, you can configure this using the disableFeatures option: <dataFormats> <jacksonxml id="jacksonxml" unmarshalType="com.foo.MyPojo" disableFeatures="FAIL_ON_UNKNOWN_PROPERTIES"/> </dataFormats> You can disable multiple features by separating the values using comma. The values for the features must be the names of the enums from Jackson from the following enum classes: com.fasterxml.jackson.databind.SerializationFeature com.fasterxml.jackson.databind.DeserializationFeature com.fasterxml.jackson.databind.MapperFeature To enable a feature, use the enableFeatures option instead. From Java code you can use the type safe methods from the camel-jackson module: JacksonDataFormat df = new JacksonDataFormat(MyPojo.class); df.disableFeature(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES); df.disableFeature(DeserializationFeature.FAIL_ON_NULL_FOR_PRIMITIVES); 77.8. Converting Maps to POJO using Jackson The Jackson ObjectMapper can be used to convert maps to POJO objects. The Jackson component comes with a data converter that can be used to convert a java.util.Map instance to non-String, non-primitive and non-Number objects. Map<String, Object> invoiceData = new HashMap<String, Object>(); invoiceData.put("netValue", 500); producerTemplate.sendBody("direct:mapToInvoice", invoiceData); ... // Later in the processor Invoice invoice = exchange.getIn().getBody(Invoice.class); If there is a single ObjectMapper instance available in the Camel registry, it will be used by the converter to perform the conversion. Otherwise the default mapper will be used. 77.9.
Formatted XML marshalling (pretty-printing) Using the prettyPrint option one can output a well formatted XML while marshalling: <dataFormats> <jacksonxml id="jack" prettyPrint="true"/> </dataFormats> And in Java DSL: from("direct:inPretty").marshal().jacksonxml(true); Please note that there are 5 different overloaded jacksonxml() DSL methods which support the prettyPrint option in combination with other settings for unmarshalType , jsonView etc. 77.10. Dependencies To use Jackson XML in your camel routes you need to add the dependency on camel-jacksonxml which implements this data format. If you use maven you could just add the following to your pom.xml, substituting the version number for the latest & greatest release (see the download page for the latest versions). <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-jacksonxml</artifactId> <version>{CamelSBVersion}</version> <!-- use the same version as your Camel core version --> </dependency> 77.11. Spring Boot Auto-Configuration When using jacksonxml with Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jacksonxml-starter</artifactId> </dependency> The component supports 16 options, which are listed below. Name Description Default Type camel.dataformat.jacksonxml.allow-jms-type Used for JMS users to allow the JMSType header from the JMS spec to specify a FQN classname to use to unmarshal to. false Boolean camel.dataformat.jacksonxml.allow-unmarshall-type If enabled then Jackson is allowed to attempt to use the CamelJacksonUnmarshalType header during the unmarshalling. This should only be enabled when desired to be used. false Boolean camel.dataformat.jacksonxml.collection-type Refers to a custom collection type to lookup in the registry to use. This option should rarely be used, but allows to use different collection types than java.util.Collection based as default. String camel.dataformat.jacksonxml.content-type-header Whether the data format should set the Content-Type header with the type from the data format. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSON. true Boolean camel.dataformat.jacksonxml.disable-features Set of features to disable on the Jackson com.fasterxml.jackson.databind.ObjectMapper. The features should be a name that matches a enum from com.fasterxml.jackson.databind.SerializationFeature, com.fasterxml.jackson.databind.DeserializationFeature, or com.fasterxml.jackson.databind.MapperFeature Multiple features can be separated by comma. String camel.dataformat.jacksonxml.enable-features Set of features to enable on the Jackson com.fasterxml.jackson.databind.ObjectMapper. The features should be a name that matches a enum from com.fasterxml.jackson.databind.SerializationFeature, com.fasterxml.jackson.databind.DeserializationFeature, or com.fasterxml.jackson.databind.MapperFeature Multiple features can be separated by comma. String camel.dataformat.jacksonxml.enable-jaxb-annotation-module Whether to enable the JAXB annotations module when using jackson. When enabled then JAXB annotations can be used by Jackson. false Boolean camel.dataformat.jacksonxml.enabled Whether to enable auto configuration of the jacksonxml data format. This is enabled by default. Boolean camel.dataformat.jacksonxml.include If you want to marshal a pojo to JSON, and the pojo has some fields with null values. 
And you want to skip these null values, you can set this option to NON_NULL. String camel.dataformat.jacksonxml.json-view When marshalling a POJO to JSON you might want to exclude certain fields from the JSON output. With Jackson you can use JSON views to accomplish this. This option is to refer to the class which has JsonView annotations. String camel.dataformat.jacksonxml.module-class-names To use custom Jackson modules com.fasterxml.jackson.databind.Module specified as a String with FQN class names. Multiple classes can be separated by comma. String camel.dataformat.jacksonxml.module-refs To use custom Jackson modules referred from the Camel registry. Multiple modules can be separated by comma. String camel.dataformat.jacksonxml.pretty-print To enable pretty printing output nicely formatted. Is by default false. false Boolean camel.dataformat.jacksonxml.unmarshal-type Class name of the java type to use when unmarshalling. String camel.dataformat.jacksonxml.use-list To unmarshal to a List of Map or a List of Pojo. false Boolean camel.dataformat.jacksonxml.xml-mapper Lookup and use the existing XmlMapper with the given id. String | [
"from(\"activemq:My.Queue\"). unmarshal().jacksonxml(). to(\"mqseries:Another.Queue\");",
"<dataFormats> <!-- here we define a Xml data format with the id jack and that it should use the TestPojo as the class type when doing unmarshal. The unmarshalType is optional, if not provided Camel will use a Map as the type --> <jacksonxml id=\"jack\" unmarshalType=\"org.apache.camel.component.jacksonxml.TestPojo\"/> </dataFormats>",
"<route> <from uri=\"direct:back\"/> <unmarshal><custom ref=\"jack\"/></unmarshal> <to uri=\"mock:reverse\"/> </route>",
"<pojo age=\"30\" weight=\"70\"/>",
"JacksonXMLDataFormat ageViewFormat = new JacksonXMLDataFormat(TestPojoView.class, Views.Age.class); from(\"direct:inPojoAgeView\"). marshal(ageViewFormat);",
"from(\"direct:inPojoAgeView\"). marshal().jacksonxml(TestPojoView.class, Views.Age.class);",
"<from uri=\"direct:inPojoAgeView\"/> <marshal> <jacksonxml unmarshalType=\"org.apache.camel.component.jacksonxml.TestPojoView\" jsonView=\"org.apache.camel.component.jacksonxml.ViewsUSDAge\"/> </marshal>",
"@JsonInclude(Include.NON_NULL) public class MyPojo { }",
"JacksonXMLDataFormat format = new JacksonXMLDataFormat(); format.setInclude(\"NON_NULL\");",
"<dataFormats> <jacksonxml id=\"jacksonxml\" include=\"NON_NULL\"/> </dataFormats>",
"For JMS end users there is the JMSType header from the JMS spec that indicates that also. To enable support for JMSType you would need to turn that on, on the jackson data format as shown:",
"JacksonDataFormat format = new JacksonDataFormat(); format.setAllowJmsType(true);",
"<dataFormats> <jacksonxml id=\"jacksonxml\" allowJmsType=\"true\"/> </dataFormats>",
"JacksonXMLDataFormat format = new ListJacksonXMLDataFormat(); // or JacksonXMLDataFormat format = new JacksonXMLDataFormat(); format.useList(); // and you can specify the pojo class type also format.setUnmarshalType(MyPojo.class);",
"<dataFormats> <jacksonxml id=\"jack\" useList=\"true\"/> </dataFormats>",
"<dataFormats> <jacksonxml id=\"jack\" useList=\"true\" unmarshalType=\"com.foo.MyPojo\"/> </dataFormats>",
"<dataFormats> <jacksonxml id=\"jack\" useList=\"true\" unmarshalType=\"com.foo.MyPojo\" moduleClassNames=\"com.foo.MyModule,com.foo.MyOtherModule\"/> </dataFormats>",
"<bean id=\"myJacksonModule\" class=\"com.foo.MyModule\"> ... // configure the module as you want </bean> <dataFormats> <jacksonxml id=\"jacksonxml\" useList=\"true\" unmarshalType=\"com.foo.MyPojo\" moduleRefs=\"myJacksonModule\"/> </dataFormats>",
"Multiple modules can be specified separated by comma, such as moduleRefs=\"myJacksonModule,myOtherModule\"",
"<dataFormats> <jacksonxml id=\"jacksonxml\" unmarshalType=\"com.foo.MyPojo\" disableFeatures=\"FAIL_ON_UNKNOWN_PROPERTIES\"/> </dataFormats>",
"JacksonDataFormat df = new JacksonDataFormat(MyPojo.class); df.disableFeature(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES); df.disableFeature(DeserializationFeature.FAIL_ON_NULL_FOR_PRIMITIVES);",
"Map<String, Object> invoiceData = new HashMap<String, Object>(); invoiceData.put(\"netValue\", 500); producerTemplate.sendBody(\"direct:mapToInvoice\", invoiceData); // Later in the processor Invoice invoice = exchange.getIn().getBody(Invoice.class);",
"<dataFormats> <jacksonxml id=\"jack\" prettyPrint=\"true\"/> </dataFormats>",
"from(\"direct:inPretty\").marshal().jacksonxml(true);",
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-jacksonxml</artifactId> <version>{CamelSBVersion}</version> <!-- use the same version as your Camel core version --> </dependency>",
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jacksonxml-starter</artifactId> </dependency>"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_for_spring_boot/3.20/html/camel_spring_boot_reference/csb-camel-jacksonxml-dataformat-starter |
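As a consolidated example of the options described in this chapter, the following sketch wires a JacksonXMLDataFormat into a Java RouteBuilder. Only options documented above are used; the Order POJO and the endpoint URIs are assumptions for illustration.

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.jacksonxml.JacksonXMLDataFormat;

public class OrderXmlRoutes extends RouteBuilder {
    @Override
    public void configure() {
        // Order is an assumed POJO with standard getters and setters.
        JacksonXMLDataFormat xml = new JacksonXMLDataFormat();
        xml.setUnmarshalType(Order.class);
        xml.setInclude("NON_NULL");   // skip null fields when marshalling

        // POJO to XML
        from("direct:toXml").marshal(xml).to("log:xml-out");

        // XML back to Order instances
        from("direct:fromXml").unmarshal(xml).to("log:pojo-out");
    }
}

For pretty-printed output you could instead call .marshal().jacksonxml(true) in the route, as shown in the pretty-printing section above.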
Chapter 1. Red Hat Advanced Cluster Security for Kubernetes architecture | Chapter 1. Red Hat Advanced Cluster Security for Kubernetes architecture Discover Red Hat Advanced Cluster Security for Kubernetes architecture and concepts. 1.1. Red Hat Advanced Cluster Security for Kubernetes architecture overview Red Hat Advanced Cluster Security for Kubernetes (RHACS) uses a distributed architecture that supports high-scale deployments and is optimized to minimize the impact on the underlying OpenShift Container Platform or Kubernetes nodes. RHACS architecture The following graphic shows the architecture with the StackRox Scanner and Scanner V4 components. Installation of Scanner V4 is optional, but provides additional benefits. You install RHACS as a set of containers in your OpenShift Container Platform or Kubernetes cluster. RHACS includes the following services: Central services you install on one cluster Secured cluster services you install on each cluster you want to secure by RHACS In addition to these primary services, RHACS also interacts with other external components to enhance your clusters' security. Installation differences When you install RHACS on OpenShift Container Platform by using the Operator, RHACS installs a lightweight version of Scanner on every secured cluster. The lightweight Scanner enables the scanning of images in the integrated OpenShift image registry. When you install RHACS on OpenShift Container Platform or Kubernetes by using the Helm install method with the default values, the lightweight version of Scanner is not installed. To install the lightweight Scanner on the secured cluster by using Helm, you must set the scanner.disable=false parameter. You cannot install the lightweight Scanner by using the roxctl installation method. Additional resources External components 1.2. Central services You install Central services on a single cluster. These services include the following components: Central : Central is the RHACS application management interface and services. It handles API interactions and user interface (RHACS Portal) access. You can use the same Central instance to secure multiple OpenShift Container Platform or Kubernetes clusters. Central DB : Central DB is the database for RHACS and handles all data persistence. It is currently based on PostgreSQL 13. Scanner V4 : Beginning with version 4.4, RHACS contains the Scanner V4 vulnerability scanner for scanning container images. Scanner V4 is built on ClairCore , which also powers the Clair scanner. Scanner V4 supports scanning of language and OS-specific image components. For version 4.4, you must use this scanner in conjunction with the StackRox Scanner to provide node and platform scanning capabilities until Scanner V4 support those capabilities. Scanner V4 contains the Indexer, Matcher, and DB components. Scanner V4 Indexer : The Scanner V4 Indexer performs image indexing, previously known as image analysis. Given an image and registry credentials, the Indexer pulls the image from the registry. It finds the base operating system, if it exists, and looks for packages. It stores and outputs an index report, which contains the findings for the given image. Scanner V4 Matcher : The Scanner V4 Matcher performs vulnerability matching. If the Central services Scanner V4 Indexer indexed the image, then the Matcher fetches the index report from the Indexer and matches the report with the vulnerabilities stored in the Scanner V4 database. 
If a Secured Cluster services Scanner V4 Indexer performed the indexing, then the Matcher uses the index report that was sent from that Indexer, and then matches against vulnerabilities. The Matcher also fetches vulnerability data and updates the Scanner V4 database with the latest vulnerability data. The Scanner V4 Matcher outputs a vulnerability report, which contains the final results of an image. Scanner V4 DB : This database stores information for Scanner V4, including all vulnerability data and index reports. A persistent volume claim (PVC) is required for Scanner V4 DB on the cluster where Central is installed. StackRox Scanner : The StackRox Scanner is the default scanner in RHACS. Version 4.4 adds a new scanner, Scanner V4. The StackRox Scanner originates from a fork of the Clair v2 open source scanner. You must continue using this scanner for RHCOS node scanning and platform scanning. Scanner-DB : This database contains data for the StackRox Scanner. RHACS scanners analyze each image layer to determine the base operating system and identify programming language packages and packages that were installed by the operating system package manager. They match the findings against known vulnerabilities from various vulnerability sources. In addition, the StackRox Scanner identifies vulnerabilities in the node's operating system and platform. These capabilities are planned for Scanner V4 in a future release. 1.2.1. Vulnerability data sources Sources for vulnerabilities depend on the scanner that is used in your system. RHACS contains two scanners: StackRox Scanner and Scanner V4. StackRox Scanner is the default scanner and is deprecated beginning with release 4.6. Scanner V4 was introduced in release 4.4 and is the recommended image scanner. 1.2.1.1. StackRox Scanner sources StackRox Scanner uses the following vulnerability sources: Red Hat OVAL v2 Alpine Security Database Data tracked in Amazon Linux Security Center Debian Security Tracker Ubuntu CVE Tracker NVD : This is used for various purposes such as filling in information gaps when vendors do not provide information. For example, Alpine does not provide a description, CVSS score, severity, or published date. Note This product uses the NVD API but is not endorsed or certified by the NVD. Linux manual entries and NVD manual entries : The upstream StackRox project maintains a set of vulnerabilities that might not be discovered due to data formatting from other sources or absence of data. repository-to-cpe.json : Maps RPM repositories to their related CPEs, which is required for matching vulnerabilities for RHEL-based images. 1.2.1.2. Scanner V4 sources Scanner V4 uses the following vulnerability sources: Red Hat VEX Used with release 4.6 and later. This source provides vulnerability data in Vulnerability Exploitability eXchange(VEX) format. RHACS takes advantage of VEX benefits to significantly decrease the time needed for the initial loading of vulnerability data, and the space needed to store vulnerability data. RHACS might list a different number of vulnerabilities when you are scanning with a RHACS version that uses OVAL, such as RHACS version 4.5, and a version that uses VEX, such as version 4.6. For example, RHACS no longer displays vulnerabilities with a status of "under investigation," while these vulnerabilities were included with versions that used OVAL data. 
For more information about Red Hat security data, including information about the use of OVAL, Common Security Advisory Framework Version 2.0 (CSAF), and VEX, see The future of Red Hat security data . Red Hat CVE Map This is used in addition with VEX data for images which appear in the Red Hat Container Catalog . OSV This is used for language-related vulnerabilities, such as Go, Java, JavaScript, Python, and Ruby. This source might provide vulnerability IDs other than CVE IDs for vulnerabilities, such as a GitHub Security Advisory (GHSA) ID. Note RHACS uses the OSV database available at OSV.dev under Apache License 2.0 . NVD This is used for various purposes such as filling in information gaps when vendors do not provide information. For example, Alpine does not provide a description, CVSS score, severity, or published date. Note This product uses the NVD API but is not endorsed or certified by the NVD. Additional vulnerability sources Alpine Security Database Data tracked in Amazon Linux Security Center Debian Security Tracker Oracle OVAL Photon OVAL SUSE OVAL Ubuntu OVAL StackRox : The upstream StackRox project maintains a set of vulnerabilities that might not be discovered due to data formatting from other sources or absence of data. Scanner V4 Indexer sources Scanner V4 indexer uses the following files to index Red Hat containers: repository-to-cpe.json : Maps RPM repositories to their related CPEs, which is required for matching vulnerabilities for RHEL-based images. container-name-repos-map.json : This matches container names to their respective repositories. 1.3. Secured cluster services You install the secured cluster services on each cluster that you want to secure by using the Red Hat Advanced Cluster Security. Secured cluster services include the following components: Sensor : Sensor is the service responsible for analyzing and monitoring the cluster. Sensor listens to the OpenShift Container Platform or Kubernetes API and Collector events to report the current state of the cluster. Sensor also triggers deploy-time and runtime violations based on RHACS policies. In addition, Sensor is responsible for all cluster interactions, such as applying network policies, initiating reprocessing of RHACS policies, and interacting with the Admission controller. Admission controller : The Admission controller prevents users from creating workloads that violate security policies in RHACS. Collector : Collector analyzes and monitors container activity on cluster nodes. It collects container runtime and network activity information and sends the collected data to Sensor. StackRox Scanner : In Kubernetes, the secured cluster services include Scanner-slim as an optional component. However, on OpenShift Container Platform, RHACS installs a Scanner-slim version on each secured cluster to scan images in the OpenShift Container Platform integrated registry and optionally other registries. Scanner-DB : This database contains data for the StackRox Scanner. Scanner V4 : Scanner V4 components are installed on the secured cluster if enabled. Scanner V4 Indexer : The Scanner V4 Indexer performs image indexing, previously known as image analysis. Given an image and registry credentials, the Indexer pulls the image from the registry. It finds the base operating system, if it exists, and looks for packages. It stores and outputs an index report, which contains the findings for the given image. Scanner V4 DB : This component is installed if Scanner V4 is enabled. 
This database stores information for Scanner V4, including index reports. For best performance, configure a persistent volume claim (PVC) for Scanner V4 DB. Note When secured cluster services are installed on the same cluster as Central services and installed in the same namespace, secured cluster services do not deploy Scanner V4 components. Instead, it is assumed that Central services already include a deployment of Scanner V4. 1.4. External components Red Hat Advanced Cluster Security for Kubernetes (RHACS) interacts with the following external components: Third-party systems : You can integrate RHACS with other systems such as CI/CD pipelines, event management (SIEM) systems, logging, email, and more. roxctl : roxctl is a command-line interface (CLI) for running commands on RHACS. Image registries : You can integrate RHACS with various image registries and use RHACS to scan and view images. RHACS automatically configures registry integrations for active images by using the image pull secrets discovered in secured clusters. However, for scanning inactive images, you must manually configure registry integrations. definitions.stackrox.io : RHACS aggregates the data from various vulnerability feeds at the definitions.stackrox.io endpoint and passes this information to Central. The feeds include general, National Vulnerability Database (NVD) data, and distribution-specific data, such as Alpine, Debian, and Ubuntu. collector-modules.stackrox.io : Central reaches out to collector-modules.stackrox.io to obtain supported kernel modules and passes on these modules to Collector. 1.5. Interaction between the services This section explains how RHACS services interact with each other. Table 1.1. RHACS with Scanner V4 Component Direction Component Description Central ⮂ Scanner V4 Indexer Central requests the Indexer to download and index (analyze) given images. This process results in an index report. Scanner V4 Indexer requests mapping files from Central that assist the indexing process. Central ⮂ Scanner V4 Matcher Central requests that the Scanner V4 Matcher match given images to known vulnerabilities. This process results in the final scan result: a vulnerability report. Scanner V4 Matcher requests the latest vulnerabilities from Central. Sensor ⮂ Scanner V4 Indexer SecuredCluster scanning is enabled by default in Red Hat OpenShift environments deployed by using the Operator or when delegated scanning is used. When SecuredCluster scanning is enabled, Sensor requests Scanner V4 to index images. Scanner V4 Indexer requests mapping files from Sensor that assist the indexing process unless Central exists in the same namespace. In that case, Central is contacted instead. Scanner V4 Indexer -> Image Registries The Indexer pulls image metadata from registries to determine the layers of the image, and downloads each previously unindexed layer. Scanner V4 Matcher -> Scanner V4 Indexer Scanner V4 Matcher requests the results of the image indexing, the index report, from the Indexer. It then uses the report to determine relevant vulnerabilities. This interaction occurs only when the image is indexed in the Central cluster. This interaction does not occur when Scanner V4 is matching vulnerabilities for images indexed in secured clusters. Scanner V4 Indexer -> Scanner V4 DB The Indexer stores data related to the indexing results to ensure that image layers are only downloaded and indexed once. This prevents unnecessary network traffic and other resource utilization. 
Scanner V4 Matcher -> Scanner V4 DB Scanner V4 Matcher stores all of its vulnerability data in the database and periodically updates this data. Scanner V4 indexer also queries this data as part of the vulnerability matching process. Sensor ⮂ Central There is bidirectional communication between Central and Sensor. Sensor polls Central periodically to download updates for the sensor bundle configuration. It also sends events for the observed activity for the secured cluster and observed policy violations. Central communicates with Sensor to force reprocessing of all deployments against enabled policies. Collector ⮂ Sensor Collector communicates with Sensor and sends all of the events to the respective Sensor for the cluster. On supported OpenShift Container Platform clusters, Collector analyzes the software packages installed on the nodes and sends them to Sensor so that Scanner can later scan them for vulnerabilities. Collector also requests missing drivers from Sensor. Sensor requests compliance scan results from Collector. Additionally, Sensor receives external Classless Inter-Domain Routing information from Central and pushes it to Collector. Admission controller ⮂ Sensor Sensors send the list of security policies to enforce to Admission controller. Admission controller sends security policy violation alerts to Sensor. Admission controller can also request image scans from Sensor when required. Admission controller ➞ Central It is not common; however, Admission controller can communicate with Central directly if the Central endpoint is known and Sensor is unavailable. Table 1.2. RHACS with the StackRox Scanner Component Direction Interacts with Description Central ⮂ Scanner There is bidirectional communication between Central and Scanner. Central requests image scans from Scanner, and Scanner requests updates to its CVE database from Central. Central ➞ definitions.stackrox.io Central connects to the definitions.stackrox.io endpoint to receive the aggregated vulnerability information. Central ➞ collector-modules.stackrox.io Central downloads supported kernel modules from collector-modules.stackrox.io . Central ➞ Image registries Central queries the image registries to get image metadata. For example, to show Dockerfile instructions in the RHACS portal. Scanner ➞ Image registries Scanner pulls images from the image registry to identify vulnerabilities. Sensor ⮂ Central There is bidirectional communication between Central and Sensor. Sensor polls Central periodically for downloading updates for the sensor bundle configuration. It also sends events for the observed activity for the secured cluster and observed policy violations. Central communicates with Sensor to force reprocessing of all deployments against enabled policies. Sensor ⮂ Scanner Sensor can communicate with the lightweight Scanner installed in the secured cluster. This connection allows Sensor to access registries directly from the secured cluster in scenarios where Central might be unable to access them. Scanner requests updated data from Sensor, Sensor forwards these requests to Central, and Central downloads the requested data from definitions.stackrox.io . Collector ⮂ Sensor Collector communicates with Sensor and sends all of the events to the respective Sensor for the cluster. On supported OpenShift Container Platform clusters, Collector analyzes the software packages installed on the nodes and sends them to Sensor so that Scanner can later scan them for vulnerabilities. Collector also requests missing drivers from Sensor. 
Sensor requests compliance scan results from Collector. Additionally, Sensor receives external Classless Inter-Domain Routing information from Central and pushes it to Collector. Admission controller ⮂ Sensor Sensors send the list of security policies to enforce to Admission controller. Admission controller sends security policy violation alerts to Sensor. Admission controller can also request image scans from Sensor when required. Admission controller ➞ Central It is not common; however, Admission controller can communicate with Central directly if the Central endpoint is known and Sensor is unavailable. 1.6. RHACS connection protocols and default ports Components of RHACS use various default ports and connection protocols. Depending on your system and firewall configuration, you might need to configure your firewall to allow traffic on certain ports. The following table provides default ports and protocols for some connections within RHACS and between RHACS and external components. This is helpful for configuring your firewall to allow inbound and outbound cluster traffic. However, you might need more detailed information in some scenarios. For example, if your firewall is integrated in the cluster router, you might need to specify ports for connections that happen within one cluster but might be on different IP networks. In this scenario, you can use the RHACS network policy YAML files in your OpenShift Container Platform and Kubernetes clusters to determine connections and ports that you might need to configure. Table 1.3. RHACS connections between components Component or external entity Connection type Port Additional information Central and Scanner V4 Indexer gRPC 8443 Central and Sensor on secured cluster TCP/HTTPS gRPC 443 Sensor and Central primarily communicate over a bidirectional gRPC stream, initiated by Sensor to Central's port 443. Central and user (CLI) gRPC HTTPS (with --force-http1 option) 443 For more information about the --force-http1 option, see the roxctl command options. Central and vulnerability feeds HTTPS 443 Connects to definitions.stackrox.io by default. Collector to Sensor gRPC 443 This is a bidirectional gRPC connection initiated by Collector to Sensor's port 443. Collector (Compliance) to Sensor gRPC 8444 If node scanning is enabled on OpenShift Container Platform release 4, this connection is initiated by Sensor to compliance running in the Collector pod. Scanner to Scanner-DB TCP 5432 Scanner V4 Indexer to Central HTTPS 443 Scanner V4 Indexer and Matcher to Scanner V4 DB TCP 5432 Sensor and Admission Controller gRPC 443 This is a bidirectional gRPC stream, initiated by Admission Controller to Sensor's port 443. This occurs in delegated scanning scenarios or in OpenShift Container Platform secured clusters. Additional resources Installing Central with an external database using the Operator method roxctl command options | null | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/architecture/acs-architecture |
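As a practical illustration of the CLI connection listed in the port table (roxctl to Central over port 443), the following sketch shows a typical interaction. The endpoint, API token, and image name are placeholder assumptions; the --force-http1 option is only needed when an intermediate proxy cannot carry gRPC traffic.

# Placeholder endpoint and token; generate an API token in the RHACS portal first
export ROX_ENDPOINT=central.rhacs.example.com:443
export ROX_API_TOKEN=<api-token>

# Verify that the CLI can reach Central over port 443
roxctl -e "$ROX_ENDPOINT" central whoami

# Ask Central to scan an image (image reference is an assumed example)
roxctl -e "$ROX_ENDPOINT" image scan --image registry.example.com/team/app:1.0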
Chapter 12. Configuring alert notifications | Chapter 12. Configuring alert notifications In OpenShift Container Platform, an alert is fired when the conditions defined in an alerting rule are true. An alert provides a notification that a set of circumstances are apparent within a cluster. Firing alerts can be viewed in the Alerting UI in the OpenShift Container Platform web console by default. After an installation, you can configure OpenShift Container Platform to send alert notifications to external systems. 12.1. Sending notifications to external systems In OpenShift Container Platform 4.16, firing alerts can be viewed in the Alerting UI. Alerts are not configured by default to be sent to any notification systems. You can configure OpenShift Container Platform to send alerts to the following receiver types: PagerDuty Webhook Email Slack Microsoft Teams Routing alerts to receivers enables you to send timely notifications to the appropriate teams when failures occur. For example, critical alerts require immediate attention and are typically paged to an individual or a critical response team. Alerts that provide non-critical warning notifications might instead be routed to a ticketing system for non-immediate review. Checking that alerting is operational by using the watchdog alert OpenShift Container Platform monitoring includes a watchdog alert that fires continuously. Alertmanager repeatedly sends watchdog alert notifications to configured notification providers. The provider is usually configured to notify an administrator when it stops receiving the watchdog alert. This mechanism helps you quickly identify any communication issues between Alertmanager and the notification provider. 12.2. Additional resources About OpenShift Container Platform monitoring Configuring alert notifications | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/postinstallation_configuration/configuring-alert-notifications |
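As a hedged sketch of what an external receiver configuration can look like, the following alertmanager.yaml routes critical alerts to a Slack webhook while everything else stays on a default receiver. The webhook URL and channel are placeholders; on OpenShift Container Platform this configuration is typically applied through the Alerting UI or by editing the alertmanager-main secret in the openshift-monitoring namespace, as described in the linked configuration documentation.

global:
  resolve_timeout: 5m
route:
  group_by: ['namespace']
  receiver: Default
  routes:
  - receiver: team-slack
    matchers:
    - severity = critical
receivers:
- name: Default
- name: team-slack
  slack_configs:
  - api_url: 'https://hooks.slack.com/services/T000/B000/XXXX'
    channel: '#cluster-alerts'
    send_resolved: true

Because the watchdog alert fires continuously, it is common to route it to a dead man's switch style receiver so that you are notified when notifications stop arriving.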
Chapter 22. Managing user sessions | Chapter 22. Managing user sessions 22.1. What GDM is The GNOME Display Manager (GDM) is a graphical login program running in the background that runs and manages the X.Org display servers for both local and remote logins. GDM is a replacement for XDM, the X Display Manager. However, GDM is not derived from XDM and does not contain any original XDM code. In addition, there is no support for a graphical configuration tool in GDM, so editing the /etc/gdm/custom.conf configuration file is necessary to change the GDM settings. 22.2. Restarting GDM When you make changes to the system configuration such as setting up the login screen banner message, login screen logo, or login screen background, restart GDM for your changes to take effect. Warning Restarting the gdm service terminates all currently running GNOME sessions of all desktop users who are logged in. This might result in users losing unsaved data. Procedure To restart the GDM service, run the following command: Procedure To display results of the GDM configuration, run the following command: 22.3. Adding an autostart application for all users You can set an application to start automatically when any user logs into the GNOME environment. Procedure Create a .desktop file in the /etc/xdg/autostart/ directory, such as /etc/xdg/autostart/ nautilus .desktop . Enter the following content in the file: Replace Files with the name of the application. Replace /usr/bin/nautilus -n with the command that starts the application. Use the full file path. Optional: Configure the application to start only when a selected GSettings key is enabled. GNOME then runs the application automatically if the key's value is true. If the key's value changes in the running session, GNOME starts or stops the application to match the new value. Add the following line in the .desktop file: Replace org.gnome.desktop.background show-desktop-icons with the GSettings schema and key that the automatic start depends on. Additional resources You can also configure an autostart application for a specific user. Use the Tweaks application, which is available from the gnome-tweaks package. 22.4. Configuring automatic login As an administrator, you can enable automatic login from the Users panel in GNOME Settings , or you can set up automatic login manually in the GDM custom configuration file, as follows. Run the following procedure to set up automatic login for a user john . Procedure Edit the /etc/gdm/custom.conf file, and make sure that the [daemon] section in the file specifies the following: Replace john with the user that you want to be automatically logged in. 22.5. Configuring automatic logout User sessions that have been idle for a specific period of time can be ended automatically. You can set different behavior based on whether the machine is running from a battery or from mains power by setting the corresponding GSettings key, then locking it. Warning Users can potentially lose unsaved data if an idle session is automatically ended. To set automatic logout for a mains-powered machine: Procedure Create a local database for machine-wide settings in the /etc/dconf/db/local.d/00-autologout file: Override the user's setting, and prevent the user from changing it in the /etc/dconf/db/local.d/locks/autologout file: Update the system databases: Users must log out and back in again before the system-wide settings take effect.
The following GSettings keys are of interest: org.gnome.settings-daemon.plugins.power.sleep-inactive-ac-timeout The number of seconds that the computer needs to be inactive before it goes to sleep if it is running from AC power. org.gnome.settings-daemon.plugins.power.sleep-inactive-ac-type What should happen when the timeout has passed if the computer is running from AC power. org.gnome.settings-daemon.plugins.power.sleep-inactive-battery-timeout The number of seconds that the computer needs to be inactive before it goes to sleep if it is running from battery power. org.gnome.settings-daemon.plugins.power.sleep-inactive-battery-type What should happen when the timeout has passed if the computer is running from battery power. If you want to list available values for a key, use the following procedure: Procedure Run the gsettings range command on the required key. For example: 22.6. Setting a default desktop session for all users You can configure a default desktop session that is preselected for all users that have not logged in yet. If a user logs in using a different session than the default, their selection persists to their next login. Procedure Copy the configuration file template: Edit the new /etc/accountsservice/user-templates/standard file. On the Session= gnome line, replace gnome with the session that you want to set as the default. Optional: To configure an exception to the default session for a certain user, follow these steps: Copy the template file to /var/lib/AccountsService/users/ user-name : In the new file, replace variables such as USD{USER} and USD{ID} with the user values. Edit the Session value. 22.7. Setting screen brightness and idle time By creating a local database, you can, for example: Configure the drop in the brightness level Set brightness level Set idle time Configuring the drop in the brightness level To set the drop in the brightness level when the device has been idle for some time: Procedure Create a local database for machine-wide settings in the /etc/dconf/db/local.d/00-power file including these lines: Update the system databases: Users must log out and back in again before the system-wide settings take effect. Setting brightness level To set brightness level: Procedure Create a local database for machine-wide settings in the /etc/dconf/db/local.d/00-power file, as in the following example: Replace 30 with the integer value you want to use. Update the system databases: Users must log out and back in again before the system-wide settings take effect. Setting idle time To set idle time after which the screen is blanked and the default screensaver is displayed: Procedure Create a local database for machine-wide settings in /etc/dconf/db/local.d/00-session , as in the following example: Replace 900 with the integer value you want to use. You must include the uint32 along with the integer value as shown. Update the system databases: Users must log out and back in again before the system-wide settings take effect. 22.8. Locking the screen when the user is idle To enable the screensaver and make the screen lock automatically when the user is idle, follow this procedure: Procedure Create a local database for system-wide settings in the /etc/dconf/db/local.d/00-screensaver file: You must include the uint32 along with the integer key values as shown.
Override the user's setting, and prevent the user from changing it in the /etc/dconf/db/local.d/locks/screensaver file: Update the system databases: Users must log out and back in again before the system-wide settings take effect. 22.9. Screencast recording GNOME Shell features a built-in screencast recorder. The recorder allows users to record desktop or application activity during their session and distribute the recordings as high-resolution video files in the webm format. To make a screencast: Procedure To start the recording, press the Ctrl + Alt + Shift + R shortcut. When the recorder is capturing the screen activity, it displays a red circle in the top-right corner of the screen. To stop the recording, press the Ctrl + Alt + Shift + R shortcut. The red circle in the top-right corner of the screen disappears. Navigate to the ~/Videos directory where you can find the recorded video with a file name that starts with Screencast and includes the date and time of the recording. Note The built-in recorder always captures the entire screen, including all monitors in multi-monitor setups. | [
"systemctl restart gdm.service",
"DCONF_PROFILE=gdm gsettings list-recursively org.gnome.login-screen",
"[Desktop Entry] Type=Application Name= Files Exec= /usr/bin/nautilus -n OnlyShowIn=GNOME; X-GNOME-Autostart-enabled=true",
"AutostartCondition=GSettings org.gnome.desktop.background show-desktop-icons",
"[daemon] AutomaticLoginEnable=True AutomaticLogin= john",
"Set the timeout to 900 seconds when on mains power sleep-inactive-ac-timeout=900 Set action after timeout to be logout when on mains power sleep-inactive-ac-type='logout'",
"Lock automatic logout settings /org/gnome/settings-daemon/plugins/power/sleep-inactive-ac-timeout /org/gnome/settings-daemon/plugins/power/sleep-inactive-ac-type",
"dconf update",
"gsettings range org.gnome.settings-daemon.plugins.power sleep-inactive-ac-type enum 'blank' 'suspend' 'shutdown' 'hibernate' 'interactive' 'nothing' 'logout'",
"cp /usr/share/accountsservice/user-templates/standard /etc/accountsservice/user-templates/standard",
"cp /usr/share/accountsservice/user-templates/standard /var/lib/AccountsService/users/ user-name",
"[org/gnome/settings-daemon/plugins/power] idle-dim=true",
"dconf update",
"[org/gnome/settings-daemon/plugins/power] idle-brightness=30",
"dconf update",
"[org/gnome/desktop/session] idle-delay=uint32 900",
"dconf update",
"Set the lock time out to 180 seconds before the session is considered idle idle-delay=uint32 180 Set this to true to lock the screen when the screensaver activates lock-enabled=true Set the lock timeout to 180 seconds after the screensaver has been activated lock-delay=uint32 180",
"Lock desktop screensaver settings /org/gnome/desktop/session/idle-delay /org/gnome/desktop/screensaver/lock-enabled /org/gnome/desktop/screensaver/lock-delay",
"dconf update"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/using_the_desktop_environment_in_rhel_8/managing-user-sessions_using-the-desktop-environment-in-rhel-8 |
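The autologout example in the commands above covers machines on mains power only. A parallel sketch for battery power uses the corresponding battery keys from section 22.5; the file names 00-autologout-battery and locks/autologout-battery are arbitrary choices here, and the 900-second timeout is only an illustration.

Create /etc/dconf/db/local.d/00-autologout-battery with:

[org/gnome/settings-daemon/plugins/power]
sleep-inactive-battery-timeout=900
sleep-inactive-battery-type='logout'

Lock the keys in /etc/dconf/db/local.d/locks/autologout-battery:

/org/gnome/settings-daemon/plugins/power/sleep-inactive-battery-timeout
/org/gnome/settings-daemon/plugins/power/sleep-inactive-battery-type

Then rebuild the system databases with dconf update; as with the other examples, users must log out and back in before the change applies.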
20.2. Displaying the virsh Version | 20.2. Displaying the virsh Version The virsh version command displays the current libvirt version and information about the local virsh client. For example: The virsh version --daemon command is useful for getting the libvirtd version and package information, including details about the libvirt daemon that is running on the host. | [
"USD virsh version Compiled against library: libvirt 1.2.8 Using library: libvirt 1.2.8 Using API: QEMU 1.2.8 Running hypervisor: QEMU 1.5.3",
"USD virsh version --daemon Compiled against library: libvirt 1.2.8 Using library: libvirt 1.2.8 Using API: QEMU 1.2.8 Running hypervisor: QEMU 1.5.3 Running against daemon: 1.2.8"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-generic_commands-version |
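If a script needs just the library version rather than the full report, one rough approach is to parse the virsh version output; the awk pattern below assumes the English output format shown in the example above.

# Print only the libvirt library version that virsh is using (sketch; assumes English output)
virsh version | awk '/^Using library:/ {print $NF}'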
11.4. Schema Updates | 11.4. Schema Updates See Red Hat JBoss Data Virtualization Development Guide: Reference Material for supported DDL statements. To make schema updates persistent, implementations should be provided for the following methods:
String getViewDefinition(String vdbName, int vdbVersion, Table table);
void setViewDefinition(String vdbName, int vdbVersion, Table table, String viewDefinition);
String getInsteadOfTriggerDefinition(String vdbName, int vdbVersion, Table table, Table.TriggerEvent triggerOperation);
void setInsteadOfTriggerDefinition(String vdbName, int vdbVersion, Table table, Table.TriggerEvent triggerOperation, String triggerDefinition);
boolean isInsteadOfTriggerEnabled(String vdbName, int vdbVersion, Table table, Table.TriggerEvent triggerOperation);
void setInsteadOfTriggerEnabled(String vdbName, int vdbVersion, Table table, Table.TriggerEvent triggerOperation, boolean enabled);
String getProcedureDefinition(String vdbName, int vdbVersion, Procedure procedure);
void setProcedureDefinition(String vdbName, int vdbVersion, Procedure procedure, String procedureDefinition);
LinkedHashMap<String, String> getProperties(String vdbName, int vdbVersion, AbstractMetadataRecord record);
void setProperty(String vdbName, int vdbVersion, AbstractMetadataRecord record, String name, String value); | [
"String getViewDefinition(String vdbName, int vdbVersion, Table table); void setViewDefinition(String vdbName, int vdbVersion, Table table, String viewDefinition); String getInsteadOfTriggerDefinition(String vdbName, int vdbVersion, Table table, Table.TriggerEvent triggerOperation); void setInsteadOfTriggerDefinition(String vdbName, int vdbVersion, Table table, Table.TriggerEvent triggerOperation, String triggerDefinition); boolean isInsteadOfTriggerEnabled(String vdbName, int vdbVersion, Table table, Table.TriggerEvent triggerOperation); void setInsteadOfTriggerEnabled(String vdbName, int vdbVersion, Table table, Table.TriggerEvent triggerOperation, boolean enabled); String getProcedureDefinition(String vdbName, int vdbVersion, Procedure procedure); void setProcedureDefinition(String vdbName, int vdbVersion, Procedure procedure, String procedureDefinition); LinkedHashMap<String, String> getProperties(String vdbName, int vdbVersion, AbstractMetadataRecord record); void setProperty(String vdbName, int vdbVersion, AbstractMetadataRecord record, String name, String value);"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_4_server_development/schema_updates |
Chapter 1. Introduction and goals of this article | Chapter 1. Introduction and goals of this article Now that you have installed Red Hat JBoss Data Virtualization successfully and explored the quick starts, you can look at the Dashboard Builder. Note that this article is aimed at more advanced users and requires that you already have a data source configured. If you are a less advanced user, Red Hat recommends that you read the Red Hat JBoss Data Virtualization Installation Guide and the Administration and Configuration Guide before proceeding to work through this article. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/using_the_dashboard_builder/introduction_and_goals_of_this_article
5.20. byacc | 5.20. byacc 5.20.1. RHBA-2012:0749 - byacc bug fix update An updated byacc package that fixes one bug is now available for Red Hat Enterprise Linux 6. Berkeley Yacc (byacc) is a public domain look-ahead left-to-right (LALR) parser generator used by many programs during their build process. Bug Fix BZ# 743343 Byacc's maximum stack depth was reduced from 10000 to 500 between byacc releases. If deep enough else-if structures were present in source code being compiled with byacc, this could lead to out-of-memory conditions, resulting in YACC Stack Overflow and build failure. This updated release restores the maximum stack depth to its original value, 10000. Note: the underlying LR algorithm still imposes a hard limit on the number of parsable else-if statements. Restoring the maximum stack depth to its original value means source code with deep else-if structures that previously compiled against byacc will again do so. All byacc users should upgrade to this updated package, which fixes this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/byacc |
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your input on our documentation. Tell us how we can make it better. Providing documentation feedback in Jira Use the Create Issue form to provide feedback on the documentation. The Jira issue will be created in the Red Hat OpenStack Platform Jira project, where you can track the progress of your feedback. Ensure that you are logged in to Jira. If you do not have a Jira account, create an account to submit feedback. Click the following link to open the Create Issue page: Create Issue Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/high_availability_for_compute_instances/proc_providing-feedback-on-red-hat-documentation
5.5. Creating a Mirrored LVM Logical Volume in a Cluster | 5.5. Creating a Mirrored LVM Logical Volume in a Cluster Creating a mirrored LVM logical volume in a cluster requires the same commands and procedures as creating a mirrored LVM logical volume on a single node. However, in order to create a mirrored LVM volume in a cluster, the cluster and cluster mirror infrastructure must be running, the cluster must be quorate, and the locking type in the lvm.conf file must be set correctly to enable cluster locking, either directly or by means of the lvmconf command as described in Section 3.1, "Creating LVM Volumes in a Cluster" . The following procedure creates a mirrored LVM volume in a cluster. First, the procedure checks to see whether the cluster services are installed and running, then the procedure creates the mirrored volume. In order to create a mirrored logical volume that is shared by all of the nodes in a cluster, the locking type must be set correctly in the lvm.conf file in every node of the cluster. By default, the locking type is set to local. To change this, execute the following command in each node of the cluster to enable clustered locking: To create a clustered logical volume, the cluster infrastructure must be up and running on every node in the cluster. The following example verifies that the clvmd daemon is running on the node from which it was issued: The following command shows the local view of the cluster status: Ensure that the cmirror and cmirror-kernel packages are installed. The cmirror-kernel package that must be installed depends on the kernel that is running. For example, if the running kernel is kernel-largesmp , it is necessary to have cmirror-kernel-largesmp for the corresponding kernel version. Start the cmirror service. Create the mirror. The first step is creating the physical volumes. The following commands create three physical volumes. Two of the physical volumes will be used for the legs of the mirror, and the third physical volume will contain the mirror log. Create the volume group. This example creates a volume group mirrorvg that consists of the three physical volumes that were created in the previous step. Note that the output of the vgcreate command indicates that the volume group is clustered. You can verify that a volume group is clustered with the vgs command, which will show the volume group's attributes. If a volume group is clustered, it will show a c attribute. Create the mirrored logical volume. This example creates the logical volume mirrorlv from the volume group mirrorvg . This volume has one mirror leg. This example specifies which extents of the physical volume will be used for the logical volume. You can use the lvs command to display the progress of the mirror creation. The following example shows that the mirror is 47% synced, then 91% synced, then 100% synced when the mirror is complete. The completion of the mirror is noted in the system log: You can use the lvs command with the -o +devices option to display the configuration of the mirror, including which devices make up the mirror legs. You can see that the logical volume in this example is composed of two linear images and one log. For RHEL release 4.8 and later, you can use the seg_pe_ranges option of the lvs command to display the data layout. You can use this option to verify that your layout is properly redundant. The output of this command displays PE ranges in the same format that the lvcreate and lvresize commands take as input.
When you create the mirrored volume, you create the clustered_log dlm space, which will contain the dlm logs for all mirrors. Note For information on recovering from the failure of one of the legs of an LVM mirrored volume, see Section 6.3, "Recovering from LVM Mirror Failure" . | [
"/usr/sbin/lvmconf --enable-cluster",
"ps auxw | grep clvmd root 17642 0.0 0.1 32164 1072 ? Ssl Apr06 0:00 clvmd -T20 -t 90",
"cman_tool services Service Name GID LID State Code DLM Lock Space: \"clvmd\" 7 3 run - [1 2 3]",
"service cmirror start Loading clustered mirror log: [ OK ]",
"pvcreate /dev/xvdb1 Physical volume \"/dev/xvdb1\" successfully created pvcreate /dev/xvdb2 Physical volume \"/dev/xvdb2\" successfully created pvcreate /dev/xvdc1 Physical volume \"/dev/xvdc1\" successfully created",
"vgcreate mirrorvg /dev/xvdb1 /dev/xvdb2 /dev/xvdc1 Clustered volume group \"mirrorvg\" successfully created",
"vgs mirrorvg VG #PV #LV #SN Attr VSize VFree mirrorvg 3 0 0 wz--nc 68.97G 68.97G",
"lvcreate -l 1000 -m1 mirrorvg -n mirrorlv /dev/xvdb1:1-1000 /dev/xvdb2:1-1000 /dev/xvdc1:0 Logical volume \"mirrorlv\" created",
"lvs mirrorvg/mirrorlv LV VG Attr LSize Origin Snap% Move Log Copy% Convert mirrorlv mirrorvg mwi-a- 3.91G mirrorlv_mlog 47.00 lvs mirrorvg/mirrorlv LV VG Attr LSize Origin Snap% Move Log Copy% Convert mirrorlv mirrorvg mwi-a- 3.91G mirrorlv_mlog 91.00 lvs mirrorvg/mirrorlv LV VG Attr LSize Origin Snap% Move Log Copy% Convert mirrorlv mirrorvg mwi-a- 3.91G mirrorlv_mlog 100.00",
"May 10 14:52:52 doc-07 [19402]: Monitoring mirror device mirrorvg-mirrorlv for events May 10 14:55:00 doc-07 lvm[19402]: mirrorvg-mirrorlv is now in-sync",
"lvs -a -o +devices LV VG Attr LSize Origin Snap% Move Log Copy% Convert Devices mirrorlv mirrorvg mwi-a- 3.91G mirrorlv_mlog 100.00 mirrorlv_mimage_0(0),mirrorlv_mimage_1(0) [mirrorlv_mimage_0] mirrorvg iwi-ao 3.91G /dev/xvdb1(1) [mirrorlv_mimage_1] mirrorvg iwi-ao 3.91G /dev/xvdb2(1) [mirrorlv_mlog] mirrorvg lwi-ao 4.00M /dev/xvdc1(0)",
"lvs -a -o seg_pe_ranges --segments PE Ranges mirrorlv_mimage_0:0-999 mirrorlv_mimage_1:0-999 /dev/xvdb1:1-1000 /dev/xvdb2:1-1000 /dev/xvdc1:0-0",
"cman_tool services Service Name GID LID State Code Fence Domain: \"default\" 4 2 run - [1 2 3] DLM Lock Space: \"clvmd\" 12 7 run - [1 2 3] DLM Lock Space: \"clustered_log\" 14 9 run - [1 2 3] User: \"usrm::manager\" 10 4 run - [1 2 3]"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/mirvol_create_ex |
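As a quick sanity check before and after creating the mirror, you can confirm that cluster locking is actually enabled and watch the synchronization progress. This is only a sketch; locking_type 3 is the value that lvmconf --enable-cluster normally writes, but verify it against your own lvm.conf rather than taking it as given.

# Confirm that lvm.conf uses clustered locking (usually locking_type = 3)
grep -E '^\s*locking_type' /etc/lvm/lvm.conf

# Watch the Copy% column until the mirror reports 100.00
watch -n 10 "lvs mirrorvg/mirrorlv"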
Chapter 4. PodMonitor [monitoring.coreos.com/v1] | Chapter 4. PodMonitor [monitoring.coreos.com/v1] Description PodMonitor defines monitoring for a set of pods. Type object Required spec 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Specification of desired Pod selection for target discovery by Prometheus. 4.1.1. .spec Description Specification of desired Pod selection for target discovery by Prometheus. Type object Required podMetricsEndpoints selector Property Type Description attachMetadata object Attaches node metadata to discovered targets. Only valid for role: pod. Only valid in Prometheus versions 2.35.0 and newer. jobLabel string The label to use to retrieve the job name from. labelLimit integer Per-scrape limit on number of labels that will be accepted for a sample. Only valid in Prometheus versions 2.27.0 and newer. labelNameLengthLimit integer Per-scrape limit on length of labels name that will be accepted for a sample. Only valid in Prometheus versions 2.27.0 and newer. labelValueLengthLimit integer Per-scrape limit on length of labels value that will be accepted for a sample. Only valid in Prometheus versions 2.27.0 and newer. namespaceSelector object Selector to select which namespaces the Endpoints objects are discovered from. podMetricsEndpoints array A list of endpoints allowed as part of this PodMonitor. podMetricsEndpoints[] object PodMetricsEndpoint defines a scrapeable endpoint of a Kubernetes Pod serving Prometheus metrics. podTargetLabels array (string) PodTargetLabels transfers labels on the Kubernetes Pod onto the target. sampleLimit integer SampleLimit defines per-scrape limit on number of scraped samples that will be accepted. selector object Selector to select Pod objects. targetLimit integer TargetLimit defines a limit on the number of scraped targets that will be accepted. 4.1.2. .spec.attachMetadata Description Attaches node metadata to discovered targets. Only valid for role: pod. Only valid in Prometheus versions 2.35.0 and newer. Type object Property Type Description node boolean When set to true, Prometheus must have permissions to get Nodes. 4.1.3. .spec.namespaceSelector Description Selector to select which namespaces the Endpoints objects are discovered from. Type object Property Type Description any boolean Boolean describing whether all namespaces are selected in contrast to a list restricting them. matchNames array (string) List of namespace names to select from. 4.1.4. .spec.podMetricsEndpoints Description A list of endpoints allowed as part of this PodMonitor. Type array 4.1.5. .spec.podMetricsEndpoints[] Description PodMetricsEndpoint defines a scrapeable endpoint of a Kubernetes Pod serving Prometheus metrics. 
Type object Property Type Description authorization object Authorization section for this endpoint basicAuth object BasicAuth allow an endpoint to authenticate over basic authentication. More info: https://prometheus.io/docs/operating/configuration/#endpoint bearerTokenSecret object Secret to mount to read bearer token for scraping targets. The secret needs to be in the same namespace as the pod monitor and accessible by the Prometheus Operator. enableHttp2 boolean Whether to enable HTTP2. filterRunning boolean Drop pods that are not running. (Failed, Succeeded). Enabled by default. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase followRedirects boolean FollowRedirects configures whether scrape requests follow HTTP 3xx redirects. honorLabels boolean HonorLabels chooses the metric's labels on collisions with target labels. honorTimestamps boolean HonorTimestamps controls whether Prometheus respects the timestamps present in scraped data. interval string Interval at which metrics should be scraped If not specified Prometheus' global scrape interval is used. metricRelabelings array MetricRelabelConfigs to apply to samples before ingestion. metricRelabelings[] object RelabelConfig allows dynamic rewriting of the label set, being applied to samples before ingestion. It defines <metric_relabel_configs> -section of Prometheus configuration. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#metric_relabel_configs oauth2 object OAuth2 for the URL. Only valid in Prometheus versions 2.27.0 and newer. params object Optional HTTP URL parameters params{} array (string) path string HTTP path to scrape for metrics. If empty, Prometheus uses the default value (e.g. /metrics ). port string Name of the pod port this endpoint refers to. Mutually exclusive with targetPort. proxyUrl string ProxyURL eg http://proxyserver:2195 Directs scrapes to proxy through this endpoint. relabelings array RelabelConfigs to apply to samples before scraping. Prometheus Operator automatically adds relabelings for a few standard Kubernetes fields. The original scrape job's name is available via the __tmp_prometheus_job_name label. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config relabelings[] object RelabelConfig allows dynamic rewriting of the label set, being applied to samples before ingestion. It defines <metric_relabel_configs> -section of Prometheus configuration. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#metric_relabel_configs scheme string HTTP scheme to use for scraping. scrapeTimeout string Timeout after which the scrape is ended If not specified, the Prometheus global scrape interval is used. targetPort integer-or-string Deprecated: Use 'port' instead. tlsConfig object TLS configuration to use when scraping the endpoint. 4.1.6. .spec.podMetricsEndpoints[].authorization Description Authorization section for this endpoint Type object Property Type Description credentials object The secret's key that contains the credentials of the request type string Set the authentication type. Defaults to Bearer, Basic will cause an error 4.1.7. .spec.podMetricsEndpoints[].authorization.credentials Description The secret's key that contains the credentials of the request Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 4.1.8. .spec.podMetricsEndpoints[].basicAuth Description BasicAuth allow an endpoint to authenticate over basic authentication. More info: https://prometheus.io/docs/operating/configuration/#endpoint Type object Property Type Description password object The secret in the service monitor namespace that contains the password for authentication. username object The secret in the service monitor namespace that contains the username for authentication. 4.1.9. .spec.podMetricsEndpoints[].basicAuth.password Description The secret in the service monitor namespace that contains the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 4.1.10. .spec.podMetricsEndpoints[].basicAuth.username Description The secret in the service monitor namespace that contains the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 4.1.11. .spec.podMetricsEndpoints[].bearerTokenSecret Description Secret to mount to read bearer token for scraping targets. The secret needs to be in the same namespace as the pod monitor and accessible by the Prometheus Operator. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 4.1.12. .spec.podMetricsEndpoints[].metricRelabelings Description MetricRelabelConfigs to apply to samples before ingestion. Type array 4.1.13. .spec.podMetricsEndpoints[].metricRelabelings[] Description RelabelConfig allows dynamic rewriting of the label set, being applied to samples before ingestion. It defines <metric_relabel_configs> -section of Prometheus configuration. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#metric_relabel_configs Type object Property Type Description action string Action to perform based on regex matching. Default is 'replace'. uppercase and lowercase actions require Prometheus >= 2.36. modulus integer Modulus to take of the hash of the source label values. regex string Regular expression against which the extracted value is matched. Default is '(.*)' replacement string Replacement value against which a regex replace is performed if the regular expression matches. Regex capture groups are available. Default is 'USD1' separator string Separator placed between concatenated source label values. default is ';'. sourceLabels array (string) The source labels select values from existing labels. 
Their content is concatenated using the configured separator and matched against the configured regular expression for the replace, keep, and drop actions. targetLabel string Label to which the resulting value is written in a replace action. It is mandatory for replace actions. Regex capture groups are available. 4.1.14. .spec.podMetricsEndpoints[].oauth2 Description OAuth2 for the URL. Only valid in Prometheus versions 2.27.0 and newer. Type object Required clientId clientSecret tokenUrl Property Type Description clientId object The secret or configmap containing the OAuth2 client id clientSecret object The secret containing the OAuth2 client secret endpointParams object (string) Parameters to append to the token URL scopes array (string) OAuth2 scopes used for the token request tokenUrl string The URL to fetch the token from 4.1.15. .spec.podMetricsEndpoints[].oauth2.clientId Description The secret or configmap containing the OAuth2 client id Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 4.1.16. .spec.podMetricsEndpoints[].oauth2.clientId.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 4.1.17. .spec.podMetricsEndpoints[].oauth2.clientId.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 4.1.18. .spec.podMetricsEndpoints[].oauth2.clientSecret Description The secret containing the OAuth2 client secret Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 4.1.19. .spec.podMetricsEndpoints[].params Description Optional HTTP URL parameters Type object 4.1.20. .spec.podMetricsEndpoints[].relabelings Description RelabelConfigs to apply to samples before scraping. Prometheus Operator automatically adds relabelings for a few standard Kubernetes fields. The original scrape job's name is available via the __tmp_prometheus_job_name label. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config Type array 4.1.21. .spec.podMetricsEndpoints[].relabelings[] Description RelabelConfig allows dynamic rewriting of the label set, being applied to samples before ingestion. It defines <metric_relabel_configs> -section of Prometheus configuration. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#metric_relabel_configs Type object Property Type Description action string Action to perform based on regex matching. Default is 'replace'. 
uppercase and lowercase actions require Prometheus >= 2.36. modulus integer Modulus to take of the hash of the source label values. regex string Regular expression against which the extracted value is matched. Default is '(.*)' replacement string Replacement value against which a regex replace is performed if the regular expression matches. Regex capture groups are available. Default is 'USD1' separator string Separator placed between concatenated source label values. default is ';'. sourceLabels array (string) The source labels select values from existing labels. Their content is concatenated using the configured separator and matched against the configured regular expression for the replace, keep, and drop actions. targetLabel string Label to which the resulting value is written in a replace action. It is mandatory for replace actions. Regex capture groups are available. 4.1.22. .spec.podMetricsEndpoints[].tlsConfig Description TLS configuration to use when scraping the endpoint. Type object Property Type Description ca object Struct containing the CA cert to use for the targets. cert object Struct containing the client cert file for the targets. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. serverName string Used to verify the hostname for the targets. 4.1.23. .spec.podMetricsEndpoints[].tlsConfig.ca Description Struct containing the CA cert to use for the targets. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 4.1.24. .spec.podMetricsEndpoints[].tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 4.1.25. .spec.podMetricsEndpoints[].tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 4.1.26. .spec.podMetricsEndpoints[].tlsConfig.cert Description Struct containing the client cert file for the targets. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 4.1.27. .spec.podMetricsEndpoints[].tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 4.1.28. .spec.podMetricsEndpoints[].tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. 
Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 4.1.29. .spec.podMetricsEndpoints[].tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 4.1.30. .spec.selector Description Selector to select Pod objects. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 4.1.31. .spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 4.1.32. .spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 4.2. API endpoints The following API endpoints are available: /apis/monitoring.coreos.com/v1/podmonitors GET : list objects of kind PodMonitor /apis/monitoring.coreos.com/v1/namespaces/{namespace}/podmonitors DELETE : delete collection of PodMonitor GET : list objects of kind PodMonitor POST : create a PodMonitor /apis/monitoring.coreos.com/v1/namespaces/{namespace}/podmonitors/{name} DELETE : delete a PodMonitor GET : read the specified PodMonitor PATCH : partially update the specified PodMonitor PUT : replace the specified PodMonitor 4.2.1. /apis/monitoring.coreos.com/v1/podmonitors Table 4.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. 
watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind PodMonitor Table 4.2. HTTP responses HTTP code Reponse body 200 - OK PodMonitorList schema 401 - Unauthorized Empty 4.2.2. /apis/monitoring.coreos.com/v1/namespaces/{namespace}/podmonitors Table 4.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 4.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of PodMonitor Table 4.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 4.6. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind PodMonitor Table 4.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. 
limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 4.8. HTTP responses HTTP code Reponse body 200 - OK PodMonitorList schema 401 - Unauthorized Empty HTTP method POST Description create a PodMonitor Table 4.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.10. Body parameters Parameter Type Description body PodMonitor schema Table 4.11. HTTP responses HTTP code Reponse body 200 - OK PodMonitor schema 201 - Created PodMonitor schema 202 - Accepted PodMonitor schema 401 - Unauthorized Empty 4.2.3. /apis/monitoring.coreos.com/v1/namespaces/{namespace}/podmonitors/{name} Table 4.12. Global path parameters Parameter Type Description name string name of the PodMonitor namespace string object name and auth scope, such as for teams and projects Table 4.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a PodMonitor Table 4.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 4.15. Body parameters Parameter Type Description body DeleteOptions schema Table 4.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified PodMonitor Table 4.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 4.18. HTTP responses HTTP code Reponse body 200 - OK PodMonitor schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified PodMonitor Table 4.19. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.20. Body parameters Parameter Type Description body Patch schema Table 4.21. HTTP responses HTTP code Reponse body 200 - OK PodMonitor schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified PodMonitor Table 4.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. 
- Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.23. Body parameters Parameter Type Description body PodMonitor schema Table 4.24. HTTP responses HTTP code Response body 200 - OK PodMonitor schema 201 - Created PodMonitor schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/monitoring_apis/podmonitor-monitoring-coreos-com-v1
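The endpoints above can be exercised directly against the cluster API server with standard tooling. The following is a minimal sketch rather than an excerpt from this reference: it assumes you are logged in with oc as a user allowed to manage PodMonitor objects, and the namespace ns1, the manifest file podmonitor.yaml, the label selector team=frontend, and the page size of 5 are illustrative placeholders. The -k flag skips TLS verification for brevity; prefer pointing curl at your cluster CA.

$ TOKEN=$(oc whoami -t)
$ API=$(oc whoami --show-server)
# Create a PodMonitor from a local manifest (POST), rejecting unknown or duplicate fields
$ curl -ks -X POST \
    -H "Authorization: Bearer $TOKEN" \
    -H "Content-Type: application/yaml" \
    --data-binary @podmonitor.yaml \
    "$API/apis/monitoring.coreos.com/v1/namespaces/ns1/podmonitors?fieldValidation=Strict"
# List PodMonitors in chunks of 5 (GET with limit), filtered by label
$ curl -ks -H "Authorization: Bearer $TOKEN" \
    "$API/apis/monitoring.coreos.com/v1/namespaces/ns1/podmonitors?limit=5&labelSelector=team%3Dfrontend"

If the list response sets metadata.continue, repeat the list call with an additional continue=<token> query parameter to retrieve the next chunk, as described for the limit and continue parameters above.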
Appendix B. Revision History | Appendix B. Revision History Revision History Revision 0.0-1.60 Mon Oct 07 2019 Jiri Herrmann Clarified a Technology Preview note related to OVMF. Revision 0.0-1.59 Mon Feb 04 2019 Lenka Spackova Improved structure of the book. Revision 0.0-1.58 Tue Feb 06 2018 Lenka Spackova Added a missing Technology Preview - OVMF (Virtualization). Added information regarding deprecation of containers using the libvirt-lxc tooling. Revision 0.0-1.57 Mon Oct 30 2017 Lenka Spackova Updated the cifs rebase description (New Features - File Systems). Added information on changes in the ld linker behavior to Deprecated Functionality. Revision 0.0-1.56 Fri Jul 14 2017 Lenka Spackova Added kexec to Technology Previews (Kernel). Revision 0.0-1.55 Tue Feb 14 2017 Lenka Spackova Updated the cifs rebase description (File Systems). Revision 0.0-1.53 Fri Oct 21 2016 Lenka Spackova Moved the i40e and i40evf drivers to fully supported (Networking). Revision 0.0-1.52 Fri Sep 23 2016 Lenka Spackova Added the qla3xxx driver to Deprecated Functionality. Added a change in behavior regarding expanding USDPWD to Known Issues. Revision 0.0-1.50 Mon Sep 19 2016 Lenka Spackova Minor fix to the OPA kernel driver note (Technology Previews). Revision 0.0-1.49 Tue Sep 13 2016 Lenka Spackova Updated Architectures. Added new variables for dracut (New Features - Kernel). Added a note on new oracle profile in Tuned (New Features - Servers and Services). Updated OverlayFS with an XFS-related note (Technology Previews - File Systems). Revision 0.0-1.48 Thu Aug 04 2016 Lenka Spackova The Atomic Host and Containers Release Notes are now separate; added a link to the new document. Revision 0.0-1.47 Mon Aug 01 2016 Lenka Spackova Added a note about limited support for Windows guest virtual machines to Deprecated Functionality. Revision 0.0-1.46 Thu Jul 11 2016 Yoana Ruseva Added a Known Issue for Atomic Host and Containers. Revision 0.0-1.45 Fri Jul 08 2016 Lenka Spackova Fixed version of the qla2xxx driver in Component Versions. Revision 0.0-1.44 Thu Jun 23 2016 Yoana Ruseva Updated the Atomic Host and Containers chapter with the release of Red Hat Enterprise Linux Atomic Host 7.2.5. Revision 0.0-1.43 Wed Jun 22 2016 Lenka Spackova Added two known issues to Installation and Booting. Revision 0.0-1.42 Mon Jun 13 2016 Lenka Spackova Moved "Multiple CPU support in kdump " from Technology Previews to fully supported New Features. Revision 0.0-1.41 Fri Jun 10 2016 Lenka Spackova Added two ReaR known issues. Revision 0.0-1.40 Mon Jun 06 2016 Lenka Spackova Updated Deprecated Functionality. Added a bug fix in ReaR and an OpenSSL known issue, which is valid for all Red Hat Enterprise Linux 7 minor releases. Revision 0.0-1.38 Thu May 19 2016 Yoana Ruseva Updated the New Features and Technology Previews chapters for Atomic Host and Containers. Revision 0.0-1.37 Thu May 12 2016 Lenka Spackova Updated the Atomic Host and Containers chapter with the release of Red Hat Enterprise Linux Atomic Host 7.2.4; two versions of the docker service are now available. Revision 0.0-1.36 Thu Apr 21 2016 Lenka Spackova Updated the Atomic Host and Containers chapter; added names of containers. Revision 0.0-1.35 Wed Apr 13 2016 Lenka Spackova Moved the kpatch utility from Technology Previews to supported New Features, see details in Chapter 10, Kernel . Revision 0.0-1.34 Thu Mar 31 2016 Lenka Spackova Updated the Atomic Host and Containers chapter with the release of Red Hat Enterprise Linux Atomic Host 7.2.3. 
Revision 0.0-1.33 Mon Mar 28 2016 Lenka Spackova Updated Deprecated Functionality, Technology Previews (clufter), New Features (winbindd). Revision 0.0-1.32 Mon Feb 29 2016 Lenka Spackova Removed information about the atomic host deploy sub-command, which is not available yet. Revision 0.0-1.31 Tue Feb 23 2016 Lenka Spackova Updated the Atomic Host and Containers chapter with information on dropping support for v1beta3 API. Revision 0.0-1.30 Tue Feb 16 2016 Lenka Spackova Updated the Atomic Host and Containers chapter with the release of Red Hat Enterprise Linux Atomic Host 7.2.2. Revision 0.0-1.29 Thu Feb 11 2016 Lenka Spackova Corrected the description of the RoCE Express feature for RDMA Technology Preview. Revision 0.0-1.28 Tue Jan 26 2016 Lenka Spackova Removed incorrect information about the Photos application from New Features (Desktop). Revision 0.0-1.27 Tue Jan 19 2016 Lenka Spackova Added a known issue (Installation and Booting). Revision 0.0-1.26 Wed Jan 13 2016 Lenka Spackova Added a bug fix regarding RMRR (Virtualization). Revision 0.0-1.25 Thu Dec 10 2015 Lenka Spackova Added a known issue (Installation and Booting). Revision 0.0-1.22 Wed Dec 02 2015 Lenka Spackova Added several known issues (Virtualization, Authentication). Revision 0.0-1.21 Thu Nov 19 2015 Lenka Spackova Release of the Red Hat Enterprise Linux 7.2 Release Notes. Revision 0.0-1.4 Mon Aug 31 2015 Laura Bailey Release of the Red Hat Enterprise Linux 7.2 Beta Release Notes. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.2_release_notes/appe-7.2_release_notes-revision_history |
Chapter 4. ControlPlaneMachineSet [machine.openshift.io/v1] | Chapter 4. ControlPlaneMachineSet [machine.openshift.io/v1] Description ControlPlaneMachineSet ensures that a specified number of control plane machine replicas are running at any given time. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ControlPlaneMachineSet represents the configuration of the ControlPlaneMachineSet. status object ControlPlaneMachineSetStatus represents the status of the ControlPlaneMachineSet CRD. 4.1.1. .spec Description ControlPlaneMachineSet represents the configuration of the ControlPlaneMachineSet. Type object Required replicas selector template Property Type Description replicas integer Replicas defines how many Control Plane Machines should be created by this ControlPlaneMachineSet. This field is immutable and cannot be changed after cluster installation. The ControlPlaneMachineSet only operates with 3 or 5 node control planes, 3 and 5 are the only valid values for this field. selector object Label selector for Machines. Existing Machines selected by this selector will be the ones affected by this ControlPlaneMachineSet. It must match the template's labels. This field is considered immutable after creation of the resource. state string State defines whether the ControlPlaneMachineSet is Active or Inactive. When Inactive, the ControlPlaneMachineSet will not take any action on the state of the Machines within the cluster. When Active, the ControlPlaneMachineSet will reconcile the Machines and will update the Machines as necessary. Once Active, a ControlPlaneMachineSet cannot be made Inactive. To prevent further action please remove the ControlPlaneMachineSet. strategy object Strategy defines how the ControlPlaneMachineSet will update Machines when it detects a change to the ProviderSpec. template object Template describes the Control Plane Machines that will be created by this ControlPlaneMachineSet. 4.1.2. .spec.selector Description Label selector for Machines. Existing Machines selected by this selector will be the ones affected by this ControlPlaneMachineSet. It must match the template's labels. This field is considered immutable after creation of the resource. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. 
A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 4.1.3. .spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 4.1.4. .spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 4.1.5. .spec.strategy Description Strategy defines how the ControlPlaneMachineSet will update Machines when it detects a change to the ProviderSpec. Type object Property Type Description type string Type defines the type of update strategy that should be used when updating Machines owned by the ControlPlaneMachineSet. Valid values are "RollingUpdate" and "OnDelete". The current default value is "RollingUpdate". 4.1.6. .spec.template Description Template describes the Control Plane Machines that will be created by this ControlPlaneMachineSet. Type object Required machineType Property Type Description machineType string MachineType determines the type of Machines that should be managed by the ControlPlaneMachineSet. Currently, the only valid value is machines_v1beta1_machine_openshift_io. machines_v1beta1_machine_openshift_io object OpenShiftMachineV1Beta1Machine defines the template for creating Machines from the v1beta1.machine.openshift.io API group. 4.1.7. .spec.template.machines_v1beta1_machine_openshift_io Description OpenShiftMachineV1Beta1Machine defines the template for creating Machines from the v1beta1.machine.openshift.io API group. Type object Required metadata spec Property Type Description failureDomains object FailureDomains is the list of failure domains (sometimes called availability zones) in which the ControlPlaneMachineSet should balance the Control Plane Machines. This will be merged into the ProviderSpec given in the template. This field is optional on platforms that do not require placement information. metadata object ObjectMeta is the standard object metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata Labels are required to match the ControlPlaneMachineSet selector. spec object Spec contains the desired configuration of the Control Plane Machines. The ProviderSpec within contains platform specific details for creating the Control Plane Machines. The ProviderSpec should be complete apart from the platform specific failure domain field. This will be overridden when the Machines are created based on the FailureDomains field. 4.1.8. .spec.template.machines_v1beta1_machine_openshift_io.failureDomains Description FailureDomains is the list of failure domains (sometimes called availability zones) in which the ControlPlaneMachineSet should balance the Control Plane Machines. This will be merged into the ProviderSpec given in the template.
This field is optional on platforms that do not require placement information. Type object Required platform Property Type Description aws array AWS configures failure domain information for the AWS platform. aws[] object AWSFailureDomain configures failure domain information for the AWS platform. azure array Azure configures failure domain information for the Azure platform. azure[] object AzureFailureDomain configures failure domain information for the Azure platform. gcp array GCP configures failure domain information for the GCP platform. gcp[] object GCPFailureDomain configures failure domain information for the GCP platform platform string Platform identifies the platform for which the FailureDomain represents. Currently supported values are AWS, Azure, and GCP. 4.1.9. .spec.template.machines_v1beta1_machine_openshift_io.failureDomains.aws Description AWS configures failure domain information for the AWS platform. Type array 4.1.10. .spec.template.machines_v1beta1_machine_openshift_io.failureDomains.aws[] Description AWSFailureDomain configures failure domain information for the AWS platform. Type object Property Type Description placement object Placement configures the placement information for this instance. subnet object Subnet is a reference to the subnet to use for this instance. 4.1.11. .spec.template.machines_v1beta1_machine_openshift_io.failureDomains.aws[].placement Description Placement configures the placement information for this instance. Type object Required availabilityZone Property Type Description availabilityZone string AvailabilityZone is the availability zone of the instance. 4.1.12. .spec.template.machines_v1beta1_machine_openshift_io.failureDomains.aws[].subnet Description Subnet is a reference to the subnet to use for this instance. Type object Required type Property Type Description arn string ARN of resource. filters array Filters is a set of filters used to identify a resource. filters[] object AWSResourceFilter is a filter used to identify an AWS resource id string ID of resource. type string Type determines how the reference will fetch the AWS resource. 4.1.13. .spec.template.machines_v1beta1_machine_openshift_io.failureDomains.aws[].subnet.filters Description Filters is a set of filters used to identify a resource. Type array 4.1.14. .spec.template.machines_v1beta1_machine_openshift_io.failureDomains.aws[].subnet.filters[] Description AWSResourceFilter is a filter used to identify an AWS resource Type object Required name Property Type Description name string Name of the filter. Filter names are case-sensitive. values array (string) Values includes one or more filter values. Filter values are case-sensitive. 4.1.15. .spec.template.machines_v1beta1_machine_openshift_io.failureDomains.azure Description Azure configures failure domain information for the Azure platform. Type array 4.1.16. .spec.template.machines_v1beta1_machine_openshift_io.failureDomains.azure[] Description AzureFailureDomain configures failure domain information for the Azure platform. Type object Required zone Property Type Description zone string Availability Zone for the virtual machine. If nil, the virtual machine should be deployed to no zone. 4.1.17. .spec.template.machines_v1beta1_machine_openshift_io.failureDomains.gcp Description GCP configures failure domain information for the GCP platform. Type array 4.1.18. 
.spec.template.machines_v1beta1_machine_openshift_io.failureDomains.gcp[] Description GCPFailureDomain configures failure domain information for the GCP platform Type object Required zone Property Type Description zone string Zone is the zone in which the GCP machine provider will create the VM. 4.1.19. .spec.template.machines_v1beta1_machine_openshift_io.metadata Description ObjectMeta is the standard object metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata Labels are required to match the ControlPlaneMachineSet selector. Type object Required labels Property Type Description annotations object (string) Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations labels object (string) Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels . This field must contain both the 'machine.openshift.io/cluster-api-machine-role' and 'machine.openshift.io/cluster-api-machine-type' labels, both with a value of 'master'. It must also contain a label with the key 'machine.openshift.io/cluster-api-cluster'. 4.1.20. .spec.template.machines_v1beta1_machine_openshift_io.spec Description Spec contains the desired configuration of the Control Plane Machines. The ProviderSpec within contains platform specific details for creating the Control Plane Machines. The ProviderSpec should be complete apart from the platform specific failure domain field. This will be overridden when the Machines are created based on the FailureDomains field. Type object Property Type Description lifecycleHooks object LifecycleHooks allow users to pause operations on the machine at certain predefined points within the machine lifecycle. metadata object ObjectMeta will autopopulate the Node created. Use this to indicate what labels, annotations, name prefix, etc., should be used when creating the Node. providerID string ProviderID is the identification ID of the machine provided by the provider. This field must match the provider ID as seen on the node object corresponding to this machine. This field is required by higher level consumers of cluster-api. Example use case is cluster autoscaler with cluster-api as provider. Clean-up logic in the autoscaler compares machines to nodes to find out machines at provider which could not get registered as Kubernetes nodes. With cluster-api as a generic out-of-tree provider for autoscaler, this field is required by autoscaler to be able to have a provider view of the list of machines. Another list of nodes is queried from the k8s apiserver and then a comparison is done to find out unregistered machines, which are marked for delete. This field will be set by the actuators and consumed by higher level entities like autoscaler that will be interfacing with cluster-api as generic provider. providerSpec object ProviderSpec details Provider-specific configuration to use during node creation. taints array The list of the taints to be applied to the corresponding Node in additive manner. This list will not overwrite any other taints added to the Node on an ongoing basis by other entities. These taints should be actively reconciled (for example, if you ask the machine controller to apply a taint and then manually remove the taint, the machine controller will put it back), but the machine controller will not remove any taints. taints[] object The node this Taint is attached to has the "effect" on any pod that does not tolerate the Taint.
4.1.21. .spec.template.machines_v1beta1_machine_openshift_io.spec.lifecycleHooks Description LifecycleHooks allow users to pause operations on the machine at certain predefined points within the machine lifecycle. Type object Property Type Description preDrain array PreDrain hooks prevent the machine from being drained. This also blocks further lifecycle events, such as termination. preDrain[] object LifecycleHook represents a single instance of a lifecycle hook preTerminate array PreTerminate hooks prevent the machine from being terminated. PreTerminate hooks are actioned after the Machine has been drained. preTerminate[] object LifecycleHook represents a single instance of a lifecycle hook 4.1.22. .spec.template.machines_v1beta1_machine_openshift_io.spec.lifecycleHooks.preDrain Description PreDrain hooks prevent the machine from being drained. This also blocks further lifecycle events, such as termination. Type array 4.1.23. .spec.template.machines_v1beta1_machine_openshift_io.spec.lifecycleHooks.preDrain[] Description LifecycleHook represents a single instance of a lifecycle hook Type object Required name owner Property Type Description name string Name defines a unique name for the lifecycle hook. The name should be unique and descriptive, ideally 1-3 words, in CamelCase or it may be namespaced, e.g. foo.example.com/CamelCase. Names must be unique and should only be managed by a single entity. owner string Owner defines the owner of the lifecycle hook. This should be descriptive enough so that users can identify who/what is responsible for blocking the lifecycle. This could be the name of a controller (e.g. clusteroperator/etcd) or an administrator managing the hook. 4.1.24. .spec.template.machines_v1beta1_machine_openshift_io.spec.lifecycleHooks.preTerminate Description PreTerminate hooks prevent the machine from being terminated. PreTerminate hooks are actioned after the Machine has been drained. Type array 4.1.25. .spec.template.machines_v1beta1_machine_openshift_io.spec.lifecycleHooks.preTerminate[] Description LifecycleHook represents a single instance of a lifecycle hook Type object Required name owner Property Type Description name string Name defines a unique name for the lifecycle hook. The name should be unique and descriptive, ideally 1-3 words, in CamelCase or it may be namespaced, e.g. foo.example.com/CamelCase. Names must be unique and should only be managed by a single entity. owner string Owner defines the owner of the lifecycle hook. This should be descriptive enough so that users can identify who/what is responsible for blocking the lifecycle. This could be the name of a controller (e.g. clusteroperator/etcd) or an administrator managing the hook. 4.1.26. .spec.template.machines_v1beta1_machine_openshift_io.spec.metadata Description ObjectMeta will autopopulate the Node created. Use this to indicate what labels, annotations, name prefix, etc., should be used when creating the Node. Type object Property Type Description annotations object (string) Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects.
More info: http://kubernetes.io/docs/user-guide/annotations generateName string GenerateName is an optional prefix, used by the server, to generate a unique name ONLY IF the Name field has not been provided. If this field is used, the name returned to the client will be different than the name passed. This value will also be combined with a unique suffix. The provided value has the same validation rules as the Name field, and may be truncated by the length of the suffix required to make the value unique on the server. If this field is specified and the generated name exists, the server will NOT return a 409 - instead, it will either return 201 Created or 500 with Reason ServerTimeout indicating a unique name could not be found in the time allotted, and the client should retry (optionally after the time indicated in the Retry-After header). Applied only if Name is not specified. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency labels object (string) Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels name string Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names namespace string Namespace defines the space within each name must be unique. An empty namespace is equivalent to the "default" namespace, but "default" is the canonical representation. Not all objects are required to be scoped to a namespace - the value of this field for those objects will be empty. Must be a DNS_LABEL. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/namespaces ownerReferences array List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller. ownerReferences[] object OwnerReference contains enough information to let you identify an owning object. An owning object must be in the same namespace as the dependent, or be cluster-scoped, so there is no namespace field. 4.1.27. .spec.template.machines_v1beta1_machine_openshift_io.spec.metadata.ownerReferences Description List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller. Type array 4.1.28. .spec.template.machines_v1beta1_machine_openshift_io.spec.metadata.ownerReferences[] Description OwnerReference contains enough information to let you identify an owning object. An owning object must be in the same namespace as the dependent, or be cluster-scoped, so there is no namespace field. Type object Required apiVersion kind name uid Property Type Description apiVersion string API version of the referent. 
blockOwnerDeletion boolean If true, AND if the owner has the "foregroundDeletion" finalizer, then the owner cannot be deleted from the key-value store until this reference is removed. See https://kubernetes.io/docs/concepts/architecture/garbage-collection/#foreground-deletion for how the garbage collector interacts with this field and enforces the foreground deletion. Defaults to false. To set this field, a user needs "delete" permission of the owner, otherwise 422 (Unprocessable Entity) will be returned. controller boolean If true, this reference points to the managing controller. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: http://kubernetes.io/docs/user-guide/identifiers#names uid string UID of the referent. More info: http://kubernetes.io/docs/user-guide/identifiers#uids 4.1.29. .spec.template.machines_v1beta1_machine_openshift_io.spec.providerSpec Description ProviderSpec details Provider-specific configuration to use during node creation. Type object Property Type Description value `` Value is an inlined, serialized representation of the resource configuration. It is recommended that providers maintain their own versioned API types that should be serialized/deserialized from this field, akin to component config. 4.1.30. .spec.template.machines_v1beta1_machine_openshift_io.spec.taints Description The list of the taints to be applied to the corresponding Node in additive manner. This list will not overwrite any other taints added to the Node on an ongoing basis by other entities. These taints should be actively reconciled (for example, if you ask the machine controller to apply a taint and then manually remove the taint, the machine controller will put it back), but the machine controller will not remove any taints. Type array 4.1.31. .spec.template.machines_v1beta1_machine_openshift_io.spec.taints[] Description The node this Taint is attached to has the "effect" on any pod that does not tolerate the Taint. Type object Required effect key Property Type Description effect string Required. The effect of the taint on pods that do not tolerate the taint. Valid effects are NoSchedule, PreferNoSchedule and NoExecute. key string Required. The taint key to be applied to a node. timeAdded string TimeAdded represents the time at which the taint was added. It is only written for NoExecute taints. value string The taint value corresponding to the taint key. 4.1.32. .status Description ControlPlaneMachineSetStatus represents the status of the ControlPlaneMachineSet CRD. Type object Property Type Description conditions array Conditions represents the observations of the ControlPlaneMachineSet's current state. Known .status.conditions.type are: Available, Degraded and Progressing. conditions[] object Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state.
// Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } observedGeneration integer ObservedGeneration is the most recent generation observed for this ControlPlaneMachineSet. It corresponds to the ControlPlaneMachineSets's generation, which is updated on mutation by the API Server. readyReplicas integer ReadyReplicas is the number of Control Plane Machines created by the ControlPlaneMachineSet controller which are ready. Note that this value may be higher than the desired number of replicas while rolling updates are in-progress. replicas integer Replicas is the number of Control Plane Machines created by the ControlPlaneMachineSet controller. Note that during update operations this value may differ from the desired replica count. unavailableReplicas integer UnavailableReplicas is the number of Control Plane Machines that are still required before the ControlPlaneMachineSet reaches the desired available capacity. When this value is non-zero, the number of ReadyReplicas is less than the desired Replicas. updatedReplicas integer UpdatedReplicas is the number of non-terminated Control Plane Machines created by the ControlPlaneMachineSet controller that have the desired provider spec and are ready. This value is set to 0 when a change is detected to the desired spec. When the update strategy is RollingUpdate, this will also coincide with starting the process of updating the Machines. When the update strategy is OnDelete, this value will remain at 0 until a user deletes an existing replica and its replacement has become ready. 4.1.33. .status.conditions Description Conditions represents the observations of the ControlPlaneMachineSet's current state. Known .status.conditions.type are: Available, Degraded and Progressing. Type array 4.1.34. .status.conditions[] Description Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. 
Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) 4.2. API endpoints The following API endpoints are available: /apis/machine.openshift.io/v1/controlplanemachinesets GET : list objects of kind ControlPlaneMachineSet /apis/machine.openshift.io/v1/namespaces/{namespace}/controlplanemachinesets DELETE : delete collection of ControlPlaneMachineSet GET : list objects of kind ControlPlaneMachineSet POST : create a ControlPlaneMachineSet /apis/machine.openshift.io/v1/namespaces/{namespace}/controlplanemachinesets/{name} DELETE : delete a ControlPlaneMachineSet GET : read the specified ControlPlaneMachineSet PATCH : partially update the specified ControlPlaneMachineSet PUT : replace the specified ControlPlaneMachineSet /apis/machine.openshift.io/v1/namespaces/{namespace}/controlplanemachinesets/{name}/scale GET : read scale of the specified ControlPlaneMachineSet PATCH : partially update scale of the specified ControlPlaneMachineSet PUT : replace scale of the specified ControlPlaneMachineSet /apis/machine.openshift.io/v1/namespaces/{namespace}/controlplanemachinesets/{name}/status GET : read status of the specified ControlPlaneMachineSet PATCH : partially update status of the specified ControlPlaneMachineSet PUT : replace status of the specified ControlPlaneMachineSet 4.2.1. /apis/machine.openshift.io/v1/controlplanemachinesets Table 4.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. 
Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind ControlPlaneMachineSet Table 4.2. HTTP responses HTTP code Reponse body 200 - OK ControlPlaneMachineSetList schema 401 - Unauthorized Empty 4.2.2. /apis/machine.openshift.io/v1/namespaces/{namespace}/controlplanemachinesets Table 4.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 4.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of ControlPlaneMachineSet Table 4.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 4.6. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ControlPlaneMachineSet Table 4.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. 
This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 4.8. HTTP responses HTTP code Reponse body 200 - OK ControlPlaneMachineSetList schema 401 - Unauthorized Empty HTTP method POST Description create a ControlPlaneMachineSet Table 4.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.10. Body parameters Parameter Type Description body ControlPlaneMachineSet schema Table 4.11. HTTP responses HTTP code Reponse body 200 - OK ControlPlaneMachineSet schema 201 - Created ControlPlaneMachineSet schema 202 - Accepted ControlPlaneMachineSet schema 401 - Unauthorized Empty 4.2.3. /apis/machine.openshift.io/v1/namespaces/{namespace}/controlplanemachinesets/{name} Table 4.12. 
Global path parameters Parameter Type Description name string name of the ControlPlaneMachineSet namespace string object name and auth scope, such as for teams and projects Table 4.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a ControlPlaneMachineSet Table 4.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be a non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 4.15. Body parameters Parameter Type Description body DeleteOptions schema Table 4.16. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ControlPlaneMachineSet Table 4.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 4.18. HTTP responses HTTP code Response body 200 - OK ControlPlaneMachineSet schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ControlPlaneMachineSet Table 4.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled.
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.20. Body parameters Parameter Type Description body Patch schema Table 4.21. HTTP responses HTTP code Response body 200 - OK ControlPlaneMachineSet schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ControlPlaneMachineSet Table 4.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.23. Body parameters Parameter Type Description body ControlPlaneMachineSet schema Table 4.24. HTTP responses HTTP code Response body 200 - OK ControlPlaneMachineSet schema 201 - Created ControlPlaneMachineSet schema 401 - Unauthorized Empty 4.2.4. /apis/machine.openshift.io/v1/namespaces/{namespace}/controlplanemachinesets/{name}/scale Table 4.25. Global path parameters Parameter Type Description name string name of the ControlPlaneMachineSet namespace string object name and auth scope, such as for teams and projects Table 4.26.
Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read scale of the specified ControlPlaneMachineSet Table 4.27. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 4.28. HTTP responses HTTP code Response body 200 - OK Scale schema 401 - Unauthorized Empty HTTP method PATCH Description partially update scale of the specified ControlPlaneMachineSet Table 4.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.30. Body parameters Parameter Type Description body Patch schema Table 4.31. HTTP responses HTTP code Response body 200 - OK Scale schema 401 - Unauthorized Empty HTTP method PUT Description replace scale of the specified ControlPlaneMachineSet Table 4.32. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled.
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.33. Body parameters Parameter Type Description body Scale schema Table 4.34. HTTP responses HTTP code Response body 200 - OK Scale schema 201 - Created Scale schema 401 - Unauthorized Empty 4.2.5. /apis/machine.openshift.io/v1/namespaces/{namespace}/controlplanemachinesets/{name}/status Table 4.35. Global path parameters Parameter Type Description name string name of the ControlPlaneMachineSet namespace string object name and auth scope, such as for teams and projects Table 4.36. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified ControlPlaneMachineSet Table 4.37. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 4.38. HTTP responses HTTP code Response body 200 - OK ControlPlaneMachineSet schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ControlPlaneMachineSet Table 4.39. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields.
This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.40. Body parameters Parameter Type Description body Patch schema Table 4.41. HTTP responses HTTP code Response body 200 - OK ControlPlaneMachineSet schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ControlPlaneMachineSet Table 4.42. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.43. Body parameters Parameter Type Description body ControlPlaneMachineSet schema Table 4.44. HTTP responses HTTP code Response body 200 - OK ControlPlaneMachineSet schema 201 - Created ControlPlaneMachineSet schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/machine_apis/controlplanemachineset-machine-openshift-io-v1
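The resource and subresource paths listed above can be exercised with standard oc commands. The following is a minimal sketch; the resource name cluster and the namespace openshift-machine-api are the usual defaults for the cluster-wide control plane machine set, but they are assumptions here and you must adjust them for your cluster.
# Read the ControlPlaneMachineSet through the documented API path.
oc get --raw /apis/machine.openshift.io/v1/namespaces/openshift-machine-api/controlplanemachinesets/cluster
# Partially update the resource (PATCH) with a merge patch; server-side dry run validates the request without persisting it.
oc patch controlplanemachineset.machine.openshift.io cluster -n openshift-machine-api --type merge -p '{"spec":{"replicas":5}}' --dry-run=server
# Because the scale subresource is served, oc scale can also target the resource.
oc scale controlplanemachineset.machine.openshift.io cluster -n openshift-machine-api --replicas=5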
Chapter 3. Building and running JBoss EAP applications on OpenShift Container Platform | Chapter 3. Building and running JBoss EAP applications on OpenShift Container Platform You can follow the source-to-image (S2I) process to build and run a Java application on the JBoss EAP for OpenShift image. 3.1. Prerequisites You have an OpenShift instance installed and operational. 3.2. Preparing OpenShift to deploy an application As a JBoss EAP application developer, you can deploy your applications on OpenShift. In the following example, note that the kitchensink quickstart demonstrates a Jakarta EE web-enabled database application using Jakarta Server Faces, Jakarta Contexts and Dependency Injection, Jakarta Enterprise Beans, Jakarta Persistence, and Jakarta Bean Validation. See the JBoss EAP 8.0 kitchensink quickstart for more information. Deploy your application by following the procedures below. Procedure Log in to your OpenShift instance using the oc login command. Create a project in OpenShift. Create a project using the following command. With a project, you can organize and manage content separately from other groups. For example, for the kitchensink quickstart, create a project named eap-demo using the following command: Optional : Create a keystore and a secret. Note You must create a keystore and a secret if you use any HTTPS-enabled features in your OpenShift project. Use the Java keytool command to generate a keystore: Warning The following commands generate a self-signed certificate, but for production environments, use your own SSL certificate from a verified certificate authority (CA) for SSL-encrypted connections (HTTPS). For example, for the kitchensink quickstart, use the following command to generate a keystore: Use the following command to create a secret from your new keystore: For example, for the kitchensink quickstart, use the following command to create a secret: Additional resources ImageStreams and Pods fail to pull images when Dev Portal generated secret is added in the namespace 3.3. Building application images using source-to-image in OpenShift Follow the source-to-image (S2I) workflow to build reproducible container images for a JBoss EAP application. These generated container images include the application deployment and ready-to-run JBoss EAP servers. The S2I workflow takes source code from a Git repository and injects it into a container that is based on the language and framework you want to use. After the S2I workflow is completed, the source code is compiled, and the application is packaged and deployed to the JBoss EAP server. For more information, see Legacy server provisioning for JBoss EAP S2I . Note In JBoss EAP, you can use S2I images only if you develop your application using Jakarta EE 10. Prerequisites You have an active Red Hat customer account. You have a Registry Service Account. Follow the instructions on the Red Hat Customer Portal to create an authentication token using a registry service account . You have downloaded the OpenShift secret YAML file, which you can use to pull images from Red Hat Ecosystem Catalog. For more information, see OpenShift Secret . You used the oc login command to log in to OpenShift. You have installed Helm. For more information, see Installing Helm . You have installed the repository for the JBoss EAP Helm charts by entering this command in the management CLI: Procedure Create a file named helm.yaml using the following YAML content: Use the following command to deploy your JBoss EAP application on OpenShift.
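The helm.yaml content and the deployment command for this procedure appear in the command listing at the end of this document. As a convenience, the following consolidated sketch shows the S2I build and deployment steps for the helloworld quickstart, with the chart values taken from that listing.
# Add the JBoss EAP Helm chart repository.
helm repo add jboss-eap https://jbossas.github.io/eap-charts/
# Create helm.yaml describing the Git source for the S2I build and the deployment.
cat > helm.yaml <<'EOF'
build:
  uri: https://github.com/jboss-developer/jboss-eap-quickstarts.git
  ref: EAP_8.0.0.GA
  contextDir: helloworld
deploy:
  replicas: 1
EOF
# Deploy the application with the eap8 chart.
helm install helloworld -f helm.yaml jboss-eap/eap8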
Verification Access the application using curl . You get the output Hello World! confirming that the application is deployed. 3.4. Deploying a third-party application on OpenShift You can create application images for OpenShift deployments by using compiled WAR files or EAR archives. Use a Dockerfile to deploy these archives onto a JBoss EAP server, along with an updated and comprehensive runtime stack that includes the operating system, Java, and JBoss EAP components. Note Red Hat does not provide pre-built JBoss EAP server images. 3.4.1. Provisioning JBoss EAP servers with the default configuration You can install and configure a JBoss EAP server with its default configuration on OpenShift by using the builder image. For seamless deployment, follow the procedure to provision the server, transfer the application files, and make any necessary customizations. Prerequisites You have access to the supported Red Hat JBoss Enterprise Application Platform container images. For example: registry.redhat.io/jboss-eap-8/eap8-openjdk17-builder-openshift-rhel8 registry.redhat.io/jboss-eap-8/eap8-openjdk17-runtime-openshift-rhel8 You have podman installed on your system. Use the latest podman version available on supported RHEL. For more information, see Red Hat JBoss Enterprise Application Platform 8.0 Supported Configurations . Procedure Copy the following Dockerfile contents as provided: # Use EAP 8 Builder image to create a JBoss EAP 8 server # with its default configuration FROM registry.redhat.io/jboss-eap-8/eap8-openjdk17-builder-openshift-rhel8:latest AS builder # Set up environment variables for provisioning. 1 ENV GALLEON_PROVISION_FEATURE_PACKS org.jboss.eap:wildfly-ee-galleon-pack,org.jboss.eap.cloud:eap-cloud-galleon-pack ENV GALLEON_PROVISION_LAYERS cloud-default-config # Specify the JBoss EAP version 2 ENV GALLEON_PROVISION_CHANNELS org.jboss.eap.channels:eap-8.0 # Run the assemble script to provision the server. RUN /usr/local/s2i/assemble # Copy the JBoss EAP 8 server from the builder image to the runtime image. FROM registry.redhat.io/jboss-eap-8/eap8-openjdk17-runtime-openshift-rhel8:latest AS runtime # Set appropriate ownership and permissions. COPY --from=builder --chown=jboss:root USDJBOSS_HOME USDJBOSS_HOME # Steps to add: # (1) COPY the WAR/EAR to USDJBOSS_HOME/standalone/deployments # with the jboss:root user. For example: # COPY --chown=jboss:root my-app.war USDJBOSS_HOME/standalone/deployments 3 # (2) (optional) server modification. You can modify EAP server configuration: # # * invoke management operations. For example # # RUN USDJBOSS_HOME/bin/jboss-cli.sh --commands="embed-server,/system-property=Foo:add(value=Bar)" # # First operation must always be embed-server. # # * copy a modified standalone.xml in USDJBOSS_HOME/standalone/configuration/ # for example # # COPY --chown=jboss:root standalone.xml USDJBOSS_HOME/standalone/configuration # Ensure appropriate permissions for the copied files. RUN chmod -R ug+rwX USDJBOSS_HOME 1 You can specify the MAVEN_MIRROR_URL environment variable, which is used by the JBoss EAP Maven plugin internally within the image. For more information, see Artifact repository mirrors . 2 You do not need to update this Dockerfile for any of the minor releases. Specify the JBoss EAP version in the GALLEON_PROVISION_CHANNELS environment variable if you want to use a specific version. For more information, see Environment variables . 3 Modify the copied Dockerfile to include your WAR file in the container.
For example: COPY --chown=jboss:root <my-app.war> USDJBOSS_HOME/standalone/deployments Replace <my-app.war> with the path to the web archive that you want to add to the image. Build the application image using podman: USD podman build -t my-app . After the command is executed, the my-app container image is ready to be deployed on OpenShift. Upload your container image to one of the following locations: Your internal registry that is accessible from OpenShift. The OpenShift registry by pushing the image directly from the machine where it was built. For more information, see How to push a container image into the image registry in RHOCP 4 . When deploying your image from the registry, use deployment strategies such as Helm charts, an Operator, or a Deployment. Select your preferred method and use either the full image URL or ImageStreams based on your requirements. For more information, see Using Helm charts to build and deploy JBoss EAP applications on OpenShift . 3.5. Using OpenID Connect to secure JBoss EAP applications on OpenShift Use the JBoss EAP native OpenID Connect (OIDC) client to delegate authentication using an external OpenID provider. OIDC is an identity layer that enables clients, such as JBoss EAP, to verify a user's identity based on the authentication performed by an OpenID provider. The elytron-oidc-client subsystem and elytron-oidc-client Galleon layer provide a native OIDC client in JBoss EAP to connect with OpenID providers. JBoss EAP automatically creates a virtual security domain for your application, based on your OpenID provider configurations. You can configure the elytron-oidc-client subsystem in three different ways: Adding an oidc.json file to your deployment. Running a CLI script to configure the elytron-oidc-client subsystem. Defining environment variables to configure an elytron-oidc-client subsystem on startup of the JBoss EAP server on OpenShift. Note This procedure explains how you can configure an elytron-oidc-client subsystem using environment variables to secure an application with OIDC. 3.5.1. OpenID Connect configuration in JBoss EAP When you secure your applications using an OpenID provider, you do not need to configure any security domain resources locally. The elytron-oidc-client subsystem provides a native OpenID Connect (OIDC) client in JBoss EAP to connect with OpenID providers. JBoss EAP automatically creates a virtual security domain for your application, based on your OpenID provider configurations. Important Use the OIDC client with Red Hat build of Keycloak. You can use other OpenID providers if they can be configured to use access tokens that are JSON Web Tokens (JWTs) and can be configured to use the RS256, RS384, RS512, ES256, ES384, or ES512 signature algorithm. To enable the use of OIDC, you can configure either the elytron-oidc-client subsystem or an application itself. JBoss EAP activates OIDC authentication as follows: When you deploy an application to JBoss EAP, the elytron-oidc-client subsystem scans the deployment to detect if the OIDC authentication mechanism is required. If the subsystem detects OIDC configuration for the deployment in either the elytron-oidc-client subsystem or the application deployment descriptor, JBoss EAP enables the OIDC authentication mechanism for the application. If the subsystem detects OIDC configuration in both places, the configuration in the elytron-oidc-client subsystem secure-deployment attribute takes precedence over the configuration in the application deployment descriptor.
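As a quick illustration of the environment-variable approach, the following sketch creates an OpenShift secret carrying the OIDC_* variables documented in Table 3.1 later in this chapter. The secret name, the Keycloak route hostname, and the registration user credentials are placeholders, not values defined by this procedure; the registration user must be allowed to register clients with the OpenID provider, and the secret is exposed to the server pods through the Helm chart's envFrom/secretRef mechanism shown in the deployment procedures below.
# Create a secret with the minimum variables for dynamic client registration.
oc create secret generic oidc-secret \
  --from-literal=OIDC_PROVIDER_NAME=rh-sso \
  --from-literal=OIDC_PROVIDER_URL=https://<keycloak-route>/realms/JBossEAP \
  --from-literal=OIDC_USER_NAME=<registration-user> \
  --from-literal=OIDC_USER_PASSWORD=<registration-password>
# Reference this secret from the Helm chart values (deploy.envFrom.secretRef) so the
# variables reach the JBoss EAP server on startup.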
Additional resources OpenID Connect specification OpenID Connect Libraries Securing applications using OpenID Connect with Red Hat build of Keycloak 3.5.2. Creating an application secured with OpenID Connect For creating a web-application, create a Maven project with the required dependencies and the directory structure. Create a web application containing a servlet that returns the user name obtained from the logged-in user's principal and attributes. If there is no logged-in user, the servlet returns the text "NO AUTHENTICATED USER". Prerequisites You have installed Maven. For more information, see Downloading Apache Maven . Procedure Set up a Maven project using the mvn command. The command creates the directory structure for the project and the pom.xml configuration file. Syntax Example Navigate to the application root directory: Syntax Example Replace the content of the generated pom.xml file with the following text: <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>com.example.app</groupId> <artifactId>simple-webapp-example</artifactId> <version>1.0-SNAPSHOT</version> <packaging>war</packaging> <name>simple-webapp-example Maven Webapp</name> <!-- FIXME change it to the project's website --> <url>http://www.example.com</url> <properties> <maven.compiler.source>11</maven.compiler.source> <maven.compiler.target>11</maven.compiler.target> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <version.maven.war.plugin>3.3.2</version.maven.war.plugin> <version.eap.plugin>1.0.0.Final-redhat-00014</version.eap.plugin> <version.server>8.0.0.GA-redhat-00009</version.server> <version.bom.ee>USD{version.server}</version.bom.ee> </properties> <repositories> <repository> <id>jboss</id> <url>https://maven.repository.redhat.com/ga/</url> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>jboss</id> <url>https://maven.repository.redhat.com/ga/</url> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> <dependencyManagement> <dependencies> <dependency> <groupId>org.jboss.bom</groupId> <artifactId>jboss-eap-ee-with-tools</artifactId> <version>USD{version.bom.ee}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <dependencies> <dependency> <groupId>jakarta.servlet</groupId> <artifactId>jakarta.servlet-api</artifactId> <scope>provided</scope> </dependency> <dependency> <groupId>org.wildfly.security</groupId> <artifactId>wildfly-elytron-auth-server</artifactId> </dependency> </dependencies> <build> <finalName>USD{project.artifactId}</finalName> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-war-plugin</artifactId> <version>USD{version.maven.war.plugin}</version> </plugin> <plugin> <groupId>org.jboss.eap.plugins</groupId> <artifactId>eap-maven-plugin</artifactId> <version>USD{version.eap.plugin}</version> <configuration> <channels> <channel> <manifest> <groupId>org.jboss.eap.channels</groupId> <artifactId>eap-8.0</artifactId> </manifest> </channel> </channels> <feature-packs> <feature-pack> <location>org.jboss.eap:wildfly-ee-galleon-pack</location> </feature-pack> <feature-pack> <location>org.jboss.eap.cloud:eap-cloud-galleon-pack</location> </feature-pack> 
</feature-packs> <layers> <layer>cloud-server</layer> <layer>elytron-oidc-client</layer> </layers> <galleon-options> <jboss-fork-embedded>true</jboss-fork-embedded> </galleon-options> </configuration> <executions> <execution> <goals> <goal>package</goal> </goals> </execution> </executions> </plugin> </plugins> </build> </project> Note <version.eap.plugin>1.0.0.Final-redhat-00014</version.eap.plugin> is an example version of JBoss EAP Maven plugin. See the Red Hat Maven repository for more information on JBoss EAP Maven plugin releases: https://maven.repository.redhat.com/earlyaccess/all/org/jboss/eap/plugins/eap-maven-plugin/ . Create a directory to store the Java files. Syntax Example Navigate to the new directory. Syntax Example Create a file SecuredServlet.java with the following content: package com.example.app; import java.io.IOException; import java.io.PrintWriter; import java.security.Principal; import java.util.ArrayList; import java.util.Collection; import java.util.Iterator; import java.util.List; import java.util.Set; import jakarta.servlet.ServletException; import jakarta.servlet.annotation.WebServlet; import jakarta.servlet.http.HttpServlet; import jakarta.servlet.http.HttpServletRequest; import jakarta.servlet.http.HttpServletResponse; import org.wildfly.security.auth.server.SecurityDomain; import org.wildfly.security.auth.server.SecurityIdentity; import org.wildfly.security.authz.Attributes; import org.wildfly.security.authz.Attributes.Entry; /** * A simple secured HTTP servlet. It returns the user name and * attributes obtained from the logged-in user's Principal. If * there is no logged-in user, it returns the text * "NO AUTHENTICATED USER". */ @WebServlet("/secured") public class SecuredServlet extends HttpServlet { @Override protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException { try (PrintWriter writer = resp.getWriter()) { Principal user = req.getUserPrincipal(); SecurityIdentity identity = SecurityDomain.getCurrent().getCurrentSecurityIdentity(); Attributes identityAttributes = identity.getAttributes(); Set <String> keys = identityAttributes.keySet(); String attributes = "<ul>"; for (String attr : keys) { attributes += "<li> " + attr + " : " + identityAttributes.get(attr).toString() + "</li>"; } attributes+="</ul>"; writer.println("<html>"); writer.println(" <head><title>Secured Servlet</title></head>"); writer.println(" <body>"); writer.println(" <h1>Secured Servlet</h1>"); writer.println(" <p>"); writer.print(" Current Principal '"); writer.print(user != null ? user.getName() : "NO AUTHENTICATED USER"); writer.print("'"); writer.print(user != null ? "\n" + attributes : ""); writer.println(" </p>"); writer.println(" </body>"); writer.println("</html>"); } } } Configure the application's web.xml to protect the application resources. 
Example <?xml version="1.0" encoding="UTF-8"?> <web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd" metadata-complete="false"> <security-constraint> <web-resource-collection> <web-resource-name>secured</web-resource-name> <url-pattern>/secured</url-pattern> </web-resource-collection> <auth-constraint> <role-name>Users</role-name> </auth-constraint> </security-constraint> <login-config> <auth-method>OIDC</auth-method> </login-config> <security-role> <role-name>*</role-name> </security-role> </web-app> In this example, only the users with the role Users can access the application. 3.5.3. Deploying the application on OpenShift As a JBoss EAP application developer, you can deploy applications that use the OpenID Connect subsystem on OpenShift and integrate them with a Red Hat build of Keycloak server. Deploy your application by following the procedures below. Prerequisites You have configured the Red Hat build of Keycloak server in your OpenShift cluster with the following configuration. For more information, see Red Hat build of Keycloak Operator . Create a realm called JBossEAP . Create a user called demo . Set a password for the user called demo . Toggle Temporary to OFF and click Set Password . In the confirmation prompt, click Set password . Create a role called Users . Assign the role Users to the user demo . In the Client Roles field, select the realm-management client that you configured for JBoss EAP. Assign the role create-client to the client realm-management . Procedure Deploy your application code to a Git repository. Create a secret containing the OIDC configuration. Create a file named oidc-secret.yaml using the following content: Use the following command to create a secret: Create a file named helm.yaml using the following content: Deploy the example application using JBoss EAP Helm charts: Add the environment variables to the oidc-secret.yaml file to configure the OIDC provider URL and application hostname. The value for OIDC_HOSTNAME_HTTPS corresponds to the following output: The value for OIDC_PROVIDER_URL corresponds to the following output: A route discovery attempt is made if OIDC_HOSTNAME_HTTP(S) is not set. To enable route discovery, the OpenShift user must be able to list the route resources. For example, to create and associate the routeview role with the view user, use the following oc command: Update the secret with oc apply -f oidc-secret.yaml . Deploy the application again to ensure OpenShift uses the new environment variables: Verification In your browser, navigate to https://<eap-oidc-test-app route>/ . You will be redirected to the Red Hat build of Keycloak login page. Access the secured servlet. Log in with the following credentials: A page appears that contains the Principal ID. 3.5.4. Environment variable based configuration Use these environment variables to configure JBoss EAP OIDC support on the OpenShift image. Table 3.1. Environment Variables Environment variable Legacy SSO environment variable Description Required Default Value OIDC_PROVIDER_NAME NONE. When SSO_* environment variables are used, the "rh-sso" name is set internally. You must set this to rh-sso when using the OIDC_PROVIDER_NAME variable. Yes OIDC_PROVIDER_URL USDSSO_URL/realms/USDSSO_REALM The URL of the provider. Yes OIDC_USER_NAME SSO_USERNAME Dynamic client registration requires the username to receive a token.
Yes OIDC_USER_PASSWORD SSO_PASSWORD Dynamic client registration requires the user password to receive a token. Yes OIDC_SECURE_DEPLOYMENT_SECRET SSO_SECRET It is known to both the secure-deployment subsystem and the authentication server client. No OIDC_SECURE_DEPLOYMENT_PRINCIPAL_ATTRIBUTE SSO_PRINCIPAL_ATTRIBUTE Configure the value of the principal name. No Defaults to sub (ID token) for rh-sso . Typical value: preferred_username. OIDC_SECURE_DEPLOYMENT_ENABLE_CORS SSO_ENABLE_CORS Enable CORS for Single Sign-On applications. No Defaults to False . OIDC_SECURE_DEPLOYMENT_BEARER_ONLY SSO_BEARER_ONLY Deployment that accepts only bearer tokens and does not support logging in. No Defaults to False . OIDC_PROVIDER_SSL_REQUIRED NONE Defaults to external, meaning that private and local addresses do not require https. No External OIDC_PROVIDER_TRUSTSTORE SSO_TRUSTSTORE Specify the realm truststore file. If it is not set, the adapter cannot use a trust manager when processing HTTPS requests. No OIDC_PROVIDER_TRUSTSTORE_DIR SSO_TRUSTSTORE_DIR Directory to find the realm truststore . If it is not set, the adapter cannot use a trust manager when processing HTTPS requests. No OIDC_PROVIDER_TRUSTSTORE_PASSWORD SSO_TRUSTSTORE_PASSWORD Specify the realm truststore password. If it is not set, the adapter cannot use a trust manager when processing HTTPS requests. No OIDC_PROVIDER_TRUSTSTORE_CERTIFICATE_ALIAS SSO_TRUSTSTORE_CERTIFICATE_ALIAS Specify the realm truststore alias. It is required to interact with the authentication server to register a client. No OIDC_DISABLE_SSL_CERTIFICATE_VALIDATION SSO_DISABLE_SSL_CERTIFICATE_VALIDATION Disable certificate validation when interacting with the authentication server to register a client. No OIDC_HOSTNAME_HTTP HOSTNAME_HTTP Hostname used for unsecured routes. No Routes are discovered. OIDC_HOSTNAME_HTTPS HOSTNAME_HTTPS Hostname used for secured routes. No Secured routes are discovered. NONE SSO_PUBLIC_KEY Public key of the Single Sign-On realm. This option is not used; the public key is automatically retrieved by the OIDC subsystem. No If set, a warning is displayed that this option is being ignored. 3.6. Securing applications by using SAML The Security Assertion Markup Language (SAML) serves as a data format and protocol that enables the exchange of authentication and authorization information between two parties. These two parties typically include an identity provider and a service provider. This information takes the form of SAML tokens containing assertions. Identity providers issue these SAML tokens to subjects to enable these subjects to authenticate with service providers. Subjects can reuse SAML tokens with multiple service providers, which enables browser-based Single Sign-On in SAML v2. You can secure web applications by using the Galleon layers that the Keycloak SAML adapter feature pack provides. For information about the Keycloak SAML adapter feature pack, see Keycloak SAML adapter feature pack for securing applications by using SAML . 3.6.1. Keycloak SAML adapter feature pack for securing applications by using SAML The Keycloak SAML adapter Galleon pack is a Galleon feature pack that includes the keycloak-saml layer. Use the keycloak-saml layer in the feature pack to install the necessary modules and configurations in JBoss EAP. These modules and configurations are required if you want to use Red Hat build of Keycloak as an identity provider for Single Sign-On (SSO) when using SAML.
When using the keycloak-saml SAML adapter Galleon layer for source-to-image (S2I), you can optionally use the SAML client feature that enables automatic registration with an Identity Service Provider (IDP), such as Red Hat build of Keycloak. 3.6.2. Configuring Red Hat build of Keycloak as SAML provider for OpenShift Red Hat build of Keycloak is an identity and access management provider for securing web applications with Single Sign-On (SSO). It supports OpenID Connect, which is an extension to OAuth 2.0, and SAML. The following procedure outlines the essential steps needed to secure applications with SAML. For more information, see Red Hat build of Keycloak documentation . Prerequisites You have administrator access to Red Hat build of Keycloak. Red Hat build of Keycloak is running. For more information, see Red Hat build of Keycloak Operator . You used the oc login command to log in to OpenShift. Procedure Create a Single Sign-On realm, users, and roles . Generate the key and certificate by using the Java keytool command: keytool -genkeypair -alias saml-app -storetype PKCS12 -keyalg RSA -keysize 2048 -keystore keystore.p12 -storepass password -dname "CN=saml-basic-auth,OU=EAP SAML Client,O=Red Hat EAP QE,L=MB,S=Milan,C=IT" -ext ku:c=dig,keyEncipherment -validity 365 Import the keystore into a Java KeyStore (JKS) format: keytool -importkeystore -deststorepass password -destkeystore keystore.jks -srckeystore keystore.p12 -srcstoretype PKCS12 -srcstorepass password Create a secret in OpenShift for the keystore: USD oc create secret generic saml-app-secret --from-file=keystore.jks=./keystore.jks --type=opaque Note These steps are only necessary when using the automatic SAML client registration feature. When JBoss EAP registers a new SAML client into Red Hat build of Keycloak as the client-admin user, JBoss EAP must store the certificate of the new SAML client in the Red Hat build of Keycloak client configuration. This allows JBoss EAP to retain the private key while only storing the public certificate in Red Hat build of Keycloak, which establishes an authenticated client for communication with Red Hat build of Keycloak. 3.6.3. Creating an application secured with SAML You can enhance web application security by using the Security Assertion Markup Language (SAML). SAML provides effective user authentication and authorization, along with Single Sign-On (SSO) capabilities, making it a dependable choice for strengthening web applications. Prerequisites You have installed Maven. For more information, see Downloading Apache Maven . Procedure Set up a Maven project by using the mvn command. This command creates both the directory structure for the project and the pom.xml configuration file. 
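The Syntax and Example blocks for this step are not reproduced here. The following is a minimal sketch of the project setup, assuming the standard maven-archetype-webapp archetype; the Maven coordinates match those used in the pom.xml that follows.
# Generate the project skeleton (groupId and artifactId match the pom.xml below).
mvn archetype:generate \
  -DgroupId=com.example.app \
  -DartifactId=simple-webapp-example \
  -DarchetypeGroupId=org.apache.maven.archetypes \
  -DarchetypeArtifactId=maven-archetype-webapp \
  -DinteractiveMode=false
# Navigate to the application root directory.
cd simple-webapp-example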
Syntax Example Navigate to the application root directory: Syntax Example Replace the content of the generated pom.xml file with the following text: <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>com.example.app</groupId> <artifactId>simple-webapp-example</artifactId> <version>1.0-SNAPSHOT</version> <packaging>war</packaging> <name>simple-webapp-example Maven Webapp</name> <!-- FIXME change it to the project's website --> <url>http://www.example.com</url> <properties> <maven.compiler.source>11</maven.compiler.source> <maven.compiler.target>11</maven.compiler.target> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <version.maven.war.plugin>3.3.2</version.maven.war.plugin> <version.eap.plugin>1.0.0.Final-redhat-00014</version.eap.plugin> <version.server>8.0.0.GA-redhat-00009</version.server> <version.bom.ee>USD{version.server}</version.bom.ee> </properties> <repositories> <repository> <id>jboss</id> <url>https://maven.repository.redhat.com/ga/</url> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>jboss</id> <url>https://maven.repository.redhat.com/ga/</url> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> <dependencyManagement> <dependencies> <dependency> <groupId>org.jboss.bom</groupId> <artifactId>jboss-eap-ee-with-tools</artifactId> <version>USD{version.bom.ee}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <dependencies> <dependency> <groupId>jakarta.servlet</groupId> <artifactId>jakarta.servlet-api</artifactId> <scope>provided</scope> </dependency> <dependency> <groupId>org.wildfly.security</groupId> <artifactId>wildfly-elytron-auth-server</artifactId> </dependency> </dependencies> <build> <finalName>USD{project.artifactId}</finalName> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-war-plugin</artifactId> <version>USD{version.maven.war.plugin}</version> </plugin> <plugin> <groupId>org.jboss.eap.plugins</groupId> <artifactId>eap-maven-plugin</artifactId> <version>USD{version.eap.plugin}</version> <configuration> <channels> <channel> <manifest> <groupId>org.jboss.eap.channels</groupId> <artifactId>eap-8.0</artifactId> </manifest> </channel> </channels> <feature-packs> <feature-pack> <location>org.jboss.eap:wildfly-ee-galleon-pack</location> </feature-pack> <feature-pack> <location>org.jboss.eap.cloud:eap-cloud-galleon-pack</location> </feature-pack> <feature-pack> <location>org.keycloak:keycloak-saml-adapter-galleon-pack</location> </feature-pack> </feature-packs> <layers> <layer>cloud-server</layer> <layer>keycloak-saml</layer> </layers> <galleon-options> <jboss-fork-embedded>true</jboss-fork-embedded> </galleon-options> </configuration> <executions> <execution> <goals> <goal>package</goal> </goals> </execution> </executions> </plugin> </plugins> </build> </project> Note <version.eap.plugin>1.0.0.Final-redhat-00014</version.eap.plugin> is an example version of JBoss EAP Maven plugin. See the Red Hat Maven repository for more information on JBoss EAP Maven plugin releases: https://maven.repository.redhat.com/earlyaccess/all/org/jboss/eap/plugins/eap-maven-plugin/ . Create a directory to store the Java files. 
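A short sketch of the directory step, assuming the standard Maven source layout and the com.example.app package used by the servlet below.
# Create the Java package directory and change into it.
mkdir -p src/main/java/com/example/app
cd src/main/java/com/example/app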
Syntax Example Navigate to the new directory. Syntax Example Create a file named SecuredServlet.java that contains the following settings: package com.example.app; import java.io.IOException; import java.io.PrintWriter; import java.security.Principal; import java.util.Set; import jakarta.servlet.ServletException; import jakarta.servlet.annotation.WebServlet; import jakarta.servlet.http.HttpServlet; import jakarta.servlet.http.HttpServletRequest; import jakarta.servlet.http.HttpServletResponse; import org.wildfly.security.auth.server.SecurityDomain; import org.wildfly.security.auth.server.SecurityIdentity; import org.wildfly.security.authz.Attributes; /** * A simple secured HTTP servlet. It returns the user name and * attributes obtained from the logged-in user's Principal. If * there is no logged-in user, it returns the text * "NO AUTHENTICATED USER". */ @WebServlet("/secured") public class SecuredServlet extends HttpServlet { @Override protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException { try (PrintWriter writer = resp.getWriter()) { Principal user = req.getUserPrincipal(); SecurityIdentity identity = SecurityDomain.getCurrent().getCurrentSecurityIdentity(); Attributes identityAttributes = identity.getAttributes(); Set <String> keys = identityAttributes.keySet(); String attributes = "<ul>"; for (String attr : keys) { attributes += "<li> " + attr + " : " + identityAttributes.get(attr).toString() + "</li>"; } attributes+="</ul>"; writer.println("<html>"); writer.println(" <head><title>Secured Servlet</title></head>"); writer.println(" <body>"); writer.println(" <h1>Secured Servlet</h1>"); writer.println(" <p>"); writer.print(" Current Principal '"); writer.print(user != null ? user.getName() : "NO AUTHENTICATED USER"); writer.print("'"); writer.print(user != null ? "\n" + attributes : ""); writer.println(" </p>"); writer.println(" </body>"); writer.println("</html>"); } } } Create the directory structure for the web.xml file: Configure the application's web.xml file to protect the application resources. Example <?xml version="1.0" encoding="UTF-8"?> <web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd" metadata-complete="false"> <security-constraint> <web-resource-collection> <web-resource-name>secured</web-resource-name> <url-pattern>/secured</url-pattern> </web-resource-collection> <auth-constraint> <role-name>user</role-name> </auth-constraint> </security-constraint> <login-config> <auth-method>KEYCLOAK-SAML</auth-method> </login-config> <security-role> <role-name>user</role-name> </security-role> </web-app> In this example, only users with the user role can access the application. Verification After creating the application, commit it to a remote Git repository. Create a Git repository such as https://github.com/your-username/simple-webapp-example . For more information about remote repositories and Git, see Getting started with Git - About remote repositories . From the root folder of the application, run the following Git commands: git init -b main git add pom.xml src git commit -m "First commit" git remote add origin [email protected]:your-username/simple-webapp-example.git git remote -v git push -u origin main These steps commit your application to the remote repository, making it accessible online. 3.6.4. 
Building and deploying a SAML-secured application on OpenShift You can build and deploy your application secured with SAML on OpenShift by using the JBoss EAP and Single Sign-On (SSO) Galleon layers. Prerequisites You have installed Helm. For more information, see Installing Helm . You have created the SAML application project and made it accessible in a Git repository. You have installed the repository for the JBoss EAP Helm charts by entering this command in the management CLI: USD helm repo add jboss-eap https://jbossas.github.io/eap-charts/ Procedure Deploy your application code to the Git Repository. Create an OpenShift secret containing the required environment variables: apiVersion: v1 kind: Secret metadata: name: saml-secret type: Opaque stringData: SSO_REALM: "saml-basic-auth" SSO_USERNAME: "client-admin" SSO_PASSWORD: "client-admin" SSO_SAML_CERTIFICATE_NAME: "saml-app" SSO_SAML_KEYSTORE: "keystore.jks" SSO_SAML_KEYSTORE_PASSWORD: "password" SSO_SAML_KEYSTORE_DIR: "/etc/sso-saml-secret-volume" SSO_SAML_LOGOUT_PAGE: "/simple-webapp-example" SSO_DISABLE_SSL_CERTIFICATE_VALIDATION: "true" Save the provided YAML content to a file, such as saml-secret.yaml . Apply the saved YAML file by using the following command: oc apply -f saml-secret.yaml Create a file named helm.yaml that contains the following settings: build: uri: [WEB ADDRESS TO YOUR GIT REPOSITORY] deploy: volumes: - name: saml-keystore-volume secret: secretName: saml-app-secret volumeMounts: - name: saml-keystore-volume mountPath: /etc/sso-saml-secret-volume readOnly: true envFrom: - secretRef: name: saml-secret Note Specify the web address in the HTTP format, such as http://www.redhat.com . If you are using a maven mirror, specify the web address as follows: build: uri: [WEB ADDRESS TO YOUR GIT REPOSITORY] env: - name: "MAVEN_MIRROR_URL" value: "http://..." Deploy the example application by using JBoss EAP Helm charts: USD helm install saml-app -f helm.yaml jboss-eap/eap8 Add the environment variables to the saml-secret.yaml file to configure the Keycloak server URL and application route: stringData: ... HOSTNAME_HTTPS: <saml-app application route> SSO_URL: https://<host of the Keycloak server> Replace <saml-app application route> and <host of the Keycloak server> with the appropriate values. The value for HOSTNAME_HTTPS corresponds to the following output: echo USD(oc get route saml-app --template='{{ .spec.host }}') The value for SSO_URL corresponds to the following output: echo https://USD(oc get route sso --template='{{ .spec.host }}') Note If you cannot use this command, use oc get routes to list the available routes and select the route to your Red Hat build of Keycloak instance. Update the secret with oc apply -f saml-secret.yaml . Verification Deploy the application again to ensure that OpenShift uses the new environment variables: USD oc rollout restart deploy saml-app In a browser, navigate to the application URL. For example, https://<saml-app route>/simple-webapp-example . You are redirected to the Red Hat build of Keycloak login page. To get the web address, use the following command to access the secured servlet: echo https://USD(oc get route saml-app --template='{{ .spec.host }}')/simple-webapp-example/secured Log in with the following credentials: username: demo password: demo A page is displayed that contains the Principal ID. Your application is now secured using SAML. 3.6.5. 
Creating a SSO realm, users, and roles You can configure a Single Sign-On (SSO) realm, define user roles, and manage access control in your Red Hat build of Keycloak environment. These actions enable you to enhance security and simplify user access management, ensuring a streamlined authentication experience. This is essential for optimizing your SSO setup and improving user authentication processes. Prerequisites You have administrator access to Red Hat build of Keycloak. Red Hat build of Keycloak is running. Procedure Log in to the Red Hat build of Keycloak admin console using the URL: https://<SSO route>/ . Create a realm in Red Hat build of Keycloak; for example, saml-basic-auth . You can subsequently use this realm to create the required users, roles, and a client. For more information, see Creating a realm . Create a role within the saml-basic-auth realm. For example, user . For more information, see Creating a realm role . Create a user. For example, demo . For more information, see Creating users . Create a password for the user. For example, demo . Ensure that the password is not temporary. For more information, see Setting a password for a user . Assign the user role to the demo user for login access. For more information, see Assigning role mappings . Create a user. For example, client-admin . To create the SAML client in the Keycloak server when the JBoss EAP server starts, you can use the client-admin user, which requires additional privileges. For more information, see Creating users . Create a password for the user. For example, client-admin . Ensure that the password is not temporary. For more information, see Setting a password for a user . Select realm-management from the Client Roles drop down list. Assign the roles create-client , manage-clients , and manage-realm to the client-admin user. For more information, see Assigning role mappings . 3.6.6. Environment variables for configuring the SAML subsystem You can optimize the integration of the Keycloak server within your environment by understanding and using the following variables. This ensures a seamless and secure Keycloak setup for your application. Table 3.2. Environment variables Environment variable Description Required APPLICATION_NAME Used as a prefix for the client name, derived from the deployment name. Optional HOSTNAME_HTTP Custom hostname for the HTTP OpenShift route. If not set, route discovery is performed. Optional HOSTNAME_HTTPS Custom hostname for the HTTPS OpenShift route. If not set, route discovery is performed. Optional SSO_DISABLE_SSL_CERTIFICATE_VALIDATION Choose between true or false to enable or disable validation of the Keycloak server certificate. Consider setting this to true when the SSO server generates a self-signed certificate. Optional SSO_PASSWORD The password for a user with privileges to interact with the Keycloak realm and to create and register clients. For example, client-admin . True SSO_REALM The SSO realm for associating application clients. For example, saml-basic-auth . Optional SSO_SAML_CERTIFICATE_NAME Alias of private key and certificate in the SAML client keystore. For example, saml-app . True SSO_SAML_KEYSTORE Name of the keystore file. For example, keystore.jks . True SSO_SAML_KEYSTORE_DIR Directory that contains the client keystore. For example, /etc/sso-saml-secret-volume . True SSO_SAML_KEYSTORE_PASSWORD Keystore password. For example, password . True SSO_SAML_LOGOUT_PAGE Logout page. For example, simple-webapp-example . 
True SSO_SAML_VALIDATE_SIGNATURE Specify true to validate the signature or false to not validate it. True by default. Optional SSO_SECURITY_DOMAIN The name of the security domain used to secure undertow and ejb subsystems. The default is keycloak . Optional SSO_TRUSTSTORE The truststore file name containing the server certificate. Optional SSO_TRUSTSTORE_CERTIFICATE_ALIAS Certificate alias within the truststore. Optional SSO_TRUSTSTORE_DIR Directory that contains the truststore. Optional SSO_TRUSTSTORE_PASSWORD The password for the truststore and certificate . For example, mykeystorepass . Optional SSO_URL The URL for the SSO server. For example, <SSO server accessible route> . True SSO_USERNAME The username of a user with privileges to interact with the Keycloak realm and to create and register clients. For example, client-admin . True 3.6.7. Route discovery in JBoss EAP server You can optimize your server's performance and simplify route configurations in your specified namespace by using the route discovery feature in the JBoss EAP server. This feature is essential for improving server efficiency to provide a smoother operational experience, particularly when the HOSTNAME_HTTPS variable is unspecified. If the HOSTNAME_HTTPS variable is not set, the JBoss EAP server automatically attempts route discovery. To enable route discovery, you must create the required permissions: oc create role routeview --verb=list --resource=route -n YOUR_NAME_SPACE oc policy add-role-to-user routeview system:serviceaccount:YOUR_NAME_SPACE:default --role-namespace=YOUR_NAME_SPACE -n YOUR_NAME_SPACE 3.6.8. Additional resources Red Hat build of Keycloak Server Administration Guide 3.7. Additional resources OpenShift Container Platform Getting Started | [
"oc new-project <project_name>",
"oc new-project eap-demo",
"keytool -genkey -keyalg RSA -alias <alias_name> -keystore <keystore_filename.jks> -validity 360 -keysize 2048",
"keytool -genkey -keyalg RSA -alias eapdemo-selfsigned -keystore keystore.jks -validity 360 -keysize 2048",
"oc create secret generic <secret_name> --from-file= <keystore_filename.jks>",
"oc create secret generic eap-app-secret --from-file=keystore.jks",
"helm repo add jboss-eap https://jbossas.github.io/eap-charts/",
"build: uri: https://github.com/jboss-developer/jboss-eap-quickstarts.git ref: EAP_8.0.0.GA contextDir: helloworld deploy: replicas: 1",
"helm install helloworld -f helm.yaml jboss-eap/eap8",
"curl https://USD(oc get route helloworld --template='{{ .spec.host }}')/HelloWorld",
"Use EAP 8 Builder image to create a JBoss EAP 8 server with its default configuration FROM registry.redhat.io/jboss-eap-8/eap8-openjdk17-builder-openshift-rhel8:latest AS builder Set up environment variables for provisioning. 1 ENV GALLEON_PROVISION_FEATURE_PACKS org.jboss.eap:wildfly-ee-galleon-pack,org.jboss.eap.cloud:eap-cloud-galleon-pack ENV GALLEON_PROVISION_LAYERS cloud-default-config Specify the JBoss EAP version 2 ENV GALLEON_PROVISION_CHANNELS org.jboss.eap.channels:eap-8.0 Run the assemble script to provision the server. RUN /usr/local/s2i/assemble Copy the JBoss EAP 8 server from the builder image to the runtime image. FROM registry.redhat.io/jboss-eap-8/eap8-openjdk17-runtime-openshift-rhel8:latest AS runtime Set appropriate ownership and permissions. COPY --from=builder --chown=jboss:root USDJBOSS_HOME USDJBOSS_HOME Steps to add: (1) COPY the WAR/EAR to USDJBOSS_HOME/standalone/deployments with the jboss:root user. For example: COPY --chown=jboss:root my-app.war USDJBOSS_HOME/standalone/deployments 3 (2) (optional) server modification. You can modify EAP server configuration: # * invoke management operations. For example # RUN USDJBOSS_HOME/bin/jboss-cli.sh --commands=\"embed-server,/system-property=Foo:add(value=Bar)\" # First operation must always be embed-server. # * copy a modified standalone.xml in USDJBOSS_HOME/standalone/configuration/ for example # COPY --chown=jboss:root standalone.xml USDJBOSS_HOME/standalone/configuration Ensure appropriate permissions for the copied files. RUN chmod -R ug+rwX USDJBOSS_HOME",
"COPY --chown=jboss:root <my-app.war> USDJBOSS_HOME/standalone/deployments",
"podman build -t my-app .",
"mvn archetype:generate -DgroupId= USD{group-to-which-your-application-belongs} -DartifactId= USD{name-of-your-application} -DarchetypeGroupId=org.apache.maven.archetypes -DarchetypeArtifactId=maven-archetype-webapp -DinteractiveMode=false",
"mvn archetype:generate -DgroupId=com.example.app -DartifactId=simple-webapp-example -DarchetypeGroupId=org.apache.maven.archetypes -DarchetypeArtifactId=maven-archetype-webapp -DinteractiveMode=false",
"cd <name-of-your-application>",
"cd simple-webapp-example",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\"> <modelVersion>4.0.0</modelVersion> <groupId>com.example.app</groupId> <artifactId>simple-webapp-example</artifactId> <version>1.0-SNAPSHOT</version> <packaging>war</packaging> <name>simple-webapp-example Maven Webapp</name> <!-- FIXME change it to the project's website --> <url>http://www.example.com</url> <properties> <maven.compiler.source>11</maven.compiler.source> <maven.compiler.target>11</maven.compiler.target> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <version.maven.war.plugin>3.3.2</version.maven.war.plugin> <version.eap.plugin>1.0.0.Final-redhat-00014</version.eap.plugin> <version.server>8.0.0.GA-redhat-00009</version.server> <version.bom.ee>USD{version.server}</version.bom.ee> </properties> <repositories> <repository> <id>jboss</id> <url>https://maven.repository.redhat.com/ga/</url> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>jboss</id> <url>https://maven.repository.redhat.com/ga/</url> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> <dependencyManagement> <dependencies> <dependency> <groupId>org.jboss.bom</groupId> <artifactId>jboss-eap-ee-with-tools</artifactId> <version>USD{version.bom.ee}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <dependencies> <dependency> <groupId>jakarta.servlet</groupId> <artifactId>jakarta.servlet-api</artifactId> <scope>provided</scope> </dependency> <dependency> <groupId>org.wildfly.security</groupId> <artifactId>wildfly-elytron-auth-server</artifactId> </dependency> </dependencies> <build> <finalName>USD{project.artifactId}</finalName> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-war-plugin</artifactId> <version>USD{version.maven.war.plugin}</version> </plugin> <plugin> <groupId>org.jboss.eap.plugins</groupId> <artifactId>eap-maven-plugin</artifactId> <version>USD{version.eap.plugin}</version> <configuration> <channels> <channel> <manifest> <groupId>org.jboss.eap.channels</groupId> <artifactId>eap-8.0</artifactId> </manifest> </channel> </channels> <feature-packs> <feature-pack> <location>org.jboss.eap:wildfly-ee-galleon-pack</location> </feature-pack> <feature-pack> <location>org.jboss.eap.cloud:eap-cloud-galleon-pack</location> </feature-pack> </feature-packs> <layers> <layer>cloud-server</layer> <layer>elytron-oidc-client</layer> </layers> <galleon-options> <jboss-fork-embedded>true</jboss-fork-embedded> </galleon-options> </configuration> <executions> <execution> <goals> <goal>package</goal> </goals> </execution> </executions> </plugin> </plugins> </build> </project>",
"mkdir -p src/main/java/<path_based_on_artifactID>",
"mkdir -p src/main/java/com/example/app",
"cd src/main/java/<path_based_on_artifactID>",
"cd src/main/java/com/example/app",
"package com.example.app; import java.io.IOException; import java.io.PrintWriter; import java.security.Principal; import java.util.ArrayList; import java.util.Collection; import java.util.Iterator; import java.util.List; import java.util.Set; import jakarta.servlet.ServletException; import jakarta.servlet.annotation.WebServlet; import jakarta.servlet.http.HttpServlet; import jakarta.servlet.http.HttpServletRequest; import jakarta.servlet.http.HttpServletResponse; import org.wildfly.security.auth.server.SecurityDomain; import org.wildfly.security.auth.server.SecurityIdentity; import org.wildfly.security.authz.Attributes; import org.wildfly.security.authz.Attributes.Entry; /** * A simple secured HTTP servlet. It returns the user name and * attributes obtained from the logged-in user's Principal. If * there is no logged-in user, it returns the text * \"NO AUTHENTICATED USER\". */ @WebServlet(\"/secured\") public class SecuredServlet extends HttpServlet { @Override protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException { try (PrintWriter writer = resp.getWriter()) { Principal user = req.getUserPrincipal(); SecurityIdentity identity = SecurityDomain.getCurrent().getCurrentSecurityIdentity(); Attributes identityAttributes = identity.getAttributes(); Set <String> keys = identityAttributes.keySet(); String attributes = \"<ul>\"; for (String attr : keys) { attributes += \"<li> \" + attr + \" : \" + identityAttributes.get(attr).toString() + \"</li>\"; } attributes+=\"</ul>\"; writer.println(\"<html>\"); writer.println(\" <head><title>Secured Servlet</title></head>\"); writer.println(\" <body>\"); writer.println(\" <h1>Secured Servlet</h1>\"); writer.println(\" <p>\"); writer.print(\" Current Principal '\"); writer.print(user != null ? user.getName() : \"NO AUTHENTICATED USER\"); writer.print(\"'\"); writer.print(user != null ? \"\\n\" + attributes : \"\"); writer.println(\" </p>\"); writer.println(\" </body>\"); writer.println(\"</html>\"); } } }",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <web-app version=\"2.5\" xmlns=\"http://java.sun.com/xml/ns/javaee\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd\" metadata-complete=\"false\"> <security-constraint> <web-resource-collection> <web-resource-name>secured</web-resource-name> <url-pattern>/secured</url-pattern> </web-resource-collection> <auth-constraint> <role-name>Users</role-name> </auth-constraint> </security-constraint> <login-config> <auth-method>OIDC</auth-method> </login-config> <security-role> <role-name>*</role-name> </security-role> </web-app>",
"apiVersion: v1 kind: Secret metadata: name: oidc-secret type: Opaque stringData: OIDC_PROVIDER_NAME: rh-sso OIDC_USER_NAME: demo OIDC_USER_PASSWORD: demo OIDC_SECURE_DEPLOYMENT_SECRET: mysecret",
"oc apply -f oidc-secret.yaml",
"build: uri: [URL TO YOUR GIT REPOSITORY] deploy: envFrom: - secretRef: name: oidc-secret",
"helm install eap-oidc-test-app -f helm.yaml jboss-eap/eap8",
"yaml stringData: OIDC_HOSTNAME_HTTPS: <host of the application> OIDC_PROVIDER_URL: https://<host of the SSO provider>/realms/JBossEAP",
"echo USD(oc get route eap-oidc-test-app --template='{{ .spec.host }}')",
"echo https://USD(oc get route sso --template='{{ .spec.host }}')/realms/JBossEAP",
"oc create role <role-name> --verb=list --resource=route oc adm policy add-role-to-user <role-name> <user-name> --role-namespace=<your namespace>",
"oc rollout restart deploy eap-oidc-test-app",
"username: demo password: demo",
"keytool -genkeypair -alias saml-app -storetype PKCS12 -keyalg RSA -keysize 2048 -keystore keystore.p12 -storepass password -dname \"CN=saml-basic-auth,OU=EAP SAML Client,O=Red Hat EAP QE,L=MB,S=Milan,C=IT\" -ext ku:c=dig,keyEncipherment -validity 365",
"keytool -importkeystore -deststorepass password -destkeystore keystore.jks -srckeystore keystore.p12 -srcstoretype PKCS12 -srcstorepass password",
"oc create secret generic saml-app-secret --from-file=keystore.jks=./keystore.jks --type=opaque",
"mvn archetype:generate -DgroupId= USD{group-to-which-your-application-belongs} -DartifactId= USD{name-of-your-application} -DarchetypeGroupId=org.apache.maven.archetypes -DarchetypeArtifactId=maven-archetype-webapp -DinteractiveMode=false",
"mvn archetype:generate -DgroupId=com.example.app -DartifactId=simple-webapp-example -DarchetypeGroupId=org.apache.maven.archetypes -DarchetypeArtifactId=maven-archetype-webapp -DinteractiveMode=false",
"cd <name-of-your-application>",
"cd simple-webapp-example",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\"> <modelVersion>4.0.0</modelVersion> <groupId>com.example.app</groupId> <artifactId>simple-webapp-example</artifactId> <version>1.0-SNAPSHOT</version> <packaging>war</packaging> <name>simple-webapp-example Maven Webapp</name> <!-- FIXME change it to the project's website --> <url>http://www.example.com</url> <properties> <maven.compiler.source>11</maven.compiler.source> <maven.compiler.target>11</maven.compiler.target> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <version.maven.war.plugin>3.3.2</version.maven.war.plugin> <version.eap.plugin>1.0.0.Final-redhat-00014</version.eap.plugin> <version.server>8.0.0.GA-redhat-00009</version.server> <version.bom.ee>USD{version.server}</version.bom.ee> </properties> <repositories> <repository> <id>jboss</id> <url>https://maven.repository.redhat.com/ga/</url> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>jboss</id> <url>https://maven.repository.redhat.com/ga/</url> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> <dependencyManagement> <dependencies> <dependency> <groupId>org.jboss.bom</groupId> <artifactId>jboss-eap-ee-with-tools</artifactId> <version>USD{version.bom.ee}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <dependencies> <dependency> <groupId>jakarta.servlet</groupId> <artifactId>jakarta.servlet-api</artifactId> <scope>provided</scope> </dependency> <dependency> <groupId>org.wildfly.security</groupId> <artifactId>wildfly-elytron-auth-server</artifactId> </dependency> </dependencies> <build> <finalName>USD{project.artifactId}</finalName> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-war-plugin</artifactId> <version>USD{version.maven.war.plugin}</version> </plugin> <plugin> <groupId>org.jboss.eap.plugins</groupId> <artifactId>eap-maven-plugin</artifactId> <version>USD{version.eap.plugin}</version> <configuration> <channels> <channel> <manifest> <groupId>org.jboss.eap.channels</groupId> <artifactId>eap-8.0</artifactId> </manifest> </channel> </channels> <feature-packs> <feature-pack> <location>org.jboss.eap:wildfly-ee-galleon-pack</location> </feature-pack> <feature-pack> <location>org.jboss.eap.cloud:eap-cloud-galleon-pack</location> </feature-pack> <feature-pack> <location>org.keycloak:keycloak-saml-adapter-galleon-pack</location> </feature-pack> </feature-packs> <layers> <layer>cloud-server</layer> <layer>keycloak-saml</layer> </layers> <galleon-options> <jboss-fork-embedded>true</jboss-fork-embedded> </galleon-options> </configuration> <executions> <execution> <goals> <goal>package</goal> </goals> </execution> </executions> </plugin> </plugins> </build> </project>",
"mkdir -p src/main/java/<path_based_on_artifactID>",
"mkdir -p src/main/java/com/example/app",
"cd src/main/java/<path_based_on_artifactID>",
"cd src/main/java/com/example/app",
"package com.example.app; import java.io.IOException; import java.io.PrintWriter; import java.security.Principal; import java.util.Set; import jakarta.servlet.ServletException; import jakarta.servlet.annotation.WebServlet; import jakarta.servlet.http.HttpServlet; import jakarta.servlet.http.HttpServletRequest; import jakarta.servlet.http.HttpServletResponse; import org.wildfly.security.auth.server.SecurityDomain; import org.wildfly.security.auth.server.SecurityIdentity; import org.wildfly.security.authz.Attributes; /** * A simple secured HTTP servlet. It returns the user name and * attributes obtained from the logged-in user's Principal. If * there is no logged-in user, it returns the text * \"NO AUTHENTICATED USER\". */ @WebServlet(\"/secured\") public class SecuredServlet extends HttpServlet { @Override protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException { try (PrintWriter writer = resp.getWriter()) { Principal user = req.getUserPrincipal(); SecurityIdentity identity = SecurityDomain.getCurrent().getCurrentSecurityIdentity(); Attributes identityAttributes = identity.getAttributes(); Set <String> keys = identityAttributes.keySet(); String attributes = \"<ul>\"; for (String attr : keys) { attributes += \"<li> \" + attr + \" : \" + identityAttributes.get(attr).toString() + \"</li>\"; } attributes+=\"</ul>\"; writer.println(\"<html>\"); writer.println(\" <head><title>Secured Servlet</title></head>\"); writer.println(\" <body>\"); writer.println(\" <h1>Secured Servlet</h1>\"); writer.println(\" <p>\"); writer.print(\" Current Principal '\"); writer.print(user != null ? user.getName() : \"NO AUTHENTICATED USER\"); writer.print(\"'\"); writer.print(user != null ? \"\\n\" + attributes : \"\"); writer.println(\" </p>\"); writer.println(\" </body>\"); writer.println(\"</html>\"); } } }",
"mkdir -p src/main/webapp/WEB-INF cd src/main/webapp/WEB-INF",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <web-app version=\"2.5\" xmlns=\"http://java.sun.com/xml/ns/javaee\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd\" metadata-complete=\"false\"> <security-constraint> <web-resource-collection> <web-resource-name>secured</web-resource-name> <url-pattern>/secured</url-pattern> </web-resource-collection> <auth-constraint> <role-name>user</role-name> </auth-constraint> </security-constraint> <login-config> <auth-method>KEYCLOAK-SAML</auth-method> </login-config> <security-role> <role-name>user</role-name> </security-role> </web-app>",
"git init -b main git add pom.xml src git commit -m \"First commit\" git remote add origin [email protected]:your-username/simple-webapp-example.git git remote -v git push -u origin main",
"helm repo add jboss-eap https://jbossas.github.io/eap-charts/",
"apiVersion: v1 kind: Secret metadata: name: saml-secret type: Opaque stringData: SSO_REALM: \"saml-basic-auth\" SSO_USERNAME: \"client-admin\" SSO_PASSWORD: \"client-admin\" SSO_SAML_CERTIFICATE_NAME: \"saml-app\" SSO_SAML_KEYSTORE: \"keystore.jks\" SSO_SAML_KEYSTORE_PASSWORD: \"password\" SSO_SAML_KEYSTORE_DIR: \"/etc/sso-saml-secret-volume\" SSO_SAML_LOGOUT_PAGE: \"/simple-webapp-example\" SSO_DISABLE_SSL_CERTIFICATE_VALIDATION: \"true\"",
"apply -f saml-secret.yaml",
"build: uri: [WEB ADDRESS TO YOUR GIT REPOSITORY] deploy: volumes: - name: saml-keystore-volume secret: secretName: saml-app-secret volumeMounts: - name: saml-keystore-volume mountPath: /etc/sso-saml-secret-volume readOnly: true envFrom: - secretRef: name: saml-secret",
"build: uri: [WEB ADDRESS TO YOUR GIT REPOSITORY] env: - name: \"MAVEN_MIRROR_URL\" value: \"http://...\"",
"helm install saml-app -f helm.yaml jboss-eap/eap8",
"stringData: HOSTNAME_HTTPS: <saml-app application route> SSO_URL: https://<host of the Keycloak server>",
"echo USD(oc get route saml-app --template='{{ .spec.host }}')",
"echo https://USD(oc get route sso --template='{{ .spec.host }}')",
"oc rollout restart deploy saml-app",
"echo https://USD(oc get route saml-app --template='{{ .spec.host }}')/simple-webapp-example/secured",
"username: demo password: demo",
"create role routeview --verb=list --resource=route -n YOUR_NAME_SPACE policy add-role-to-user routeview system:serviceaccount:YOUR_NAME_SPACE:default --role-namespace=YOUR_NAME_SPACE -n YOUR_NAME_SPACE"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/using_jboss_eap_on_openshift_container_platform/assembly_building-and-running-jboss-eap-applicationson-openshift-container-platform_default |
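If you need both the Maven mirror setting and the SAML secret configuration, the two helm.yaml fragments shown in the procedure above can be merged into one file. The following sketch only combines fragments already given in that procedure; the Git repository URL and the mirror URL are placeholders that you must replace.

```yaml
build:
  uri: https://github.com/your-username/simple-webapp-example.git   # placeholder repository
  env:
    - name: "MAVEN_MIRROR_URL"
      value: "http://your-maven-mirror.example.com/"                # placeholder mirror
deploy:
  volumes:
    - name: saml-keystore-volume
      secret:
        secretName: saml-app-secret
  volumeMounts:
    - name: saml-keystore-volume
      mountPath: /etc/sso-saml-secret-volume
      readOnly: true
  envFrom:
    - secretRef:
        name: saml-secret
```

Install it the same way as before: helm install saml-app -f helm.yaml jboss-eap/eap8.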
Chapter 3. Registering and connecting systems to Red Hat Insights to execute tasks | Chapter 3. Registering and connecting systems to Red Hat Insights to execute tasks To work with Red Hat Insights, you need to register systems with Insights and enable system communication with Insights. In addition to communicating with Insights, you need to enable and install dependencies on Satellite 6.11+, Remote Host Configuration (rhc), rhc-worker-playbook, and ansible so that you can use task services and other services in the Automation Toolkit. For more information about enabling system communication with Insights and addressing dependencies, see Enabling host communication with Insights in the Red Hat Insights Remediations Guide. Additional resources Red Hat Insights data and application security | null | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/assessing_and_remediating_system_issues_using_red_hat_insights_tasks/register-connect-for-tasks_overview-tasks
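The registration steps referenced above can be sketched as a short shell session. This is a hedged outline rather than an exact procedure: the rhc, rhc-worker-playbook, and ansible names come from the text, while the insights-client package, the dnf invocation, and the credential placeholders are assumptions that may vary by RHEL version and by whether the host is managed by Satellite.

```bash
# Register the host and install the task-service dependencies (sketch).
subscription-manager register --username <rh_user> --password <rh_password>
dnf install -y insights-client rhc rhc-worker-playbook ansible-core

insights-client --register    # register the system with Red Hat Insights
rhc connect                   # enable Remote Host Configuration communication
```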
Appendix A. Fuse Console Configuration Properties | Appendix A. Fuse Console Configuration Properties By default, the Fuse Console configuration is defined in the hawtconfig.json file. You can customize the Fuse Console configuration information, such as title, logo, and login page information. Table A.1, "Fuse Console Configuration Properties" provides a description of the properties and lists whether or not each property requires a value. Table A.1. Fuse Console Configuration Properties Section Property Name Default Value Description Required? About Title Red Hat Fuse Management Console The title that shows on the About page of the Fuse Console. Required productInfo Empty value Product information that shows on the About page of the Fuse Console. Optional additionalInfo Empty value Any additional information that shows on the About page of the Fuse Console. Optional | null | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/managing_fuse_on_springboot_standalone/r_fuse-console-configuration |
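Table A.1 lists the properties individually; a hedged sketch of how they might appear together in hawtconfig.json follows. The nesting under an about object and the name/value form of productInfo are assumptions about the file layout rather than details stated in this appendix, and all values are placeholders.

```json
{
  "about": {
    "title": "Red Hat Fuse Management Console",
    "productInfo": [
      { "name": "Fuse", "value": "7.13" }
    ],
    "additionalInfo": "Managed by the platform team"
  }
}
```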
5.30. conman | 5.30. conman 5.30.1. RHEA-2012:0401 - conman enhancement update An updated conman package that adds one enhancement is now available for Red Hat Enterprise Linux 6. ConMan is a serial console management program designed to support a large number of console devices and simultaneous users. ConMan currently supports local serial devices and remote terminal servers. Enhancement BZ# 738967 Users are now able to configure the maximum number of open files. This allows the conman daemon to easily manage a large number of nodes. All users of conman are advised to upgrade to this updated package, which adds this enhancement. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/conman |
9.2. Using Add/Remove Software | 9.2. Using Add/Remove Software To find and install a new package, on the GNOME panel click on System Administration Add/Remove Software , or run the gpk-application command at the shell prompt. Figure 9.4. PackageKit's Add/Remove Software window 9.2.1. Refreshing Software Sources (Yum Repositories) PackageKit refers to Yum repositories as software sources. It obtains all packages from enabled software sources. You can view the list of all configured and unfiltered (see below) Yum repositories by opening Add/Remove Software and clicking System Software sources . The Software Sources dialog shows the repository name, as written on the name= <My Repository Name> field of all [ repository ] sections in the /etc/yum.conf configuration file, and in all repository .repo files in the /etc/yum.repos.d/ directory. Entries which are checked in the Enabled column indicate that the corresponding repository will be used to locate packages to satisfy all update and installation requests (including dependency resolution). You can enable or disable any of the listed Yum repositories by selecting or clearing the check box. Note that doing so causes PolicyKit to prompt you for superuser authentication. The Enabled column corresponds to the enabled= <1 or 0> field in [ repository ] sections. When you click the check box, PackageKit inserts the enabled= <1 or 0> line into the correct [ repository ] section if it does not exist, or changes the value if it does. This means that enabling or disabling a repository through the Software Sources window causes that change to persist after closing the window or rebooting the system. Note that it is not possible to add or remove Yum repositories through PackageKit. Note Checking the box at the bottom of the Software Sources window causes PackageKit to display source RPM, testing and debuginfo repositories as well. This box is unchecked by default. After making a change to the available Yum repositories, click on System Refresh package lists to make sure your package list is up-to-date. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-using_add_remove_software |
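Because the Software Sources dialog simply reads and writes Yum repository files, it helps to see which fields it touches. The following .repo entry is a hypothetical example; the repository id, name, and URL are placeholders.

```ini
# /etc/yum.repos.d/example.repo (hypothetical repository)
[example-repo]
name=My Repository Name
baseurl=http://repo.example.com/rhel6/x86_64/
enabled=1       # toggled by the Enabled check box; PackageKit adds or edits this line
gpgcheck=1
```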
B.3. Using KVM Virtualization on ARM Systems | B.3. Using KVM Virtualization on ARM Systems Important KVM virtualization is provided in Red Hat Enterprise Linux 7.5 and later for the 64-bit ARM architecture. As such, KVM virtualization on ARM systems is not supported by Red Hat, not intended for use in a production environment, and may not address known security vulnerabilities. In addition, because KVM virtualization on ARM is still in rapid development, the information below is not guaranteed to be accurate or complete. Installation To use install virtualization on Red Hat Enterprise Linux 7.5 for ARM: Install the host system from the bootable image on the Customer Portal . After the system is installed, install the virtualization stack on the system by using the following command: Make sure you have the Optional channel enabled for the installation to succeed. For more information, see Adding the Optional and Supplementary Repositories . Architecture Specifics KVM virtualization on Red Hat Enterprise Linux 7.5 for the 64-bit ARM architecture differs from KVM on AMD64 and Intel 64 systems in the following: PXE booting is only supported with the virtio-net-device and virtio-net-pci network interface controllers (NICs). In addition, the built-in VirtioNetDxe driver of the ARM Architecture Virtual Machine Firmware (AAVMF) needs to be used for PXE booting. Note that iPXE option ROMs are not supported. Only up to 123 virtual CPUs (vCPUs) can be allocated to a single guest. | [
"yum install qemu-kvm-ma libvirt libvirt-client virt-install AAVMF"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/appe-KVM_on_ARM |
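The PXE restriction above is easiest to satisfy by giving the guest a virtio NIC at creation time. The following virt-install sketch is an assumption-level example: the guest name, sizing, storage, and os-variant are placeholders, and it presumes the AAVMF firmware shipped with the host is used for the guest.

```bash
virt-install \
  --name arm-guest \
  --memory 4096 \
  --vcpus 4 \
  --disk size=20 \
  --network network=default,model=virtio \
  --pxe \
  --os-variant rhel7.5
```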
Chapter 2. New features and enhancements | Chapter 2. New features and enhancements 2.1. Automatic podman login into external container registries Starting from this release, podman login is performed automatically during workspace startup for all container registries configured in the User Preferences . Note For Red Hat OpenShift internal container registry image-registry.openshift-image-registry.svc:5000 , podman login is performed automatically. No manual configuration is required. Additional resources CRW-6523 2.2. Dashboard should switch to the tab with already running workspace When you open a workspace from the User Dashboard, and a browser tab corresponding to the same workspace already exists, the switch to the browser tab happens automatically starting from this release. Previously a new browser tab was created whenever you tried opening a workspace from the User Dashboard. Additional resources CRW-6804 2.3. "Restart Workspace from Local Devfile" command should be more informative when devfile is not valid Starting from this release, if the 'Restart Workspace from Local Devfile' command is failing due to an invalid devfile, the error notification message is more informative and contains the exact reason for the failure. Additional resources CRW-6805 2.4. Allow defining annotations for all pods in the Cloud Development Environment With this release, you can define annotations for all Cloud Development Environment (CDE) pods using a dedicated CustomResource field: apiVersion: org.eclipse.che/v2 kind: CheCluster spec: devEnvironments: workspacesPodAnnotations: cluster-autoscaler.kubernetes.io/safe-to-evict: false Additional resources CRW-6811 2.5. Configuring custom editor definitions using a config map Previously, you could only configure custom editor definitions by modifying and rebuilding the Plugin Registry . Starting from this release, you can configure them by creating a dedicated ConfigMap . Additional resources CRW-6812 2.6. Enabling fuse-overlayfs for all workspaces Starting from this release, you can enable fuse-overlayfs for all CDEs. Learn more about this feature in the official documentation . Additional resources CRW-6813 2.7. Meaningful dashboard warnings for namespace provisioning failures when auto-provisioning is disabled and the Advanced Authorization is enabled With this release, the user experience during failures related to pre-configured Advanced Authorization is improved. When your access is denied, you will see a clear error message when accessing the User Dashboard. Learn more about Advanced Authorization in the official documentation . Additional resources CRW-6814 2.8. Always refresh OAuth tokens during workspace startup A new experimental feature that forces a refresh of the OAuth access token during workspace startup has been added in this release. Learn more about this feature in the official documentation . Additional resources CRW-6815 2.9. Devfile 2.3.0 support With this release, the new 2.3.0 schemaVersion of the devfile is supported for the CDE definition: schemaVersion: 2.3.0 metadata: generateName: quarkus-api-example attributes: controller.devfile.io/storage-type: ephemeral components: - name: tools container: image: quay.io/devfile/universal-developer-image:ubi8-latest env: - name: QUARKUS_HTTP_HOST value: 0.0.0.0 ... More details about version 2.3.0 are available in the official documentation . Additional resources CRW-6816 | [
"apiVersion: org.eclipse.che/v2 kind: CheCluster spec: devEnvironments: workspacesPodAnnotations: cluster-autoscaler.kubernetes.io/safe-to-evict: false",
"schemaVersion: 2.3.0 metadata: generateName: quarkus-api-example attributes: controller.devfile.io/storage-type: ephemeral components: - name: tools container: image: quay.io/devfile/universal-developer-image:ubi8-latest env: - name: QUARKUS_HTTP_HOST value: 0.0.0.0"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.15/html/release_notes_and_known_issues/new-features |
7.123. libtdb | 7.123. libtdb 7.123.1. RHBA-2013:0353 - libtdb bug fix and enhancement update Updated libtdb packages that fix multiple bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The libtdb packages provide a library that implements the Trivial Database (TDB). TDB is a simple hashed database that uses internal locking to allow multiple simultaneous writers and readers. Note The libtdb packages have been upgraded to upstream version 1.2.10, which provides a number of bug fixes and enhancements over the previous version. These updated libtdb packages are compliant with the requirements of Samba 4. (BZ#766334) All users of libtdb are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/libtdb
5.243. pidgin | 5.243. pidgin 5.243.1. RHSA-2012:1102 - Moderate: pidgin security update Updated pidgin packages that fix three security issues are now available for Red Hat Enterprise Linux 5 and 6. The Red Hat Security Response Team has rated this update as having moderate security impact. Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. Pidgin is an instant messaging program which can log in to multiple accounts on multiple instant messaging networks simultaneously. Security Fixes CVE-2012-1178 A flaw was found in the way the Pidgin MSN protocol plug-in processed text that was not encoded in UTF-8. A remote attacker could use this flaw to crash Pidgin by sending a specially-crafted MSN message. CVE-2012-2318 An input validation flaw was found in the way the Pidgin MSN protocol plug-in handled MSN notification messages. A malicious server or a remote attacker could use this flaw to crash Pidgin by sending a specially-crafted MSN notification message. CVE-2012-3374 A buffer overflow flaw was found in the Pidgin MXit protocol plug-in. A remote attacker could use this flaw to crash Pidgin by sending a MXit message containing specially-crafted emoticon tags. Red Hat would like to thank the Pidgin project for reporting the CVE-2012-3374 issue. Upstream acknowledges Ulf Harnhammar as the original reporter of CVE-2012-3374. All Pidgin users should upgrade to these updated packages, which contain backported patches to resolve these issues. Pidgin must be restarted for this update to take effect. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/pidgin |
Chapter 12. Invalid score detection | Chapter 12. Invalid score detection If you use the environmentMode class and specify the value as FULL_ASSERT or FAST_ASSERT , the environment mode detects score corruption in the incremental score calculation. However, doing this will not verify that your score calculator implements your score constraints the way that your business wants. For example, one constraint might consistently match the wrong pattern. To verify the constraints against an independent implementation, configure an assertionScoreDirectorFactory class: <environmentMode>FAST_ASSERT</environmentMode> ... <scoreDirectorFactory> <constraintProviderClass>org.optaplanner.examples.nqueens.optional.score.NQueensConstraintProvider</constraintProviderClass> <assertionScoreDirectorFactory> <easyScoreCalculatorClass>org.optaplanner.examples.nqueens.optional.score.NQueensEasyScoreCalculator</easyScoreCalculatorClass> </assertionScoreDirectorFactory> </scoreDirectorFactory> In this example, the NQueensConstraintProvider implementation is validated by the EasyScoreCalculator . Note This technique works well to isolate score corruption, but to verify that the constraint implements the real business needs, a unit test with a ConstraintVerifier is usually better. | [
"<environmentMode>FAST_ASSERT</environmentMode> <scoreDirectorFactory> <constraintProviderClass>org.optaplanner.examples.nqueens.optional.score.NQueensConstraintProvider</constraintProviderClass> <assertionScoreDirectorFactory> <easyScoreCalculatorClass>org.optaplanner.examples.nqueens.optional.score.NQueensEasyScoreCalculator</easyScoreCalculatorClass> </assertionScoreDirectorFactory> </scoreDirectorFactory>"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_optaplanner/8.38/html/developing_solvers_with_red_hat_build_of_optaplanner/invalid-score-detection-con_score-calculation |
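As the note above suggests, a ConstraintVerifier unit test checks individual constraints against hand-built scenarios. The sketch below assumes the N Queens domain classes and a horizontalConflict constraint method on NQueensConstraintProvider; those names are illustrative assumptions, not code taken from this guide.

```java
import org.junit.jupiter.api.Test;
import org.optaplanner.test.api.score.stream.ConstraintVerifier;

class NQueensConstraintProviderTest {

    private final ConstraintVerifier<NQueensConstraintProvider, NQueens> constraintVerifier =
            ConstraintVerifier.build(new NQueensConstraintProvider(), NQueens.class, Queen.class);

    @Test
    void twoQueensOnTheSameRowArePenalized() {
        // Hypothetical domain setup: two queens placed on the same row.
        Queen queen1 = new Queen(0L, new Column(0), new Row(0));
        Queen queen2 = new Queen(1L, new Column(1), new Row(0));

        // Verify one specific constraint method against this scenario.
        constraintVerifier.verifyThat(NQueensConstraintProvider::horizontalConflict)
                .given(queen1, queen2)
                .penalizesBy(1);
    }
}
```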
2.2. Ticket Spinlocks | 2.2. Ticket Spinlocks A key part of any system design is ensuring that one process does not alter memory used by another process. Uncontrolled data change in memory can result in data corruption and system crashes. To prevent this, the operating system allows a process to lock a piece of memory, perform an operation, then unlock or "free" the memory. One common implementation of memory locking is through spin locks , which allow a process to keep checking to see if a lock is available and take the lock as soon as it becomes available. If there are multiple processes competing for the same lock, the first one to request the lock after it has been freed gets it. When all processes have the same access to memory, this approach is "fair" and works quite well. Unfortunately, on a NUMA system, not all processes have equal access to the locks. Processes on the same NUMA node as the lock have an unfair advantage in obtaining the lock. Processes on remote NUMA nodes experience lock starvation and degraded performance. To address this, Red Hat Enterprise Linux implemented ticket spinlocks . This feature adds a reservation queue mechanism to the lock, allowing all processes to take a lock in the order that they requested it. This eliminates timing problems and unfair advantages in lock requests. While a ticket spinlock has slightly more overhead than an ordinary spinlock, it scales better and provides better performance on NUMA systems. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/performance_tuning_guide/s-ticket-spinlocks |
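The take-a-number behaviour described above is easier to see in code. The following user-space C sketch illustrates the idea only; it is not the Red Hat Enterprise Linux kernel implementation, which also handles architecture-specific details such as relaxing the CPU while spinning.

```c
#include <stdatomic.h>

/* Illustrative ticket lock: each waiter takes the next ticket and spins
 * until the "now serving" counter reaches it, so the lock is granted in
 * request order instead of letting nearby CPUs win repeatedly. */
struct ticket_lock {
    atomic_uint next_ticket;   /* next ticket number to hand out */
    atomic_uint now_serving;   /* ticket currently allowed to hold the lock */
};

static void ticket_lock_acquire(struct ticket_lock *lock)
{
    unsigned int my_ticket = atomic_fetch_add(&lock->next_ticket, 1);
    while (atomic_load(&lock->now_serving) != my_ticket)
        ;   /* spin; a real implementation would pause the CPU here */
}

static void ticket_lock_release(struct ticket_lock *lock)
{
    atomic_fetch_add(&lock->now_serving, 1);
}
```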
Chapter 26. Troubleshooting upgrade issues | Chapter 26. Troubleshooting upgrade issues If you experience any issues with during the upgrade process, refer to the advice in this section. 26.1. Correcting environment files If you have made a mistake with any parameters in any custom environment files, you can correct the environment file and run the openstack overcloud upgrade prepare command at any time during the upgrade. This command uploads a new version of your overcloud plan to director, which will generate a new set of config-download playbooks. This example contains a repository name mistake in the upgrades-environment.yaml file: This mistake causes an issue during the Leapp upgrade for the Controller node. To rectify this issue, correct the mistake and run the openstack overcloud upgrade prepare command. Procedure Correct the mistake in the file: Run the upgrade preparation command with the corrected file: Wait until the overcloud stack update completes. Continue with the upgrade operation step that failed. | [
"parameter_defaults: UpgradeLeappEnabled: true UpgradeLeappCommandOptions: \"--enablerepo rhel-7-for-x86_64-baseos-eus-rpms --enablerepo rhel-8-for-x86_64-appstream-eus-rpms --enablerepo fast-datapath-for-rhel-8-x86_64-rpms\" CephAnsibleRepo: rhceph-4-tools-for-rhel-8-x86_64-rpms",
"parameter_defaults: UpgradeLeappEnabled: true UpgradeLeappCommandOptions: \"--enablerepo rhel-8-for-x86_64-baseos-eus-rpms --enablerepo rhel-8-for-x86_64-appstream-eus-rpms --enablerepo fast-datapath-for-rhel-8-x86_64-rpms\" CephAnsibleRepo: rhceph-4-tools-for-rhel-8-x86_64-rpms",
"openstack overcloud upgrade prepare --stack STACK NAME --templates -e ENVIRONMENT FILE ... -e /home/stack/templates/upgrades-environment.yaml ..."
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/framework_for_upgrades_13_to_16.2/troubleshooting-upgrade-issues |
Performing security operations | Performing security operations Red Hat OpenStack Services on OpenShift 18.0 Operating security services in a Red Hat OpenStack Services on OpenShift environment OpenStack Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/performing_security_operations/index |
8.4. Red Hat Enterprise Linux-Specific Information | 8.4. Red Hat Enterprise Linux-Specific Information There is little about the general topic of disasters and disaster recovery that has a direct bearing on any specific operating system. After all, the computers in a flooded data center will be inoperative whether they run Red Hat Enterprise Linux or some other operating system. However, there are parts of Red Hat Enterprise Linux that relate to certain specific aspects of disaster recovery; these are discussed in this section. 8.4.1. Software Support As a software vendor, Red Hat, Inc does have a number of support offerings for its products, including Red Hat Enterprise Linux. You are using the most basic support tool right now by reading this manual. Documentation for Red Hat Enterprise Linux is available on the Red Hat Enterprise Linux Documentation CD (which can also be installed on your system for fast access), in printed form, and on the Red Hat website at http://www.redhat.com/docs/ . Self support options are available via the many mailing lists hosted by Red Hat (available at https://www.redhat.com/mailman/listinfo ). These mailing lists take advantage of the combined knowledge of Red Hat's user community; in addition, many lists are monitored by Red Hat personnel, who contribute as time permits. Other resources are available from Red Hat's main support page at http://www.redhat.com/apps/support/ . More comprehensive support options exist; information on them can be found on the Red Hat website. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s1-disaster-rhlspec |
Chapter 14. Red Hat build of Keycloak authorization client | Chapter 14. Red Hat build of Keycloak authorization client Depending on your requirements, a resource server should be able to manage resources remotely or even check for permissions programmatically. If you are using Java, you can access the Red Hat build of Keycloak Authorization Services using the Authorization Client API. It is targeted for resource servers that want to access the different endpoints provided by the server such as the Token Endpoint, Resource, and Permission management endpoints. 14.1. Maven dependency <dependencies> <dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-authz-client</artifactId> <version>999.0.0-SNAPSHOT</version> </dependency> </dependencies> 14.2. Configuration The client configuration is defined in a keycloak.json file as follows: { "realm": "hello-world-authz", "auth-server-url" : "http://localhost:8080", "resource" : "hello-world-authz-service", "credentials": { "secret": "secret" } } realm (required) The name of the realm. auth-server-url (required) The base URL of the Red Hat build of Keycloak server. All other Red Hat build of Keycloak pages and REST service endpoints are derived from this. It is usually in the form https://host:port . resource (required) The client-id of the application. Each application has a client-id that is used to identify the application. credentials (required) Specifies the credentials of the application. This is an object notation where the key is the credential type and the value is the value of the credential type. The details are in the dedicated section . The configuration file is usually located in your application's classpath, the default location from where the client is going to try to find a keycloak.json file. 14.3. Creating the authorization client Considering you have a keycloak.json file in your classpath, you can create a new AuthzClient instance as follows: // create a new instance based on the configuration defined in a keycloak.json located in your classpath AuthzClient authzClient = AuthzClient.create(); 14.4. 
Obtaining user entitlements Here is an example illustrating how to obtain user entitlements: // create a new instance based on the configuration defined in keycloak.json AuthzClient authzClient = AuthzClient.create(); // create an authorization request AuthorizationRequest request = new AuthorizationRequest(); // send the entitlement request to the server in order to // obtain an RPT with all permissions granted to the user AuthorizationResponse response = authzClient.authorization("alice", "alice").authorize(request); String rpt = response.getToken(); System.out.println("You got an RPT: " + rpt); // now you can use the RPT to access protected resources on the resource server Here is an example illustrating how to obtain user entitlements for a set of one or more resources: // create a new instance based on the configuration defined in keycloak.json AuthzClient authzClient = AuthzClient.create(); // create an authorization request AuthorizationRequest request = new AuthorizationRequest(); // add permissions to the request based on the resources and scopes you want to check access request.addPermission("Default Resource"); // send the entitlement request to the server in order to // obtain an RPT with permissions for a single resource AuthorizationResponse response = authzClient.authorization("alice", "alice").authorize(request); String rpt = response.getToken(); System.out.println("You got an RPT: " + rpt); // now you can use the RPT to access protected resources on the resource server 14.5. Creating a resource using the protection API // create a new instance based on the configuration defined in keycloak.json AuthzClient authzClient = AuthzClient.create(); // create a new resource representation with the information we want ResourceRepresentation newResource = new ResourceRepresentation(); newResource.setName("New Resource"); newResource.setType("urn:hello-world-authz:resources:example"); newResource.addScope(new ScopeRepresentation("urn:hello-world-authz:scopes:view")); ProtectedResource resourceClient = authzClient.protection().resource(); ResourceRepresentation existingResource = resourceClient.findByName(newResource.getName()); if (existingResource != null) { resourceClient.delete(existingResource.getId()); } // create the resource on the server ResourceRepresentation response = resourceClient.create(newResource); String resourceId = response.getId(); // query the resource using its newly generated id ResourceRepresentation resource = resourceClient.findById(resourceId); System.out.println(resource); 14.6. Introspecting an RPT // create a new instance based on the configuration defined in keycloak.json AuthzClient authzClient = AuthzClient.create(); // send the authorization request to the server in order to // obtain an RPT with all permissions granted to the user AuthorizationResponse response = authzClient.authorization("alice", "alice").authorize(); String rpt = response.getToken(); // introspect the token TokenIntrospectionResponse requestingPartyToken = authzClient.protection().introspectRequestingPartyToken(rpt); System.out.println("Token status is: " + requestingPartyToken.getActive()); System.out.println("Permissions granted by the server: "); for (Permission granted : requestingPartyToken.getPermissions()) { System.out.println(granted); } 14.7. Client authentication When an authorization client needs to send a backchannel request, it needs to authenticate against the Red Hat build of Keycloak server. 
By default, there are three ways to authenticate the client: client ID and client secret, client authentication with signed JWT, or client authentication with signed JWT using client secret. 14.7.1. Client ID and Client Secret This is the traditional method described in the OAuth2 specification. The client has a secret, which needs to be known to both the client and the Red Hat build of Keycloak server. You can generate the secret for a particular client in the Red Hat build of Keycloak Admin Console, and then paste this secret into the keycloak.json file on the application side: "credentials": { "secret": "19666a4f-32dd-4049-b082-684c74115f28" } 14.7.2. Client authentication with Signed JWT This is based on the RFC7523 specification. It works this way: The client must have the private key and certificate. For authorization client, this is available through the traditional keystore file, which is either available on the client application's classpath or somewhere on the file system. During authentication, the client generates a JWT token and signs it with its private key and sends it to Red Hat build of Keycloak in the particular request in the client_assertion parameter. Red Hat build of Keycloak must have the public key or certificate of the client so that it can verify the signature on JWT. In Red Hat build of Keycloak, you configure client credentials for your client. First, you choose Signed JWT as the method of authenticating your client in the tab Credentials in the Admin Console. Then you can choose one of these methods in the Keys tab: Configure the JWKS URL where Red Hat build of Keycloak can download the client's public keys. This option is the most flexible, since the client can rotate its keys anytime and Red Hat build of Keycloak always downloads new keys as needed without changing the configuration. In other words, Red Hat build of Keycloak downloads new keys when it sees the token signed by an unknown kid (Key ID). However, you will need to care of exposing the public key somewhere in JWKS format to be available to the server. Upload the client's public key or certificate, either in PEM format, in JWK format, or from the keystore. With this option, the public key is hardcoded and must be changed when the client generates a new key pair. You can even generate your own keystore from the Red Hat build of Keycloak Admin Console if you do not have your own keystore available. This option is the easiest when using authorization client. To set up for this method, you need to code something such as the following in your keycloak.json file: "credentials": { "jwt": { "client-keystore-file": "classpath:keystore-client.jks", "client-keystore-type": "JKS", "client-keystore-password": "storepass", "client-key-password": "keypass", "client-key-alias": "clientkey", "token-expiration": 10 } } With this configuration, the keystore file keystore-client.jks must be available on classpath of the application, which uses authorization client. If you do not use the prefix classpath: you can point to any file on the file system where the client application is running. 14.7.3. Client authentication with Signed JWT using client secret This is the same as Client Authentication with Signed JWT except for using the client secret instead of the private key and certificate. The client has a secret, which needs to be known to both the application using authorization client and the Red Hat build of Keycloak server. 
You choose Signed JWT with Client Secret as the method of authenticating your client in the Credentials tab in the Admin Console, and then paste this secret into the keycloak.json file on the application side: "credentials": { "secret-jwt": { "secret": "19666a4f-32dd-4049-b082-684c74115f28", "algorithm": "HS512" } } The "algorithm" field specifies the algorithm for the Signed JWT using Client Secret. It needs to be one of the following values : HS256, HS384, and HS512. For details, see JSON Web Algorithms (JWA) . This "algorithm" field is optional; HS256 is applied automatically if the "algorithm" field does not exist on the keycloak.json file. 14.7.4. Add your own client authentication method You can add your own client authentication method as well. You will need to implement both client-side and server-side providers. For more details see the Authentication SPI section in Server Developer Guide . | [
"<dependencies> <dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-authz-client</artifactId> <version>999.0.0-SNAPSHOT</version> </dependency> </dependencies>",
"{ \"realm\": \"hello-world-authz\", \"auth-server-url\" : \"http://localhost:8080\", \"resource\" : \"hello-world-authz-service\", \"credentials\": { \"secret\": \"secret\" } }",
"// create a new instance based on the configuration defined in a keycloak.json located in your classpath AuthzClient authzClient = AuthzClient.create();",
"// create a new instance based on the configuration defined in keycloak.json AuthzClient authzClient = AuthzClient.create(); // create an authorization request AuthorizationRequest request = new AuthorizationRequest(); // send the entitlement request to the server in order to // obtain an RPT with all permissions granted to the user AuthorizationResponse response = authzClient.authorization(\"alice\", \"alice\").authorize(request); String rpt = response.getToken(); System.out.println(\"You got an RPT: \" + rpt); // now you can use the RPT to access protected resources on the resource server",
"// create a new instance based on the configuration defined in keycloak.json AuthzClient authzClient = AuthzClient.create(); // create an authorization request AuthorizationRequest request = new AuthorizationRequest(); // add permissions to the request based on the resources and scopes you want to check access request.addPermission(\"Default Resource\"); // send the entitlement request to the server in order to // obtain an RPT with permissions for a single resource AuthorizationResponse response = authzClient.authorization(\"alice\", \"alice\").authorize(request); String rpt = response.getToken(); System.out.println(\"You got an RPT: \" + rpt); // now you can use the RPT to access protected resources on the resource server",
"// create a new instance based on the configuration defined in keycloak.json AuthzClient authzClient = AuthzClient.create(); // create a new resource representation with the information we want ResourceRepresentation newResource = new ResourceRepresentation(); newResource.setName(\"New Resource\"); newResource.setType(\"urn:hello-world-authz:resources:example\"); newResource.addScope(new ScopeRepresentation(\"urn:hello-world-authz:scopes:view\")); ProtectedResource resourceClient = authzClient.protection().resource(); ResourceRepresentation existingResource = resourceClient.findByName(newResource.getName()); if (existingResource != null) { resourceClient.delete(existingResource.getId()); } // create the resource on the server ResourceRepresentation response = resourceClient.create(newResource); String resourceId = response.getId(); // query the resource using its newly generated id ResourceRepresentation resource = resourceClient.findById(resourceId); System.out.println(resource);",
"// create a new instance based on the configuration defined in keycloak.json AuthzClient authzClient = AuthzClient.create(); // send the authorization request to the server in order to // obtain an RPT with all permissions granted to the user AuthorizationResponse response = authzClient.authorization(\"alice\", \"alice\").authorize(); String rpt = response.getToken(); // introspect the token TokenIntrospectionResponse requestingPartyToken = authzClient.protection().introspectRequestingPartyToken(rpt); System.out.println(\"Token status is: \" + requestingPartyToken.getActive()); System.out.println(\"Permissions granted by the server: \"); for (Permission granted : requestingPartyToken.getPermissions()) { System.out.println(granted); }",
"\"credentials\": { \"secret\": \"19666a4f-32dd-4049-b082-684c74115f28\" }",
"\"credentials\": { \"jwt\": { \"client-keystore-file\": \"classpath:keystore-client.jks\", \"client-keystore-type\": \"JKS\", \"client-keystore-password\": \"storepass\", \"client-key-password\": \"keypass\", \"client-key-alias\": \"clientkey\", \"token-expiration\": 10 } }",
"\"credentials\": { \"secret-jwt\": { \"secret\": \"19666a4f-32dd-4049-b082-684c74115f28\", \"algorithm\": \"HS512\" } }"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html/securing_applications_and_services_guide/authz-client- |
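The examples in this chapter authenticate with a username and password. When your application already holds the user's bearer access token, the same authorization API accepts it directly, and a denied request surfaces as an exception. The following sketch assumes an accessToken variable obtained elsewhere and reuses the resource and scope names shown earlier in this chapter.

```java
// Obtain an RPT with an existing access token and handle denial (sketch).
AuthzClient authzClient = AuthzClient.create();

AuthorizationRequest request = new AuthorizationRequest();
request.addPermission("Default Resource", "urn:hello-world-authz:scopes:view");

try {
    AuthorizationResponse response = authzClient.authorization(accessToken).authorize(request);
    String rpt = response.getToken();
    // use the RPT to call the resource server
} catch (AuthorizationDeniedException e) {
    // the server evaluated its policies and denied the requested permissions
}
```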
Chapter 60. Networking | Chapter 60. Networking Verification of signatures using the MD5 hash algorithm is disabled in Red Hat Enterprise Linux 7 It is impossible to connect to any Wi-Fi Protected Access (WPA) Enterprise Access Point (AP) that requires MD5 signed certificates. To work around this problem, copy the wpa_supplicant.service file from the /usr/lib/systemd/system/ directory to the /etc/systemd/system/ directory and add the following line to the Service section of the file: Then run the systemctl daemon-reload command as root to reload the service file. Important: Note that MD5 certificates are highly insecure and Red Hat does not recommend using them. (BZ#1062656) freeradius might fail when upgrading from RHEL 7.3 A new configuration property, correct_escapes , in the /etc/raddb/radiusd.conf file was introduced in the freeradius version distributed since RHEL 7.4. When an administrator sets correct_escapes to true , the new regular expression syntax for backslash escaping is expected. If correct_escapes is set to false , the old syntax is expected where backslashes are also escaped. For backward compatibility reasons, false is the default value. When upgrading, configuration files in the /etc/raddb/ directory are overwritten unless modified by the administrator, so the value of correct_escapes might not always correspond to which type of syntax is used in all the configuration files. As a consequence, authentication with freeradius might fail. To prevent the problem from occurring, after upgrading from freeradius version 3.0.4 (distributed with RHEL 7.3) and earlier, make sure all configuration files in the /etc/raddb/ directory use the new escaping syntax (no double backslash characters can be found) and that the value of correct_escapes in /etc/raddb/radiusd.conf is set to true . For more information and examples, see the solution at https://access.redhat.com/solutions/3241961 . (BZ#1489758) | [
"Environment=OPENSSL_ENABLE_MD5_VERIFY=1"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.5_release_notes/known_issues_networking |
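The workaround above amounts to a few shell commands. The following is a minimal sketch of both fixes; the sed expression used to place the Environment line in the [Service] section and the recursive grep used to spot old-style escaping are assumptions, not part of the original errata text.
# Copy the unit file so the local change survives package updates
cp /usr/lib/systemd/system/wpa_supplicant.service /etc/systemd/system/wpa_supplicant.service
# Add the override line to the [Service] section of the copied unit file
sed -i '/^\[Service\]/a Environment=OPENSSL_ENABLE_MD5_VERIFY=1' /etc/systemd/system/wpa_supplicant.service
# Reload systemd unit files as root
systemctl daemon-reload
# For the freeradius issue: look for old-style double-backslash escaping before
# setting correct_escapes = true in /etc/raddb/radiusd.conf
grep -rn '\\\\' /etc/raddb/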
Chapter 4. Installation configuration parameters for IBM Power | Chapter 4. Installation configuration parameters for IBM Power Before you deploy an OpenShift Container Platform cluster, you provide a customized install-config.yaml installation configuration file that describes the details for your environment. 4.1. Available installation configuration parameters for IBM Power The following tables specify the required, optional, and IBM Power-specific installation configuration parameters that you can set as part of the installation process. Note After installation, you cannot modify these parameters in the install-config.yaml file. 4.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 4.1. Required parameters Parameter Description Values The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object Get a pull secret from Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 4.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. If you use the Red Hat OpenShift Networking OVN-Kubernetes network plugin, both IPv4 and IPv6 address families are supported. If you use the Red Hat OpenShift Networking OpenShift SDN network plugin, only the IPv4 address family is supported. If you configure your cluster to use both IP address families, review the following requirements: Both IP families must use the same network interface for the default gateway. Both IP families must have the default gateway. You must specify IPv4 and IPv6 addresses in the same order for all network configuration parameters. For example, in the following configuration IPv4 addresses are listed before IPv6 addresses. networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112 Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. 
For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 4.2. Network parameters Parameter Description Values The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. The Red Hat OpenShift Networking network plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. If you specify multiple IP kernel arguments, the machineNetwork.cidr value must be the CIDR of the primary network. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power(R) Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power(R) Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 4.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 4.3. Optional parameters Parameter Description Values A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. 
String array Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are ppc64le (the default). String Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use compute . The name of the machine pool. worker Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . The configuration for the machines that comprise the control plane. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are ppc64le (the default). String Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use controlPlane . The name of the machine pool. master Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of control plane machines to provision. Supported values are 3 , or 1 when deploying single-node OpenShift. The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Mint , Passthrough , Manual or an empty string ( "" ).
[1] Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String Specify one or more repositories that may also contain the same images. Array of strings How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. | [
"apiVersion:",
"baseDomain:",
"metadata:",
"metadata: name:",
"platform:",
"pullSecret:",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112",
"networking:",
"networking: networkType:",
"networking: clusterNetwork:",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: clusterNetwork: cidr:",
"networking: clusterNetwork: hostPrefix:",
"networking: serviceNetwork:",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork:",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"networking: machineNetwork: cidr:",
"additionalTrustBundle:",
"capabilities:",
"capabilities: baselineCapabilitySet:",
"capabilities: additionalEnabledCapabilities:",
"cpuPartitioningMode:",
"compute:",
"compute: architecture:",
"compute: hyperthreading:",
"compute: name:",
"compute: platform:",
"compute: replicas:",
"featureSet:",
"controlPlane:",
"controlPlane: architecture:",
"controlPlane: hyperthreading:",
"controlPlane: name:",
"controlPlane: platform:",
"controlPlane: replicas:",
"credentialsMode:",
"fips:",
"imageContentSources:",
"imageContentSources: source:",
"imageContentSources: mirrors:",
"publish:",
"sshKey:"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_ibm_power/installation-config-parameters-ibm-power |
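The following is a minimal, illustrative install-config.yaml sketch for IBM Power that ties the parameters above together. All values are placeholders based on the defaults described in this chapter; the platform: none: {} stanza assumes a user-provisioned infrastructure installation, and the pull secret and SSH key must be replaced with real values.
# Illustrative only -- replace every placeholder before running the installer
cat > install-config.yaml <<'EOF'
apiVersion: v1
baseDomain: example.com
metadata:
  name: dev
compute:
- architecture: ppc64le
  hyperthreading: Enabled
  name: worker
  replicas: 3
controlPlane:
  architecture: ppc64le
  hyperthreading: Enabled
  name: master
  replicas: 3
networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  machineNetwork:
  - cidr: 10.0.0.0/16
platform:
  none: {}
fips: false
pullSecret: '<pull_secret>'
sshKey: 'ssh-ed25519 AAAA...'
EOF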
3.5. Growing a GFS2 File System | 3.5. Growing a GFS2 File System The gfs2_grow command is used to expand a GFS2 file system after the device where the file system resides has been expanded. Running the gfs2_grow command on an existing GFS2 file system fills all spare space between the current end of the file system and the end of the device with a newly initialized GFS2 file system extension. When the fill operation is completed, the resource index for the file system is updated. All nodes in the cluster can then use the extra storage space that has been added. The gfs2_grow command must be run on a mounted file system, but only needs to be run on one node in a cluster. All the other nodes sense that the expansion has occurred and automatically start using the new space. Note Once you have created a GFS2 file system with the mkfs.gfs2 command, you cannot decrease the size of the file system. Usage MountPoint Specifies the GFS2 file system to which the actions apply. Comments Before running the gfs2_grow command: Back up important data on the file system. Determine the volume that is used by the file system to be expanded by running the df MountPoint command. Expand the underlying cluster volume with LVM. For information on administering LVM volumes, see Logical Volume Manager Administration . After running the gfs2_grow command, run the df command to check that the new space is now available in the file system. Examples In this example, the file system on the /mygfs2fs directory is expanded. Complete Usage MountPoint Specifies the directory where the GFS2 file system is mounted. Device Specifies the device node of the file system. Table 3.3, "GFS2-specific Options Available While Expanding A File System" describes the GFS2-specific options that can be used while expanding a GFS2 file system. Table 3.3. GFS2-specific Options Available While Expanding A File System Option Description -h Help. Displays a short usage message. -q Quiet. Turns down the verbosity level. -r Megabytes Specifies the size of the new resource group. The default size is 256 megabytes. -T Test. Do all calculations, but do not write any data to the disk and do not expand the file system. -V Displays command version information. | [
"gfs2_grow MountPoint",
"gfs2_grow /mygfs2fs FS: Mount Point: /mygfs2fs FS: Device: /dev/mapper/gfs2testvg-gfs2testlv FS: Size: 524288 (0x80000) FS: RG size: 65533 (0xfffd) DEV: Size: 655360 (0xa0000) The file system grew by 512MB. gfs2_grow complete.",
"gfs2_grow [ Options ] { MountPoint | Device } [ MountPoint | Device ]"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/global_file_system_2/s1-manage-growfs |
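The complete workflow described above can be sketched as follows; the logical volume path and the size increment are taken from the example output and are assumptions for your environment.
# Identify the device backing the mounted GFS2 file system
df /mygfs2fs
# Grow the underlying clustered logical volume (device name and size are examples)
lvextend -L +512M /dev/mapper/gfs2testvg-gfs2testlv
# Fill the new space with file system structures; run on one mounted node only
gfs2_grow /mygfs2fs
# Confirm that the new space is available
df /mygfs2fs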
Chapter 4. Summary | Chapter 4. Summary With Red Hat OpenShift Platform 4, Red Hat OpenStack Platform 16.2, and Red Hat Ceph Storage 4, organizations have access to a comprehensive and prescriptive installation experience for their on-premises container infrastructure. This Red Hat Tested Solution showcases a prescriptive and pre-validated private cloud solution from Red Hat that provides rapid provisioning and lifecycle management of containerized infrastructure, virtual machines (VMs), and associated application and infrastructure services. The Red Hat Quality Engineering teams (QE) have tested and validated the implementation as presented in this solution. Organizations seeking to operationalize this solution quickly can be assured that all options represented are both fully tested as well as fully supported by Red Hat. Red Hat OpenShift Container Platform, Red Hat OpenStack Platform, and Red Hat Ceph Storage are the key architectural components of this solution. This integration is a key component to hybrid and multi-cloud solutions with OpenShift Container Platform serving as the common container and platform across a variety of deployment footprints. | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/reference_architecture_for_deploying_red_hat_openshift_container_platform_on_red_hat_openstack_platform/summary |
10.2. Interacting with NetworkManager Users do not interact with the NetworkManager system service directly. Instead, you can perform network configuration tasks via NetworkManager 's Notification Area applet. The applet has multiple states that serve as visual indicators for the type of connection you are currently using. Hover the pointer over the applet icon for tooltip information on the current connection state. Figure 10.1. NetworkManager applet states If you do not see the NetworkManager applet in the GNOME panel, and assuming that the NetworkManager package is installed on your system, you can start the applet by running the following command as a normal user (not root ): After running this command, the applet appears in your Notification Area. You can ensure that the applet runs each time you log in by clicking System Preferences Startup Applications to open the Startup Applications Preferences window. Then, select the Startup Programs tab and check the box next to NetworkManager . 10.2.1. Connecting to a Network When you left-click on the applet icon, you are presented with: a list of categorized networks you are currently connected to (such as Wired and Wireless ); a list of all Available Networks that NetworkManager has detected; options for connecting to any configured Virtual Private Networks (VPNs); and, options for connecting to hidden or new wireless networks. If you are connected to a network, its name is presented in bold typeface under its network type, such as Wired or Wireless . When many networks are available, such as wireless access points, the More networks expandable menu entry appears. Figure 10.2. The NetworkManager applet's left-click menu, showing all available and connected-to networks | [
"~]USD nm-applet &"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-Interacting_with_NetworkManager |
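As an alternative to the Startup Applications dialog described above, a per-user XDG autostart entry achieves the same result from the shell; this file and its contents are an assumption, not part of the original procedure.
mkdir -p ~/.config/autostart
cat > ~/.config/autostart/nm-applet.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=NetworkManager Applet
Exec=nm-applet
EOF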
Chapter 29. KafkaJmxOptions schema reference Used in: KafkaClusterSpec , KafkaConnectSpec , KafkaMirrorMaker2Spec , ZookeeperClusterSpec Full list of KafkaJmxOptions schema properties Configures JMX connection options. Get JMX metrics from Kafka brokers, ZooKeeper nodes, Kafka Connect, and MirrorMaker 2 by connecting to port 9999. Use the jmxOptions property to configure a password-protected or an unprotected JMX port. Using password protection prevents unauthorized pods from accessing the port. You can then obtain metrics about the component. For example, for each Kafka broker you can obtain bytes-per-second usage data from clients, or the network request rate of the broker. To enable security for the JMX port, set the type parameter in the authentication field to password . Example password-protected JMX configuration for Kafka brokers and ZooKeeper nodes apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... jmxOptions: authentication: type: "password" # ... zookeeper: # ... jmxOptions: authentication: type: "password" #... You can then deploy a pod into a cluster and obtain JMX metrics using the headless service by specifying which broker you want to address. For example, to get JMX metrics from broker 0 you specify: " CLUSTER-NAME -kafka-0. CLUSTER-NAME -kafka-brokers" CLUSTER-NAME -kafka-0 is the name of the broker pod, and CLUSTER-NAME -kafka-brokers is the name of the headless service that returns the IPs of the broker pods. If the JMX port is secured, you can get the username and password by referencing them from the JMX Secret in the deployment of your pod. For an unprotected JMX port, use an empty object {} to open the JMX port on the headless service. You deploy a pod and obtain metrics in the same way as for the protected port, but in this case any pod can read from the JMX port. Example open port JMX configuration for Kafka brokers and ZooKeeper nodes apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... jmxOptions: {} # ... zookeeper: # ... jmxOptions: {} # ... Additional resources For more information on the Kafka component metrics exposed using JMX, see the Apache Kafka documentation . 29.1. KafkaJmxOptions schema properties Property Property type Description authentication KafkaJmxAuthenticationPassword Authentication configuration for connecting to the JMX port. | [
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # jmxOptions: authentication: type: \"password\" # zookeeper: # jmxOptions: authentication: type: \"password\" #",
"\" CLUSTER-NAME -kafka-0. CLUSTER-NAME -kafka-brokers\"",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # jmxOptions: {} # zookeeper: # jmxOptions: {} #"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-kafkajmxoptions-reference |
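A short sketch of retrieving the JMX credentials mentioned above; the Secret name (<cluster>-kafka-jmx) and the jmx-username / jmx-password keys are assumptions based on common naming, so verify them in your own cluster before relying on them.
# Assumed Secret name and keys -- confirm with 'oc get secrets' in the Kafka namespace
oc get secret my-cluster-kafka-jmx -o jsonpath='{.data.jmx-username}' | base64 -d
oc get secret my-cluster-kafka-jmx -o jsonpath='{.data.jmx-password}' | base64 -d
# From a pod inside the cluster, broker 0 is then reachable through the headless
# service on the JMX port, for example: my-cluster-kafka-0.my-cluster-kafka-brokers:9999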
10.9. Choose an Installation Boot Method | 10.9. Choose an Installation Boot Method You can use several methods to boot the Red Hat Enterprise Linux 7 installation program. The method you choose depends upon your installation media. Note Installation media must remain mounted throughout installation, including during execution of the %post section of a kickstart file. Full installation DVD or USB drive You can create bootable media from the full installation DVD ISO image. In this case, a single DVD or USB drive can be used to complete the entire installation - it will serve both as a boot device and as an installation source for installing software packages. See Chapter 3, Making Media for instructions on how to make a full installation DVD or USB drive. Minimal boot CD, DVD or USB Flash Drive A minimal boot CD, DVD or USB flash drive is created using a small ISO image, which only contains data necessary to boot the system and start the installation. If you use this boot media, you will need an additional installation source from which packages will be installed. See Chapter 3, Making Media for instructions on making boot CDs, DVDs and USB flash drives. PXE Server A preboot execution environment (PXE) server allows the installation program to boot over the network. After you boot the system, you complete the installation from a different installation source, such as a local hard drive or a location on a network. For more information on PXE servers, see Chapter 24, Preparing for a Network Installation . | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/sect-installation-planning-boot-method-ppc |
Chapter 2. Migrating Service Registry data | Chapter 2. Migrating Service Registry data Migrating data to Service Registry 2.x requires exporting all data from your existing 1.1 deployment and importing it into the new 2.x deployment. If you are using Service Registry as a schema registry for Kafka applications, data migration is critical because each Kafka message carries the global identifier for the schema stored in Service Registry. This identifier must be preserved during registry data migration. Service Registry 2.x provides an API to bulk import/export all data from your registry deployment, which guarantees that all identifiers are kept when importing data from your existing registry. The export API downloads a custom .zip file containing all the information for your artifacts. The import API accepts this .zip and loads all artifacts into the registry in a single batch. Service Registry 1.1 does not provide an import/export API. However, version 2.x provides an export tool compatible with Service Registry 1.1 to export a .zip , which you can import into your 2.x registry. This tool uses common existing APIs to export all content in the registry. However, it is less performant than the 2.x export API, and should only be used when exporting from a 1.1 registry. Prerequisites Running Service Registry instances of the 1.1 server you are exporting from and the 2.x server you are importing into. Download the Service Registry exportV1 tool from the Red Hat Customer Portal. This is a Java application that you can run on the command line. Procedure Export all the data from Service Registry 1.1 using the exportV1 tool. This generates a registry-export.zip file in your current directory: java -jar apicurio-registry-utils-exportV1-2.5.10.Final-redhat-00001.jar http://old-registry.my-company.com/api Import the .zip file into Service Registry 2.x using the import API: curl -X POST "http://new-registry.my-company.com/apis/registry/v2/admin/import" \ -H "Accept: application/json" -H "Content-Type: application/zip" \ --data-binary @registry-export.zip Check that all the artifacts have been imported into the new 2.x registry by running these commands and comparing the count field: curl "http://old-registry.my-company.com/api/search/artifacts" curl "http://new-registry.my-company.com/apis/registry/v2/search/artifacts" Additional resources For more details on the import/export REST API, see the Service Registry User Guide . For more details on the export tool for migrating from version 1.x to 2.x, see the Apicurio Registry export utility for 1.x versions . | [
"java -jar apicurio-registry-utils-exportV1-2.5.10.Final-redhat-00001.jar http://old-registry.my-company.com/api",
"curl -X POST \"http://new-registry.my-company.com/apis/registry/v2/admin/import\" -H \"Accept: application/json\" -H \"Content-Type: application/zip\" --data-binary @registry-export.zip",
"curl \"http://old-registry.my-company.com/api/search/artifacts\"",
"curl \"http://new-registry.my-company.com/apis/registry/v2/search/artifacts\""
] | https://docs.redhat.com/en/documentation/red_hat_integration/2023.q4/html/migrating_service_registry_deployments/migrating-registry-data_service-registry |
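To make the count comparison explicit, the two search calls above can be piped through jq; the jq filter is illustrative and assumes the response exposes a top-level count field, as described in the procedure.
curl -s "http://old-registry.my-company.com/api/search/artifacts" | jq .count
curl -s "http://new-registry.my-company.com/apis/registry/v2/search/artifacts" | jq .count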
Chapter 1. Using the Red Hat Quay API Red Hat Quay provides a full OAuth 2 , RESTful API that offers the following benefits: It is available from the /api/v1 endpoint of your Red Hat Quay host. For example, https://<quay-server.example.com>/api/v1 . It allows users to connect to endpoints through their browser to GET , POST , DELETE , and PUT Red Hat Quay settings by enabling the Swagger UI. It can be accessed by applications that make API calls and use OAuth tokens. It sends and receives data as JSON. The following section describes how to access the Red Hat Quay API so that it can be used with your deployment. 1.1. Accessing the Quay API from Quay.io If you don't have your own Red Hat Quay cluster running yet, you can explore the Red Hat Quay API available from Quay.io from your web browser: The API Explorer that appears shows Quay.io API endpoints. You will not see superuser API endpoints or endpoints for Red Hat Quay features that are not enabled on Quay.io (such as Repository Mirroring). From API Explorer, you can get, and sometimes change, information on: Billing, subscriptions, and plans Repository builds and build triggers Error messages and global messages Repository images, manifests, permissions, notifications, vulnerabilities, and image signing Usage logs Organizations, members and OAuth applications User and robot accounts and more... Select an endpoint to open it and view the Model Schema for each part of the endpoint. Open an endpoint, enter any required parameters (such as a repository name or image), then select the Try it out! button to query or change settings associated with a Quay.io endpoint. 1.2. Creating a v1 OAuth access token OAuth access tokens are credentials that allow you to access protected resources in a secure manner. With Red Hat Quay, you must create an OAuth access token before you can access the API endpoints of your organization. Use the following procedure to create an OAuth access token. Prerequisites You have logged in to Red Hat Quay as an administrator. Procedure On the main page, select an Organization. In the navigation pane, select Applications . Click Create New Application and provide a new application name, then press Enter . On the OAuth Applications page, select the name of your application. Optional. Enter the following information: Application Name Homepage URL Description Avatar E-mail Redirect/Callback URL prefix In the navigation pane, select Generate Token . Check the boxes for the following options: Administer Organization Administer Repositories Create Repositories View all visible repositories Read/Write to any accessible repositories Super User Access Administer User Read User Information Click Generate Access Token . You are redirected to a new page. Review the permissions that you are allowing, then click Authorize Application . Confirm your decision by clicking Authorize Application . You are redirected to the Access Token page. Copy and save the access token. Important This is the only opportunity to copy and save the access token. It cannot be reobtained after leaving this page. 1.3. Creating an OCI referrers OAuth access token In some cases, you might want to create an OCI referrers OAuth access token. This token is used to list OCI referrers of a manifest under a repository. Procedure Update your config.yaml file to include the FEATURE_REFERRERS_API: true field. For example: # ... FEATURE_REFERRERS_API: true # ...
Enter the following command to Base64 encode your credentials: USD echo -n '<username>:<password>' | base64 Example output abcdeWFkbWluOjE5ODlraWROZXQxIQ== Enter the following command to use the base64 encoded string and modify the URL endpoint to your Red Hat Quay server: USD curl --location '<quay-server.example.com>/v2/auth?service=<quay-server.example.com>&scope=repository:quay/listocireferrs:pull,push' --header 'Authorization: Basic <base64_username:password_encode_token>' -k | jq Example output { "token": "<example_secret>" } 1.4. Reassigning an OAuth access token Organization administrators can assign OAuth API tokens to be created by other users with specific permissions. This allows the audit logs to be reflected accurately when the token is used by a user that has no organization administrative permissions to create an OAuth API token. Note The following procedure only works on the current Red Hat Quay UI. It is not currently implemented in the Red Hat Quay v2 UI. Prerequisites You are logged in as a user with organization administrative privileges, which allows you to assign an OAuth API token. Note OAuth API tokens are used for authentication and not authorization. For example, the user that you are assigning the OAuth token to must have the Admin team role to use administrative API endpoints. For more information, see Managing access to repositories . Procedure Optional. If not already present, update your Red Hat Quay config.yaml file to include the FEATURE_ASSIGN_OAUTH_TOKEN: true field: # ... FEATURE_ASSIGN_OAUTH_TOKEN: true # ... Optional. Restart your Red Hat Quay registry. Log in to your Red Hat Quay registry as an organization administrator. Click the name of the organization that you created the OAuth token for. In the navigation pane, click Applications . Click the proper application name. In the navigation pane, click Generate Token . Click Assign another user and enter the name of the user that will take over the OAuth token. Check the boxes for the desired permissions that you want the new user to have. For example, if you only want the new user to be able to create repositories, click Create Repositories . Important Permission control is defined by the team role within an organization and must be configured regardless of the options selected here. For example, the user that you are assigning the OAuth token to must have the Admin team role to use administrative API endpoints. Solely checking the Super User Access box does not actually grant the user this permission. Superusers must be configured via the config.yaml file and the box must be checked here. Click Assign token . A popup box appears that confirms authorization with the following message and shows you the approved permissions: This will prompt user <username> to generate a token with the following permissions: repo:create Click Assign token in the popup box. You are redirected to a new page that displays the following message: Token assigned successfully Verification After reassigning an OAuth token, the assigned user must accept the token to receive the bearer token, which is required to use API endpoints. Request that the assigned user logs into the Red Hat Quay registry. After they have logged in, they must click their username under Users and Organizations . In the navigation pane, they must click External Logins And Applications . Under Authorized Applications , they must confirm the application by clicking Authorize Application .
They are directed to a new page where they must reconfirm by clicking Authorize Application . They are redirected to a new page that reveals their bearer token. They must save this bearer token, as it cannot be viewed again. 1.5. Accessing your Quay API from a web browser By enabling Swagger, you can access the API for your own Red Hat Quay instance through a web browser. The Red Hat Quay API explorer is exposed via the Swagger UI at the following URL: This way of accessing the API does not include the superuser endpoints that are available on Red Hat Quay installations. Here is an example of accessing a Red Hat Quay API interface running on the local system by running the swagger-ui container image: With the swagger-ui container running, open your web browser to localhost port 8888 to view API endpoints via the swagger-ui container. To avoid errors in the log such as "API calls must be invoked with an X-Requested-With header if called from a browser," add the following line to the config.yaml on all nodes in the cluster and restart Red Hat Quay: 1.6. Accessing the Red Hat Quay API from the command line You can use the curl command to GET, PUT, POST, or DELETE settings via the API for your Red Hat Quay cluster. Replace <token> with the OAuth access token you created earlier to get or change settings in the following examples. | [
"https://docs.quay.io/api/swagger/",
"FEATURE_REFERRERS_API: true",
"echo -n '<username>:<password>' | base64",
"abcdeWFkbWluOjE5ODlraWROZXQxIQ==",
"curl --location '<quay-server.example.com>/v2/auth?service=<quay-server.example.com>&scope=repository:quay/listocireferrs:pull,push' --header 'Authorization: Basic <base64_username:password_encode_token>' -k | jq",
"{ \"token\": \"<example_secret> }",
"FEATURE_ASSIGN_OAUTH_TOKEN: true",
"This will prompt user <username> to generate a token with the following permissions: repo:create",
"Token assigned successfully",
"https://<yourquayhost>/api/v1/discovery.",
"export SERVER_HOSTNAME=<yourhostname> sudo podman run -p 8888:8080 -e API_URL=https://USDSERVER_HOSTNAME:8443/api/v1/discovery docker.io/swaggerapi/swagger-ui",
"BROWSER_API_CALLS_XHR_ONLY: false"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/red_hat_quay_api_guide/using-the-api |
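As a hedged illustration of the command-line access described in section 1.6, the call below lists users through a superuser endpoint; the endpoint path is an example only, and the host and token are placeholders you must replace.
# Illustrative request -- substitute your own host, token, and endpoint
curl -s -X GET \
  -H "Authorization: Bearer <token>" \
  "https://<quay-server.example.com>/api/v1/superuser/users/" | jq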
5.223. openswan | 5.223. openswan 5.223.1. RHBA-2012:1305 - openswan bug fix update Updated openswan packages that fix a bug are now available for Red Hat Enterprise Linux 6. Openswan is a free implementation of IPsec (Internet Protocol Security) and IKE (Internet Key Exchange) for Linux. The openswan packages contain daemons and user-space tools for setting up Openswan. It supports the NETKEY/XFRM IPsec kernel stack that exists in the default Linux kernel. Openswan 2.6 and later also supports IKEv2 (Internet Key Exchange Protocol version 2), which is defined in RFC5996. Bug Fix BZ# 852454 When a tunnel was established between two IPsec hosts (say host1 and host2) utilizing DPD (Dead Peer Detection), and if host2 went offline while host1 continued to transmit data, host1 continually queued multiple phase 2 requests after the DPD action. When host2 came back online, the stack of pending phase 2 requests was established, leaving a new IPsec SA (Security Association), and a large group of extra SA's that consumed system resources and eventually expired. This update ensures that openswan has just a single pending phase 2 request during the time that host2 is down, and when host2 comes back up, only a single new IPsec SA is established, thus preventing this bug. All users of openswan are advised to upgrade to these updated packages, which fix this bug. 5.223.2. RHBA-2012:1069 - openswan bug fix update Updated openswan packages that fix two bugs are now available for Red Hat Enterprise Linux 6. Openswan is a free implementation of IPsec (internet Protocol Security) and IKE (Internet Key Exchange) for Linux. The openswan packages contain the daemons and user-space tools for setting up Openswan. It supports the NETKEY/XFRM IPsec kernel stack that exists in the default Linux kernel. Openswan 2.6 and later also supports IKEv2 (Internet Key Exchange Protocol Version 2), which is defined in RFC5996. Bug Fixes BZ# 834660 According to the RFC 5996 standard, reserved fields must be ignored on receipt irrespective of their value. Previously, however, the contents of the reserved fields was not being ignored on receipt for some payloads. Consequently, Openswan reported an error message and IKE negotiation failed. With this update, Openswan has been modified to ignore the reserved fields and IKE negotiation succeeds regardless of the reserved field value. BZ# 834662 When a connection was configured in transport mode, Openswan did not pass information about traffic selectors to the NETKEY/XFRM IPsec kernel stack during the setup of security associations (SAs). Consequently, the information was not available in the output of the "ip xfrm state" command. With this update, Openswan correctly passes the traffic selectors information to the kernel when SAs are setup in transport mode. All users of openswan are advised to upgrade to these updated packages, which fix these bugs. 5.223.3. RHBA-2012:0916 - openswan bug fix update Updated openswan packages that fix several bugs are now available for Red Hat Enterprise Linux 6. Openswan is a free implementation of IPsec (Internet Protocol Security) and IKE (Internet Key Exchange) for Linux. The openswan package contains the daemons and user space tools for setting up Openswan. It supports the NETKEY/XFRM IPsec kernel stack that exists in the default Linux kernel. Openswan 2.6.x also supports IKEv2 ( RFC4306 ). Bug Fixes BZ# 768162 Previously, Openswan sometimes generated a KE payload that was 1 byte shorter than specified by the Diffie-Hellman algorithm. 
Consequently, IKE renegotiation failed at random intervals. An error message in the following format was logged: This update checks the length of the generated key and if it is shorter than required, leading zero bytes are added. BZ# 768442 Older versions of kernel required the output length of the HMAC hash function to be truncated to 96 bits, therefore Openswan previously worked with 96-bit truncation length when using the HMAC-SHA2-256 algorithm. However, newer kernels require the 128-bit HMAC truncation length, which is as per the RFC4868 specification. Consequently, this difference could cause incompatible SAs to be set on IKE endpoints due to one endpoint using 96-bit and the other 128-bit output length of the hash function. This update modifies the underlying code so that Openswan now complies with RFC4868 and adds support for the new kernel configuration parameter, sha2_truncbug. If the sha2_truncbug parameter is set to yes , Openswan now passes the correct key length to the kernel, which ensures interoperability between older and newer kernels. BZ# 771457 When processing an IKE_SA_INIT exchange and the RESERVED field of the IKE_SA_INIT request or response messages was modified, Openswan did not ignore the field as expected according to the IKEv2 RFC5996 specification. Consequently, IKE_SA_INIT messages with reserved fields set were processed as erroneous messages by Openswan and the IKE_SA_INIT exchange failed. With this update, Openswan has been modified to ignore reserved fields as expected and IKE_SA_INIT exchanges succeed in this scenario. BZ# 771460 When processing an IKE_AUTH exchange and the RESERVED field of the IKE_AUTH request or response messages was modified, Openswan did not ignore the field as expected according to the IKEv2 RFC5996 specification. Consequently, the IKE_AUTH messages were processed as erroneous messages by Openswan and the IKE_AUTH exchange failed. With this update, Openswan has been modified to ignore reserved fields as expected and IKE_AUTH exchanges succeed in this scenario. BZ# 771461 Openswan incorrectly processed traffic selector messages proposed by the responder (the endpoint responding to an initiated exchange) by failing to confine them to a subset of the initially proposed traffic selectors. As a consequence, Openswan set up CHILD security associations (SAs) incorrectly. With this update, Openswan reduces the set of traffic selectors correctly, and sets up IKE CHILD SAs accordingly. BZ# 771463 Previously, Openswan did not behave in accordance with the IKEv2 RFC5996 specification and ignored IKE_AUTH messages that contained an unrecognized " Notify " payload. This resulted in IKE SAs being set up successfully. With this update, Openswan processes any unrecognized Notify payload as an error and IKE SA setup fails as expected. BZ# 771464 When processing an INFORMATIONAL exchange, the responder previously did not send an INFORMATIONAL response message as expected in reaction to the INFORMATIONAL request message sent by the initiator. As a consequence, the INFORMATIONAL exchange failed. This update corrects Openswan so that the responder now sends an INFORMATIONAL response message after every INFORMATIONAL request message received, and the INFORMATIONAL exchange succeeds as expected in this scenario. BZ# 771465 When processing an INFORMATIONAL exchange with a Delete payload, the responder previously did not send an INFORMATIONAL response message as expected in reaction to the INFORMATIONAL request message sent by the initiator. 
As a consequence, the INFORMATIONAL exchange failed and the initiator did not delete IKE SAs. This updates corrects Openswan so that the responder now sends an INFORMATIONAL response message and the initiator deletes IKE SAs as expected in this scenario. BZ# 771466 When the responder received an INFORMATIONAL request with a " Delete " payload for a CHILD SA, Openswan did not process the request correctly and did not send the INFORMATIONAL response message to the initiator as expected according to the RFC5996 specification. Consequently, the responder was not aware of the request and only the initiator's CHILD SA was deleted. With this update, Openswan sends the response message as expected and the CHILD SA is deleted properly on both endpoints. BZ# 771467 Openswan did not ignore the minor version number of the IKE_SA_INIT request messages as required by the RFC5996 specification. Consequently, if the minor version number of the request was higher than the minor version number of the IKE protocol used by the receiving peer, Openswan processed the IKE_SA_INIT messages as erroneous and the IKE_SA_INIT exchange failed. With this update, Openswan has been modified to ignore the Minor Version fields of the IKE_SA_INIT requests as expected and the IKE_SA_INIT exchange succeeds in this scenario. BZ# 771470 The Openswan IKEv2 implementation did not correctly process an IKE_SA_INIT message containing an INVALID_KE_PAYLOAD " Notify " payload. With this fix, Openswan now sends the INVALID_KE_PAYLOAD notify message back to the peer so that IKE_SA_INIT can restart with the correct KE payload. BZ# 771472 Openswan incorrectly processed traffic selector messages proposed by the initiator (the endpoint which started an exchange) by failing to confine them to a subset of the initially proposed traffic selectors. As a consequence, Openswan set up CHILD SAs incorrectly. With this update, Openswan reduces the set of traffic selectors correctly, and sets up IKE CHILD SAs accordingly. BZ# 771473 Previously, Openswan did not respond to INFORMATIONAL requests with no payloads that are used for dead-peer detection. Consequently, the initiator considered the responder to be a dead peer and deleted the respective IKE SAs. This update modifies Openswan so that an empty INFORMATIONAL response message is now sent to the initiator as expected, and the initiator no longer incorrectly deletes IKE SAs in this scenario. BZ# 771475 When processing an INFORMATIONAL exchange and the RESERVED field of the INFORMATIONAL request or response messages was modified, Openswan did not ignore the field as expected according to the IKEv2 RFC5996 specification. Consequently, the INFORMATIONAL messages were processed as erroneous by Openswan, and the INFORMATIONAL exchange failed. With this update, Openswan has been modified to ignore reserved fields as expected and INFORMATIONAL exchanges succeed in this scenario. BZ# 795842 When the initiator received an INFORMATIONAL request with a " Delete " payload for an IKE SA, Openswan did not process the request correctly and did not send the INFORMATIONAL response message to the responder as expected according to the RFC5996 specification. Consequently, the initiator was not aware of the request and only the responder's IKE SA was deleted. With this update, Openswan sends the response message as expected and the IKE SA is deleted properly on both endpoints. 
BZ# 795850 IKEv2 requires each IKE message to have a sequence number for matching a request and response when re-transmitting the message during the IKE exchange. Previously, Openswan incremented sequence numbers incorrectly so that IKE messages were processed in the wrong order. As a consequence, any messages sent by the responder were not processed correctly and any subsequent exchange failed. This update modifies Openswan to increment sequence numbers in accordance with the RFC5996 specification so that IKE messages are matched correctly and exchanges succeed as expected in this scenario. Users of openswan should upgrade to this updated package, which fixes these bugs. 5.223.4. RHBA-2013:1161 - openswan bug fix update Updated openswan packages that fix one bug are now available for Red Hat Enterprise Linux 6 Extended Update Support. Openswan is a free implementation of Internet Protocol Security (IPsec) and Internet Key Exchange (IKE). IPsec uses strong cryptography to provide both authentication and encryption services. These services allow you to build secure tunnels through untrusted networks. Bug Fix BZ# 983451 The openswan package for Internet Protocol Security (IPsec) contains two diagnostic commands, "ipsec barf" and "ipsec look", that can cause the iptables kernel modules for NAT and IP connection tracking to be loaded. On very busy systems, loading such kernel modules can result in severely degraded performance or lead to a crash when the kernel runs out of resources. With this update, the diagnostic commands do not cause loading of the NAT and IP connection tracking modules. This update does not affect systems that already use IP connection tracking or NAT as the iptables and ip6tables services will already have loaded these kernel modules. Users of openswan are advised to upgrade to these updated packages, which fix this bug. | [
"next payload type of ISAKMP Identification Payload has an unknown value:"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/openswan |
14.11.3. Stopping and Deleting Storage Pools The pool-destroy pool-or-uuid command stops a storage pool. Once stopped, libvirt will no longer manage the pool but the raw data contained in the pool is not changed, and can be later recovered with the pool-create command. The pool-delete pool-or-uuid command destroys the resources used by the specified storage pool. It is important to note that this operation is non-recoverable and non-reversible. However, the pool structure will still exist after this command, ready to accept the creation of new storage volumes. The pool-undefine pool-or-uuid command undefines the configuration for an inactive pool. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-storage_pool_commands-stopping_and_deleting_storage_pools
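A short sketch of the three commands in sequence, run through virsh like the other storage pool commands in this chapter; the pool name guest_images is a placeholder.
virsh pool-destroy guest_images    # stop the pool; the on-disk data is left untouched
virsh pool-delete guest_images     # irreversibly remove the resources used by the pool
virsh pool-undefine guest_images   # remove the persistent configuration of the inactive pool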
Providing Feedback on Red Hat Documentation | Providing Feedback on Red Hat Documentation We appreciate your input on our documentation. Please let us know how we could make it better. You can submit feedback by filing a ticket in Bugzilla: Navigate to the Bugzilla website. In the Component field, use Documentation . In the Description field, enter your suggestion for improvement. Include a link to the relevant parts of the documentation. Click Submit Bug . | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/deploying_red_hat_satellite_on_amazon_web_services/providing-feedback-on-red-hat-documentation_deploying-on-aws |
probe::netdev.register | probe::netdev.register Name probe::netdev.register - Called when the device is registered Synopsis netdev.register Values dev_name The device that is going to be registered | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-netdev-register |
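A minimal SystemTap one-liner that uses this probe point; the printf format is illustrative, and the script simply reports each dev_name as devices are registered.
# Run as root; press Ctrl+C to stop tracing
stap -e 'probe netdev.register { printf("registered: %s\n", dev_name) }'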
4.2. Finish Configuring the Diskless Environment | 4.2. Finish Configuring the Diskless Environment To use the graphical version of the Network Booting Tool , you must be running the X Window System, have root privileges, and have the system-config-netboot RPM package installed. To start the Network Booting Tool from the desktop, go to Applications (the main menu on the panel) => System Settings => Server Settings => Network Booting Service . Or, type the command system-config-netboot at a shell prompt (for example, in an XTerm or a GNOME terminal ). If starting the Network Booting Tool for the first time, select Diskless from the First Time Druid . Otherwise, select Configure => Diskless from the pull-down menu, and then click Add . A wizard appears to step you through the process: Click Forward on the first page. On the Diskless Identifier page, enter a Name and Description for the diskless environment. Click Forward . Enter the IP address or domain name of the NFS server configured in Section 4.1, "Configuring the NFS Server" as well as the directory exported as the diskless environment. Click Forward . The kernel versions installed in the diskless environment are listed. Select the kernel version to boot on the diskless system. Click Apply to finish the configuration. After clicking Apply , the diskless kernel and image file are created based on the kernel selected. They are copied to the PXE boot directory /tftpboot/linux-install/ <os-identifier> / . The directory snapshot/ is created in the same directory as the root/ directory (for example, /diskless/i386/RHEL4-AS/snapshot/ ) with a file called files in it. This file contains a list of files and directories that must be read/write for each diskless system. Do not modify this file. If additional entries must be added to the list, create a files.custom file in the same directory as the files file, and add each additional file or directory on a separate line. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Diskless_Environments-Finish_Configuring_the_Diskless_Environment |
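If extra read/write entries are needed, the files.custom file described above can be created from the shell; the snapshot path matches the chapter's example and the listed paths are hypothetical.
cat >> /diskless/i386/RHEL4-AS/snapshot/files.custom <<'EOF'
/etc/sysconfig/myapp
/var/lib/myapp
EOF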
Chapter 1. Ceph block devices and OpenStack The Red Hat Enterprise Linux OpenStack Platform Director provides two methods for using Ceph as a backend for Glance, Cinder, Cinder Backup and Nova: OpenStack creates the Ceph storage cluster: OpenStack Director can create a Ceph storage cluster. This requires configuring templates for the Ceph OSDs. OpenStack handles the installation and configuration of Ceph hosts. With this scenario, OpenStack will install the Ceph monitors with the OpenStack controller hosts. OpenStack connects to an existing Ceph storage cluster: OpenStack Director, using Red Hat OpenStack Platform 9 and higher, can connect to a Ceph monitor and configure the Ceph storage cluster for use as a backend for OpenStack. The foregoing methods are the preferred methods for configuring Ceph as a backend for OpenStack, because they will handle much of the installation and configuration automatically. This document details the manual procedure for configuring Ceph, QEMU, libvirt and OpenStack to use Ceph as a backend. This document is intended for those who do not intend to use the RHEL OSP Director. Note A running Ceph storage cluster and at least one OpenStack host is required to use Ceph block devices as a backend for OpenStack. Three parts of OpenStack integrate with Ceph's block devices: Images: OpenStack Glance manages images for VMs. Images are immutable. OpenStack treats images as binary blobs and downloads them accordingly. Volumes: Volumes are block devices. OpenStack uses volumes to boot VMs, or to attach volumes to running VMs. OpenStack manages volumes using Cinder services. Ceph can serve as a back end for OpenStack Cinder and Cinder Backup. Guest Disks: Guest disks are guest operating system disks. By default, when booting a virtual machine, its disk appears as a file on the file system of the hypervisor, under the /var/lib/nova/instances/<uuid>/ directory. OpenStack Glance can store images in a Ceph block device, and can use Cinder to boot a virtual machine using a copy-on-write clone of an image. Important Ceph doesn't support QCOW2 for hosting a virtual machine disk. To boot virtual machines, either with an ephemeral back end or by booting from a volume, the Glance image format must be RAW. OpenStack can use Ceph for images, volumes, or virtual machine guest disks. There is no requirement for using all three. Additional Resources See the Red Hat OpenStack Platform documentation for additional details. | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/block_device_to_openstack_guide/ceph-block-devices-and-openstack-rbd-osp
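Because Glance images must be RAW for booting, a QCOW2 cloud image is typically converted before upload; the file names below are placeholders.
# Convert a QCOW2 image to RAW before uploading it to Glance
qemu-img convert -f qcow2 -O raw rhel-guest-image.qcow2 rhel-guest-image.raw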
Chapter 12. Live migration | Chapter 12. Live migration 12.1. Virtual machine live migration 12.1.1. About live migration Live migration is the process of moving a running virtual machine instance (VMI) to another node in the cluster without interrupting the virtual workload or access. If a VMI uses the LiveMigrate eviction strategy, it automatically migrates when the node that the VMI runs on is placed into maintenance mode. You can also manually start live migration by selecting a VMI to migrate. You can use live migration if the following conditions are met: Shared storage with ReadWriteMany (RWX) access mode. Sufficient RAM and network bandwidth. If the virtual machine uses a host model CPU, the nodes must support the virtual machine's host model CPU. By default, live migration traffic is encrypted using Transport Layer Security (TLS). 12.1.2. Additional resources Migrating a virtual machine instance to another node Monitoring live migration Live migration limiting Customizing the storage profile 12.2. Live migration limits and timeouts Apply live migration limits and timeouts so that migration processes do not overwhelm the cluster. Configure these settings by editing the HyperConverged custom resource (CR). 12.2.1. Configuring live migration limits and timeouts Configure live migration limits and timeouts for the cluster by updating the HyperConverged custom resource (CR), which is located in the openshift-cnv namespace. Procedure Edit the HyperConverged CR and add the necessary live migration parameters. USD oc edit hco -n openshift-cnv kubevirt-hyperconverged Example configuration file apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: liveMigrationConfig: 1 bandwidthPerMigration: 64Mi completionTimeoutPerGiB: 800 parallelMigrationsPerCluster: 5 parallelOutboundMigrationsPerNode: 2 progressTimeout: 150 1 In this example, the spec.liveMigrationConfig array contains the default values for each field. Note You can restore the default value for any spec.liveMigrationConfig field by deleting that key/value pair and saving the file. For example, delete progressTimeout: <value> to restore the default progressTimeout: 150 . 12.2.2. Cluster-wide live migration limits and timeouts Table 12.1. Migration parameters Parameter Description Default parallelMigrationsPerCluster Number of migrations running in parallel in the cluster. 5 parallelOutboundMigrationsPerNode Maximum number of outbound migrations per node. 2 bandwidthPerMigration Bandwidth limit of each migration, where the value is the quantity of bytes per second. For example, a value of 2048Mi means 2048 MiB/s. 0 [1] completionTimeoutPerGiB The migration is canceled if it has not completed in this time, in seconds per GiB of memory. For example, a virtual machine instance with 6GiB memory times out if it has not completed migration in 4800 seconds. If the Migration Method is BlockMigration , the size of the migrating disks is included in the calculation. 800 progressTimeout The migration is canceled if memory copy fails to make progress in this time, in seconds. 150 The default value of 0 is unlimited. 12.3. Migrating a virtual machine instance to another node Manually initiate a live migration of a virtual machine instance to another node using either the web console or the CLI. Note If a virtual machine uses a host model CPU, you can perform live migration of that virtual machine only between nodes that support its host CPU model. 12.3.1. 
Initiating live migration of a virtual machine instance in the web console Migrate a running virtual machine instance to a different node in the cluster. Note The Migrate action is visible to all users but only admin users can initiate a virtual machine migration. Procedure In the OpenShift Container Platform console, click Virtualization VirtualMachines from the side menu. You can initiate the migration from this page, which makes it easier to perform actions on multiple virtual machines on the same page, or from the VirtualMachine details page where you can view comprehensive details of the selected virtual machine: Click the Options menu beside the virtual machine and select Migrate . Click the virtual machine name to open the VirtualMachine details page and click Actions Migrate . Click Migrate to migrate the virtual machine to another node. 12.3.2. Initiating live migration of a virtual machine instance in the CLI Initiate a live migration of a running virtual machine instance by creating a VirtualMachineInstanceMigration object in the cluster and referencing the name of the virtual machine instance. Procedure Create a VirtualMachineInstanceMigration configuration file for the virtual machine instance to migrate. For example, vmi-migrate.yaml : apiVersion: kubevirt.io/v1 kind: VirtualMachineInstanceMigration metadata: name: migration-job spec: vmiName: vmi-fedora Create the object in the cluster by running the following command: USD oc create -f vmi-migrate.yaml The VirtualMachineInstanceMigration object triggers a live migration of the virtual machine instance. This object exists in the cluster for as long as the virtual machine instance is running, unless manually deleted. 12.3.3. Additional resources Monitoring live migration Cancelling the live migration of a virtual machine instance 12.4. Migrating a virtual machine over a dedicated additional network You can configure a dedicated Multus network for live migration. A dedicated network minimizes the effects of network saturation on tenant workloads during live migration. 12.4.1. Configuring a dedicated secondary network for virtual machine live migration To configure a dedicated secondary network for live migration, you must first create a bridge network attachment definition for the openshift-cnv namespace by using the CLI. Then, add the name of the NetworkAttachmentDefinition object to the HyperConverged custom resource (CR). Prerequisites You installed the OpenShift CLI ( oc ). You logged in to the cluster as a user with the cluster-admin role. The Multus Container Network Interface (CNI) plugin is installed on the cluster. Every node on the cluster has at least two Network Interface Cards (NICs), and the NICs to be used for live migration are connected to the same VLAN. The virtual machine (VM) is running with the LiveMigrate eviction strategy. Procedure Create a NetworkAttachmentDefinition manifest. Example configuration file apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: my-secondary-network 1 namespace: openshift-cnv 2 spec: config: '{ "cniVersion": "0.3.1", "name": "migration-bridge", "type": "macvlan", "master": "eth1", 3 "mode": "bridge", "ipam": { "type": "whereabouts", 4 "range": "10.200.5.0/24" 5 } }' 1 The name of the NetworkAttachmentDefinition object. 2 The namespace where the NetworkAttachmentDefinition object resides. This must be openshift-cnv . 3 The name of the NIC to be used for live migration. 
4 The name of the CNI plugin that provides the network for this network attachment definition. 5 The IP address range for the secondary network. This range must not have any overlap with the IP addresses of the main network. Open the HyperConverged CR in your default editor by running the following command: oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Add the name of the NetworkAttachmentDefinition object to the spec.liveMigrationConfig stanza of the HyperConverged CR. For example: Example configuration file apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: liveMigrationConfig: completionTimeoutPerGiB: 800 network: my-secondary-network 1 parallelMigrationsPerCluster: 5 parallelOutboundMigrationsPerNode: 2 progressTimeout: 150 ... 1 The name of the Multus NetworkAttachmentDefinition object to be used for live migrations. Save your changes and exit the editor. The virt-handler pods restart and connect to the secondary network. Verification When the node that the virtual machine runs on is placed into maintenance mode, the VM automatically migrates to another node in the cluster. You can verify that the migration occurred over the secondary network and not the default pod network by checking the target IP address in the virtual machine instance (VMI) metadata. oc get vmi <vmi_name> -o jsonpath='{.status.migrationState.targetNodeAddress}' 12.4.2. Selecting a dedicated network by using the web console You can select a dedicated network for live migration by using the OpenShift Container Platform web console. Prerequisites You configured a Multus network for live migration. Procedure Navigate to Virtualization > Overview in the OpenShift Container Platform web console. Click the Settings tab and then click Live migration . Select the network from the Live migration network list. 12.4.3. Additional resources Live migration limits and timeouts 12.5. Cancelling the live migration of a virtual machine instance Cancel the live migration so that the virtual machine instance remains on the original node. You can cancel a live migration from either the web console or the CLI. 12.5.1. Cancelling live migration of a virtual machine instance in the web console You can cancel the live migration of a virtual machine instance in the web console. Procedure In the OpenShift Container Platform console, click Virtualization VirtualMachines from the side menu. Click the Options menu beside a virtual machine and select Cancel Migration . 12.5.2. Cancelling live migration of a virtual machine instance in the CLI Cancel the live migration of a virtual machine instance by deleting the VirtualMachineInstanceMigration object associated with the migration. Procedure Delete the VirtualMachineInstanceMigration object that triggered the live migration, migration-job in this example: USD oc delete vmim migration-job 12.6. Configuring virtual machine eviction strategy The LiveMigrate eviction strategy ensures that a virtual machine instance is not interrupted if the node is placed into maintenance or drained. Virtual machine instances with this eviction strategy will be live migrated to another node. 12.6.1. Configuring custom virtual machines with the LiveMigration eviction strategy You only need to configure the LiveMigration eviction strategy on custom virtual machines. Common templates have this eviction strategy configured by default. 
Procedure Add the evictionStrategy: LiveMigrate option to the spec.template.spec section in the virtual machine configuration file. This example uses oc edit to update the relevant snippet of the VirtualMachine configuration file: USD oc edit vm <custom-vm> -n <my-namespace> apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: custom-vm spec: template: spec: evictionStrategy: LiveMigrate ... Restart the virtual machine for the update to take effect: USD virtctl restart <custom-vm> -n <my-namespace> 12.7. Configuring live migration policies You can define different migration configurations for specified groups of virtual machine instances (VMIs) by using a live migration policy. Important Live migration policy is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . To configure a live migration policy by using the web console, see the MigrationPolicies page documentation . 12.7.1. Configuring a live migration policy from the command line Use the MigrationPolicy custom resource definition (CRD) to define migration policies for one or more groups of selected virtual machine instances (VMIs). You can specify groups of VMIs by using any combination of the following: Virtual machine instance labels such as size , os , gpu , and other VMI labels. Namespace labels such as priority , bandwidth , hpc-workload , and other namespace labels. For the policy to apply to a specific group of VMIs, all labels on the group of VMIs must match the labels in the policy. Note If multiple live migration policies apply to a VMI, the policy with the highest number of matching labels takes precedence. If multiple policies meet this criteria, the policies are sorted by lexicographic order of the matching labels keys, and the first one in that order takes precedence. Procedure Create a MigrationPolicy CRD for your specified group of VMIs. The following example YAML configures a group with the labels hpc-workloads:true , xyz-workloads-type: "" , workload-type: db , and operating-system: "" : apiVersion: migrations.kubevirt.io/v1alpha1 kind: MigrationPolicy metadata: name: my-awesome-policy spec: # Migration Configuration allowAutoConverge: true bandwidthPerMigration: 217Ki completionTimeoutPerGiB: 23 allowPostCopy: false # Matching to VMIs selectors: namespaceSelector: 1 hpc-workloads: "True" xyz-workloads-type: "" virtualMachineInstanceSelector: 2 workload-type: "db" operating-system: "" 1 Use namespaceSelector to define a group of VMIs by using namespace labels. 2 Use virtualMachineInstanceSelector to define a group of VMIs by using VMI labels. | [
"oc edit hco -n openshift-cnv kubevirt-hyperconverged",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: liveMigrationConfig: 1 bandwidthPerMigration: 64Mi completionTimeoutPerGiB: 800 parallelMigrationsPerCluster: 5 parallelOutboundMigrationsPerNode: 2 progressTimeout: 150",
"apiVersion: kubevirt.io/v1 kind: VirtualMachineInstanceMigration metadata: name: migration-job spec: vmiName: vmi-fedora",
"oc create -f vmi-migrate.yaml",
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: my-secondary-network 1 namespace: openshift-cnv 2 spec: config: '{ \"cniVersion\": \"0.3.1\", \"name\": \"migration-bridge\", \"type\": \"macvlan\", \"master\": \"eth1\", 3 \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", 4 \"range\": \"10.200.5.0/24\" 5 } }'",
"edit hyperconverged kubevirt-hyperconverged -n openshift-cnv",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: liveMigrationConfig: completionTimeoutPerGiB: 800 network: my-secondary-network 1 parallelMigrationsPerCluster: 5 parallelOutboundMigrationsPerNode: 2 progressTimeout: 150",
"get vmi <vmi_name> -o jsonpath='{.status.migrationState.targetNodeAddress}'",
"oc delete vmim migration-job",
"oc edit vm <custom-vm> -n <my-namespace>",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: custom-vm spec: template: spec: evictionStrategy: LiveMigrate",
"virtctl restart <custom-vm> -n <my-namespace>",
"apiVersion: migrations.kubevirt.io/v1alpha1 kind: MigrationPolicy metadata: name: my-awesome-policy spec: # Migration Configuration allowAutoConverge: true bandwidthPerMigration: 217Ki completionTimeoutPerGiB: 23 allowPostCopy: false # Matching to VMIs selectors: namespaceSelector: 1 hpc-workloads: \"True\" xyz-workloads-type: \"\" virtualMachineInstanceSelector: 2 workload-type: \"db\" operating-system: \"\""
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/virtualization/live-migration |
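A small sketch for checking on a migration started with the vmi-migrate.yaml example in this chapter; the vmi-fedora name comes from that example, and the jsonpath expression assumes the same migrationState status fields the chapter already references.

# List VirtualMachineInstanceMigration objects in the current namespace (vmim is the short name used by oc delete vmim above)
oc get vmim

# Inspect the migration state recorded on the VMI once the migration runs
oc get vmi vmi-fedora -o jsonpath='{.status.migrationState}'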
Chapter 7. Storing Data Grid Server credentials in keystores | Chapter 7. Storing Data Grid Server credentials in keystores External services require credentials to authenticate with Data Grid Server. To protect sensitive text strings such as passwords, add them to a credential keystore rather than directly in Data Grid Server configuration files. You can then configure Data Grid Server to decrypt passwords for establishing connections with services such as databases or LDAP directories. Important Plain-text passwords in USDRHDG_HOME/server/conf are unencrypted. Any user account with read access to the host filesystem can view plain-text passwords. While credential keystores are password-protected stores for encrypted passwords, any user account with write access to the host filesystem can tamper with the keystore itself. To completely secure Data Grid Server credentials, you should grant read-write access only to user accounts that can configure and run Data Grid Server. 7.1. Setting up credential keystores Create keystores that encrypt credentials for Data Grid Server access. A credential keystore contains at least one alias that is associated with an encrypted password. After you create a keystore, you specify the alias in a connection configuration such as a database connection pool. Data Grid Server then decrypts the password for that alias from the keystore when the service attempts authentication. You can create as many credential keystores with as many aliases as required. Note As a security best practice, keystores should be readable only by the user who runs the process for Data Grid Server. Procedure Open a terminal in USDRHDG_HOME . Create a keystore and add credentials to it with the credentials command. Tip By default, keystores are of type PKCS12. Run help credentials for details on changing keystore defaults. The following example shows how to create a keystore that contains an alias of "dbpassword" for the password "changeme". When you create a keystore you also specify a password to access the keystore with the -p argument. Linux Microsoft Windows Check that the alias is added to the keystore. Open your Data Grid Server configuration for editing. Configure Data Grid to use the credential keystore. Add a credential-stores section to the security configuration. Specify the name and location of the credential keystore. Specify the password to access the credential keystore with the clear-text-credential configuration. Note Instead of adding a clear-text password for the credential keystore to your Data Grid Server configuration you can use an external command or masked password for additional security. You can also use a password in one credential store as the master password for another credential store. Reference the credential keystore in configuration that Data Grid Server uses to connect with an external system such as a datasource or LDAP server. Add a credential-reference section. Specify the name of the credential keystore with the store attribute. Specify the password alias with the alias attribute. Tip Attributes in the credential-reference configuration are optional. store is required only if you have multiple keystores. alias is required only if the keystore contains multiple password aliases. Save the changes to your configuration. 7.2. Securing passwords for credential keystores Data Grid Server requires a password to access credential keystores. 
You can add that password to Data Grid Server configuration in clear text or, as an added layer of security, you can use an external command for the password or you can mask the password. Prerequisites Set up a credential keystore for Data Grid Server. Procedure Do one of the following: Use the credentials mask command to obscure the password, for example: Masked passwords use Password Based Encryption (PBE) and must be in the following format in your Data Grid Server configuration: <MASKED_VALUE;SALT;ITERATION>. Use an external command that provides the password as standard output. An external command can be any executable, such as a shell script or binary, that uses java.lang.Runtime#exec(java.lang.String) . If the command requires parameters, provide them as a space-separated list of strings. 7.3. Credential keystore configuration You can add credential keystores to Data Grid Server configuration and use clear-text passwords, masked passwords, or external commands that supply passwords. Credential keystore with a clear text password XML <server xmlns="urn:infinispan:server:14.0"> <security> <credential-stores> <credential-store name="credentials" path="credentials.pfx"> <clear-text-credential clear-text="secret1234!"/> </credential-store> </credential-stores> </security> </server> JSON { "server": { "security": { "credential-stores": [{ "name": "credentials", "path": "credentials.pfx", "clear-text-credential": { "clear-text": "secret1234!" } }] } } } YAML server: security: credentialStores: - name: credentials path: credentials.pfx clearTextCredential: clearText: "secret1234!" Credential keystore with a masked password XML <server xmlns="urn:infinispan:server:14.0"> <security> <credential-stores> <credential-store name="credentials" path="credentials.pfx"> <masked-credential masked="1oTMDZ5JQj6DVepJviXMnX;pepper99;100"/> </credential-store> </credential-stores> </security> </server> JSON { "server": { "security": { "credential-stores": [{ "name": "credentials", "path": "credentials.pfx", "masked-credential": { "masked": "1oTMDZ5JQj6DVepJviXMnX;pepper99;100" } }] } } } YAML server: security: credentialStores: - name: credentials path: credentials.pfx maskedCredential: masked: "1oTMDZ5JQj6DVepJviXMnX;pepper99;100" External command passwords XML <server xmlns="urn:infinispan:server:14.0"> <security> <credential-stores> <credential-store name="credentials" path="credentials.pfx"> <command-credential command="/path/to/executable.sh arg1 arg2"/> </credential-store> </credential-stores> </security> </server> JSON { "server": { "security": { "credential-stores": [{ "name": "credentials", "path": "credentials.pfx", "command-credential": { "command": "/path/to/executable.sh arg1 arg2" } }] } } } YAML server: security: credentialStores: - name: credentials path: credentials.pfx commandCredential: command: "/path/to/executable.sh arg1 arg2" 7.4. Credential keystore references After you add credential keystores to Data Grid Server you can reference them in connection configurations. Datasource connections XML <server xmlns="urn:infinispan:server:14.0"> <security> <credential-stores> <credential-store name="credentials" path="credentials.pfx"> <clear-text-credential clear-text="secret1234!"/> </credential-store> </credential-stores> </security> <data-sources> <data-source name="postgres" jndi-name="jdbc/postgres"> <!-- Specifies the database username in the connection factory. 
--> <connection-factory driver="org.postgresql.Driver" username="dbuser" url="USD{org.infinispan.server.test.postgres.jdbcUrl}"> <!-- Specifies the credential keystore that contains an encrypted password and the alias for it. --> <credential-reference store="credentials" alias="dbpassword"/> </connection-factory> <connection-pool max-size="10" min-size="1" background-validation="1000" idle-removal="1" initial-size="1" leak-detection="10000"/> </data-source> </data-sources> </server> JSON { "server": { "security": { "credential-stores": [{ "name": "credentials", "path": "credentials.pfx", "clear-text-credential": { "clear-text": "secret1234!" } }], "data-sources": [{ "name": "postgres", "jndi-name": "jdbc/postgres", "connection-factory": { "driver": "org.postgresql.Driver", "username": "dbuser", "url": "USD{org.infinispan.server.test.postgres.jdbcUrl}", "credential-reference": { "store": "credentials", "alias": "dbpassword" } } }] } } } YAML server: security: credentialStores: - name: credentials path: credentials.pfx clearTextCredential: clearText: "secret1234!" dataSources: - name: postgres jndiName: jdbc/postgres connectionFactory: driver: org.postgresql.Driver username: dbuser url: 'USD{org.infinispan.server.test.postgres.jdbcUrl}' credentialReference: store: credentials alias: dbpassword LDAP connections XML <server xmlns="urn:infinispan:server:14.0"> <security> <credential-stores> <credential-store name="credentials" path="credentials.pfx"> <clear-text-credential clear-text="secret1234!"/> </credential-store> </credential-stores> <security-realms> <security-realm name="default"> <!-- Specifies the LDAP principal in the connection factory. --> <ldap-realm name="ldap" url="ldap://my-ldap-server:10389" principal="uid=admin,ou=People,dc=infinispan,dc=org"> <!-- Specifies the credential keystore that contains an encrypted password and the alias for it. --> <credential-reference store="credentials" alias="ldappassword"/> </ldap-realm> </security-realm> </security-realms> </security> </server> JSON { "server": { "security": { "credential-stores": [{ "name": "credentials", "path": "credentials.pfx", "clear-text-credential": { "clear-text": "secret1234!" } }], "security-realms": [{ "name": "default", "ldap-realm": { "name": "ldap", "url": "ldap://my-ldap-server:10389", "principal": "uid=admin,ou=People,dc=infinispan,dc=org", "credential-reference": { "store": "credentials", "alias": "ldappassword" } } }] } } } YAML server: security: credentialStores: - name: credentials path: credentials.pfx clearTextCredential: clearText: "secret1234!" securityRealms: - name: "default" ldapRealm: name: ldap url: 'ldap://my-ldap-server:10389' principal: 'uid=admin,ou=People,dc=infinispan,dc=org' credentialReference: store: credentials alias: ldappassword | [
"bin/cli.sh credentials add dbpassword -c changeme -p \"secret1234!\"",
"bin\\cli.bat credentials add dbpassword -c changeme -p \"secret1234!\"",
"bin/cli.sh credentials ls -p \"secret1234!\" dbpassword",
"bin/cli.sh credentials mask -i 100 -s pepper99 \"secret1234!\"",
"<server xmlns=\"urn:infinispan:server:14.0\"> <security> <credential-stores> <credential-store name=\"credentials\" path=\"credentials.pfx\"> <clear-text-credential clear-text=\"secret1234!\"/> </credential-store> </credential-stores> </security> </server>",
"{ \"server\": { \"security\": { \"credential-stores\": [{ \"name\": \"credentials\", \"path\": \"credentials.pfx\", \"clear-text-credential\": { \"clear-text\": \"secret1234!\" } }] } } }",
"server: security: credentialStores: - name: credentials path: credentials.pfx clearTextCredential: clearText: \"secret1234!\"",
"<server xmlns=\"urn:infinispan:server:14.0\"> <security> <credential-stores> <credential-store name=\"credentials\" path=\"credentials.pfx\"> <masked-credential masked=\"1oTMDZ5JQj6DVepJviXMnX;pepper99;100\"/> </credential-store> </credential-stores> </security> </server>",
"{ \"server\": { \"security\": { \"credential-stores\": [{ \"name\": \"credentials\", \"path\": \"credentials.pfx\", \"masked-credential\": { \"masked\": \"1oTMDZ5JQj6DVepJviXMnX;pepper99;100\" } }] } } }",
"server: security: credentialStores: - name: credentials path: credentials.pfx maskedCredential: masked: \"1oTMDZ5JQj6DVepJviXMnX;pepper99;100\"",
"<server xmlns=\"urn:infinispan:server:14.0\"> <security> <credential-stores> <credential-store name=\"credentials\" path=\"credentials.pfx\"> <command-credential command=\"/path/to/executable.sh arg1 arg2\"/> </credential-store> </credential-stores> </security> </server>",
"{ \"server\": { \"security\": { \"credential-stores\": [{ \"name\": \"credentials\", \"path\": \"credentials.pfx\", \"command-credential\": { \"command\": \"/path/to/executable.sh arg1 arg2\" } }] } } }",
"server: security: credentialStores: - name: credentials path: credentials.pfx commandCredential: command: \"/path/to/executable.sh arg1 arg2\"",
"<server xmlns=\"urn:infinispan:server:14.0\"> <security> <credential-stores> <credential-store name=\"credentials\" path=\"credentials.pfx\"> <clear-text-credential clear-text=\"secret1234!\"/> </credential-store> </credential-stores> </security> <data-sources> <data-source name=\"postgres\" jndi-name=\"jdbc/postgres\"> <!-- Specifies the database username in the connection factory. --> <connection-factory driver=\"org.postgresql.Driver\" username=\"dbuser\" url=\"USD{org.infinispan.server.test.postgres.jdbcUrl}\"> <!-- Specifies the credential keystore that contains an encrypted password and the alias for it. --> <credential-reference store=\"credentials\" alias=\"dbpassword\"/> </connection-factory> <connection-pool max-size=\"10\" min-size=\"1\" background-validation=\"1000\" idle-removal=\"1\" initial-size=\"1\" leak-detection=\"10000\"/> </data-source> </data-sources> </server>",
"{ \"server\": { \"security\": { \"credential-stores\": [{ \"name\": \"credentials\", \"path\": \"credentials.pfx\", \"clear-text-credential\": { \"clear-text\": \"secret1234!\" } }], \"data-sources\": [{ \"name\": \"postgres\", \"jndi-name\": \"jdbc/postgres\", \"connection-factory\": { \"driver\": \"org.postgresql.Driver\", \"username\": \"dbuser\", \"url\": \"USD{org.infinispan.server.test.postgres.jdbcUrl}\", \"credential-reference\": { \"store\": \"credentials\", \"alias\": \"dbpassword\" } } }] } } }",
"server: security: credentialStores: - name: credentials path: credentials.pfx clearTextCredential: clearText: \"secret1234!\" dataSources: - name: postgres jndiName: jdbc/postgres connectionFactory: driver: org.postgresql.Driver username: dbuser url: 'USD{org.infinispan.server.test.postgres.jdbcUrl}' credentialReference: store: credentials alias: dbpassword",
"<server xmlns=\"urn:infinispan:server:14.0\"> <security> <credential-stores> <credential-store name=\"credentials\" path=\"credentials.pfx\"> <clear-text-credential clear-text=\"secret1234!\"/> </credential-store> </credential-stores> <security-realms> <security-realm name=\"default\"> <!-- Specifies the LDAP principal in the connection factory. --> <ldap-realm name=\"ldap\" url=\"ldap://my-ldap-server:10389\" principal=\"uid=admin,ou=People,dc=infinispan,dc=org\"> <!-- Specifies the credential keystore that contains an encrypted password and the alias for it. --> <credential-reference store=\"credentials\" alias=\"ldappassword\"/> </ldap-realm> </security-realm> </security-realms> </security> </server>",
"{ \"server\": { \"security\": { \"credential-stores\": [{ \"name\": \"credentials\", \"path\": \"credentials.pfx\", \"clear-text-credential\": { \"clear-text\": \"secret1234!\" } }], \"security-realms\": [{ \"name\": \"default\", \"ldap-realm\": { \"name\": \"ldap\", \"url\": \"ldap://my-ldap-server:10389\", \"principal\": \"uid=admin,ou=People,dc=infinispan,dc=org\", \"credential-reference\": { \"store\": \"credentials\", \"alias\": \"ldappassword\" } } }] } } }",
"server: security: credentialStores: - name: credentials path: credentials.pfx clearTextCredential: clearText: \"secret1234!\" securityRealms: - name: \"default\" ldapRealm: name: ldap url: 'ldap://my-ldap-server:10389' principal: 'uid=admin,ou=People,dc=infinispan,dc=org' credentialReference: store: credentials alias: ldappassword"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/data_grid_server_guide/credential-keystores |
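As a hedged example of the external command option described in this chapter, the script below is one possible shape for an executable referenced by the command-credential element; the path and password are placeholders, and the only requirement stated by the chapter is that the command provides the password as standard output.

#!/bin/sh
# Hypothetical helper for <command-credential command="/path/to/executable.sh"/>:
# write the credential keystore password to standard output and nothing else.
echo "secret1234!"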
Chapter 3. Deploying OpenShift Data Foundation on Red Hat OpenStack Platform in external mode | Chapter 3. Deploying OpenShift Data Foundation on Red Hat OpenStack Platform in external mode Red Hat OpenShift Data Foundation can use an externally hosted Red Hat Ceph Storage (RHCS) cluster as the storage provider on Red Hat OpenStack Platform. See Planning your deployment for more information. For instructions regarding how to install a RHCS cluster, see the installation guide . Follow these steps to deploy OpenShift Data Foundation in external mode: Install the OpenShift Data Foundation Operator . Create the OpenShift Data Foundation Cluster . 3.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.16 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 3.2. Creating an OpenShift Data foundation Cluster for external mode You need to create a new OpenShift Data Foundation cluster after you install OpenShift Data Foundation operator on OpenShift Container Platform deployed on Red Hat OpenStack platform. 
Prerequisites Ensure the OpenShift Container Platform version is 4.12 or above before deploying OpenShift Data Foundation 4.12. OpenShift Data Foundation operator must be installed. For more information, see Installing OpenShift Data Foundation Operator using the Operator Hub . To check the supportability and interoperability of Red Hat Ceph Storage (RHCS) with Red Hat OpenShift Data Foundation in external mode, go to the lab Red Hat OpenShift Data Foundation Supportability and Interoperability Checker . Select Service Type as ODF as Self-Managed Service . Select appropriate Version from the drop down. On the Versions tab, click the Supported RHCS Compatibility tab. If you have updated the Red Hat Ceph Storage cluster from a version lower than 4.1.1 to the latest release and is not a freshly deployed cluster, you must manually set the application type for the CephFS pool on the Red Hat Ceph Storage cluster to enable CephFS PVC creation in external mode. For more details, see Troubleshooting CephFS PVC creation in external mode . Red Hat Ceph Storage must have Ceph Dashboard installed and configured. For more information, see Ceph Dashboard installation and access . Red Hat recommends that the external Red Hat Ceph Storage cluster has the PG Autoscaler enabled. For more information, see The placement group autoscaler section in the Red Hat Ceph Storage documentation. The external Ceph cluster should have an existing RBD pool pre-configured for use. If it does not exist, contact your Red Hat Ceph Storage administrator to create one before you move ahead with OpenShift Data Foundation deployment. Red Hat recommends to use a separate pool for each OpenShift Data Foundation cluster. Procedure Click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click OpenShift Data Foundation Create Instance link of Storage Cluster. Select Mode as External . By default, Internal is selected as deployment mode. Figure 3.1. Connect to external cluster section on Create Storage Cluster form In the Connect to external cluster section, click on the Download Script link to download the python script for extracting Ceph cluster details. For extracting the Red Hat Ceph Storage (RHCS) cluster details, contact the RHCS administrator to run the downloaded python script on a Red Hat Ceph Storage node with admin key . Run the following command on the RHCS node to view the list of available arguments. Important Use python instead of python3 if the Red Hat Ceph Storage 4.x cluster is deployed on Red Hat Enterprise Linux 7.x (RHEL 7.x) cluster. Note You can also run the script from inside a MON container (containerized deployment) or from a MON node (rpm deployment). To retrieve the external cluster details from the RHCS cluster, run the following command For example: In the above example, --rbd-data-pool-name is a mandatory parameter used for providing block storage in OpenShift Data Foundation. --rgw-endpoint is optional. Provide this parameter if object storage is to be provisioned through Ceph Rados Gateway for OpenShift Data Foundation. Provide the endpoint in the following format: <ip_address>:<port> --monitoring-endpoint is optional. It is the IP address of the active ceph-mgr reachable from the OpenShift Container Platform cluster. If not provided, the value is automatically populated. --monitoring-endpoint-port is optional. It is the port associated with the ceph-mgr Prometheus exporter specified by --monitoring-endpoint . 
If not provided, the value is automatically populated. --run-as-user is an optional parameter used for providing a name for the Ceph user which is created by the script. If this parameter is not specified, a default user name client.healthchecker is created. The permissions for the new user are set as: caps: [mgr] allow command config caps: [mon] allow r, allow command quorum_status, allow command version caps: [osd] allow rwx pool= RGW_POOL_PREFIX.rgw.meta , allow r pool= .rgw.root , allow rw pool= RGW_POOL_PREFIX.rgw.control , allow rx pool= RGW_POOL_PREFIX.rgw.log , allow x pool= RGW_POOL_PREFIX.rgw.buckets.index Example of JSON output generated using the python script: Save the JSON output to a file with .json extension Note For OpenShift Data Foundation to work seamlessly, ensure that the parameters (RGW endpoint, CephFS details, RBD pool, and so on) to be uploaded using the JSON file remain unchanged on the RHCS external cluster after the storage cluster creation. Click External cluster metadata Browse to select and upload the JSON file. The content of the JSON file is populated and displayed in the text box. Figure 3.2. Json file content Click Create . The Create button is enabled only after you upload the .json file. Verification steps Verify that the final Status of the installed storage cluster shows as Phase: Ready with a green tick mark. Click Operators Installed Operators Storage Cluster link to view the storage cluster installation status. Alternatively, when you are on the Operator Details tab, you can click on the Storage Cluster tab to view the status. To verify that OpenShift Data Foundation, pods and StorageClass are successfully installed, see Verifying your external mode OpenShift Data Foundation installation . 3.3. Verifying your OpenShift Data Foundation installation for external mode Use this section to verify that OpenShift Data Foundation is deployed correctly. 3.3.1. Verifying the state of the pods Click Workloads Pods from the left pane of the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see Table 3.1, "Pods corresponding to OpenShift Data Foundation components" Verify that the following pods are in running state: Table 3.1. Pods corresponding to OpenShift Data Foundation components Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any worker node) ocs-metrics-exporter-* (1 pod on any worker node) odf-operator-controller-manager-* (1 pod on any worker node) odf-console-* (1 pod on any worker node) csi-addons-controller-manager-* (1 pod on any worker node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any worker node) Multicloud Object Gateway noobaa-operator-* (1 pod on any worker node) noobaa-core-* (1 pod on any worker node) noobaa-db-pg-* (1 pod on any worker node) noobaa-endpoint-* (1 pod on any worker node) CSI cephfs csi-cephfsplugin-* (1 pod on each worker node) csi-cephfsplugin-provisioner-* (2 pods distributed across worker nodes) Note If an MDS is not deployed in the external cluster, the csi-cephfsplugin pods will not be created. rbd csi-rbdplugin-* (1 pod on each worker node) csi-rbdplugin-provisioner-* (2 pods distributed across worker nodes) 3.3.2. 
Verifying that the OpenShift Data Foundation cluster is healthy In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Block and File tab, verify that both Storage Cluster and Data Resiliency have a green tick. In the Details card, verify that the cluster information is displayed as follows: Service Name: OpenShift Data Foundation, Cluster Name: ocs-external-storagecluster, Provider: OpenStack, Mode: External, Version: ocs-operator-4.15.0. For more information on the health of OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation . 3.3.3. Verifying that the Multicloud Object Gateway is healthy In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the Multicloud Object Gateway (MCG) information is displayed. Note The RADOS Object Gateway is only listed in case RADOS Object Gateway endpoint details were included while deploying OpenShift Data Foundation in external mode. For more information on the health of OpenShift Data Foundation cluster using the object dashboard, see Monitoring OpenShift Data Foundation . 3.3.4. Verifying that the storage classes are created and listed Click Storage Storage Classes from the left pane of the OpenShift Web Console. Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation: ocs-external-storagecluster-ceph-rbd ocs-external-storagecluster-ceph-rgw ocs-external-storagecluster-cephfs openshift-storage.noobaa.io Note If MDS is not deployed in the external cluster, ocs-external-storagecluster-cephfs storage class will not be created. If RGW is not deployed in the external cluster, the ocs-external-storagecluster-ceph-rgw storage class will not be created. For more information regarding MDS and RGW, see Red Hat Ceph Storage documentation 3.3.5. Verifying that Ceph cluster is connected Run the following command to verify if the OpenShift Data Foundation cluster is connected to the external Red Hat Ceph Storage cluster. 3.3.6. Verifying that storage cluster is ready Run the following command to verify if the storage cluster is ready and the External option is set to true. 3.4. Uninstalling OpenShift Data Foundation 3.4.1. Uninstalling OpenShift Data Foundation from external storage system Use the steps in this section to uninstall OpenShift Data Foundation. Uninstalling OpenShift Data Foundation does not remove the RBD pool from the external cluster, or uninstall the external Red Hat Ceph Storage cluster. Uninstall Annotations Annotations on the Storage Cluster are used to change the behavior of the uninstall process. To define the uninstall behavior, the following two annotations have been introduced in the storage cluster: uninstall.ocs.openshift.io/cleanup-policy: delete uninstall.ocs.openshift.io/mode: graceful Note The uninstall.ocs.openshift.io/cleanup-policy is not applicable for external mode. The below table provides information on the different values that can be used with these annotations: Table 3.2. 
uninstall.ocs.openshift.io uninstall annotations descriptions Annotation Value Default Behavior cleanup-policy delete Yes Rook cleans up the physical drives and the DataDirHostPath cleanup-policy retain No Rook does not clean up the physical drives and the DataDirHostPath mode graceful Yes Rook and NooBaa pauses the uninstall process until the PVCs and the OBCs are removed by the administrator/user mode forced No Rook and NooBaa proceeds with uninstall even if PVCs/OBCs provisioned using Rook and NooBaa exist respectively You can change the uninstall mode by editing the value of the annotation by using the following commands: Prerequisites Ensure that the OpenShift Data Foundation cluster is in a healthy state. The uninstall process can fail when some of the pods are not terminated successfully due to insufficient resources or nodes. In case the cluster is in an unhealthy state, contact Red Hat Customer Support before uninstalling OpenShift Data Foundation. Ensure that applications are not consuming persistent volume claims (PVCs) or object bucket claims (OBCs) using the storage classes provided by OpenShift Data Foundation. Procedure Delete the volume snapshots that are using OpenShift Data Foundation. List the volume snapshots from all the namespaces From the output of the command, identify and delete the volume snapshots that are using OpenShift Data Foundation. Delete PVCs and OBCs that are using OpenShift Data Foundation. In the default uninstall mode (graceful), the uninstaller waits till all the PVCs and OBCs that use OpenShift Data Foundation are deleted. If you wish to delete the Storage Cluster without deleting the PVCs beforehand, you may set the uninstall mode annotation to "forced" and skip this step. Doing so will result in orphan PVCs and OBCs in the system. Delete OpenShift Container Platform monitoring stack PVCs using OpenShift Data Foundation. See Removing monitoring stack from OpenShift Data Foundation Delete OpenShift Container Platform Registry PVCs using OpenShift Data Foundation. Removing OpenShift Container Platform registry from OpenShift Data Foundation Delete OpenShift Container Platform logging PVCs using OpenShift Data Foundation. Removing the cluster logging operator from OpenShift Data Foundation Delete other PVCs and OBCs provisioned using OpenShift Data Foundation. Given below is a sample script to identify the PVCs and OBCs provisioned using OpenShift Data Foundation. The script ignores the PVCs and OBCs that are used internally by OpenShift Data Foundation. Delete the OBCs. Delete the PVCs. Ensure that you have removed any custom backing stores, bucket classes, and so on that are created in the cluster. Delete the Storage Cluster object and wait for the removal of the associated resources. Delete the namespace and wait until the deletion is complete. You will need to switch to another project if openshift-storage is the active project. For example: The project is deleted if the following command returns a NotFound error. Note While uninstalling OpenShift Data Foundation, if the namespace is not deleted completely and remains in Terminating state, perform the steps in Troubleshooting and deleting remaining resources during Uninstall to identify objects that are blocking the namespace from being terminated. Confirm all PVs provisioned using OpenShift Data Foundation are deleted. If there is any PV left in the Released state, delete it. Remove CustomResourceDefinitions . 
To ensure that OpenShift Data Foundation is uninstalled completely: In the OpenShift Container Platform Web Console, click Storage . Verify that OpenShift Data Foundation no longer appears under Storage. 3.4.2. Removing monitoring stack from OpenShift Data Foundation Use this section to clean up the monitoring stack from OpenShift Data Foundation. The PVCs that are created as a part of configuring the monitoring stack are in the openshift-monitoring namespace. Prerequisites PVCs are configured to use the OpenShift Container Platform monitoring stack. For information, see configuring monitoring stack . Procedure List the pods and PVCs that are currently running in the openshift-monitoring namespace. Edit the monitoring configmap . Remove any config sections that reference the OpenShift Data Foundation storage classes as shown in the following example and save it. Before editing After editing In this example, alertmanagerMain and prometheusK8s monitoring components are using the OpenShift Data Foundation PVCs. List the pods consuming the PVC. In this example, the alertmanagerMain and prometheusK8s pods that were consuming the PVCs are in the Terminating state. You can delete the PVCs once these pods are no longer using OpenShift Data Foundation PVC. Delete relevant PVCs. Make sure you delete all the PVCs that are consuming the storage classes. 3.4.3. Removing OpenShift Container Platform registry from OpenShift Data Foundation Use this section to clean up the OpenShift Container Platform registry from OpenShift Data Foundation. If you want to configure an alternative storage, see image registry The PVCs that are created as a part of configuring OpenShift Container Platform registry are in the openshift-image-registry namespace. Prerequisites The image registry should have been configured to use an OpenShift Data Foundation PVC. Procedure Edit the configs.imageregistry.operator.openshift.io object and remove the content in the storage section. Before editing After editing In this example, the PVC is called registry-cephfs-rwx-pvc , which is now safe to delete. Delete the PVC. 3.4.4. Removing the cluster logging operator from OpenShift Data Foundation Use this section to clean up the cluster logging operator from OpenShift Data Foundation. The Persistent Volume Claims (PVCs) that are created as a part of configuring the cluster logging operator are in the openshift-logging namespace. Prerequisites The cluster logging instance should have been configured to use the OpenShift Data Foundation PVCs. Procedure Remove the ClusterLogging instance in the namespace. The PVCs in the openshift-logging namespace are now safe to delete. Delete the PVCs. <pvc-name> Is the name of the PVC | [
"oc annotate namespace openshift-storage openshift.io/node-selector=",
"python3 ceph-external-cluster-details-exporter.py --help",
"python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name <rbd block pool name> [optional arguments]",
"python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name ceph-rbd --monitoring-endpoint xxx.xxx.xxx.xxx --monitoring-endpoint-port xxxx --rgw-endpoint xxx.xxx.xxx.xxx:xxxx --run-as-user client.ocs",
"[{\"name\": \"rook-ceph-mon-endpoints\", \"kind\": \"ConfigMap\", \"data\": {\"data\": \"xxx.xxx.xxx.xxx:xxxx\", \"maxMonId\": \"0\", \"mapping\": \"{}\"}}, {\"name\": \"rook-ceph-mon\", \"kind\": \"Secret\", \"data\": {\"admin-secret\": \"admin-secret\", \"fsid\": \"<fs-id>\", \"mon-secret\": \"mon-secret\"}}, {\"name\": \"rook-ceph-operator-creds\", \"kind\": \"Secret\", \"data\": {\"userID\": \"client.healthchecker\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-rbd-node\", \"kind\": \"Secret\", \"data\": {\"userID\": \"csi-rbd-node\", \"userKey\": \"<user-key>\"}}, {\"name\": \"ceph-rbd\", \"kind\": \"StorageClass\", \"data\": {\"pool\": \"ceph-rbd\"}}, {\"name\": \"monitoring-endpoint\", \"kind\": \"CephCluster\", \"data\": {\"MonitoringEndpoint\": \"xxx.xxx.xxx.xxx\", \"MonitoringPort\": \"xxxx\"}}, {\"name\": \"rook-csi-rbd-provisioner\", \"kind\": \"Secret\", \"data\": {\"userID\": \"csi-rbd-provisioner\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-cephfs-provisioner\", \"kind\": \"Secret\", \"data\": {\"adminID\": \"csi-cephfs-provisioner\", \"adminKey\": \"<admin-key>\"}}, {\"name\": \"rook-csi-cephfs-node\", \"kind\": \"Secret\", \"data\": {\"adminID\": \"csi-cephfs-node\", \"adminKey\": \"<admin-key>\"}}, {\"name\": \"cephfs\", \"kind\": \"StorageClass\", \"data\": {\"fsName\": \"cephfs\", \"pool\": \"cephfs_data\"}}, {\"name\": \"ceph-rgw\", \"kind\": \"StorageClass\", \"data\": {\"endpoint\": \"xxx.xxx.xxx.xxx:xxxx\", \"poolPrefix\": \"default\"}}]",
"oc get cephcluster -n openshift-storage",
"NAME DATADIRHOSTPATH MONCOUNT AGE PHASE MESSAGE HEALTH ocs-external-storagecluster-cephcluster 31m15s Connected Cluster connected successfully HEALTH_OK",
"oc get storagecluster -n openshift-storage",
"NAME AGE PHASE EXTERNAL CREATED AT VERSION ocs-external-storagecluster 31m15s Ready true 2021-02-29T20:43:04Z 4.15.0",
"oc annotate storagecluster ocs-external-storagecluster -n openshift-storage uninstall.ocs.openshift.io/mode=\"forced\" --overwrite storagecluster.ocs.openshift.io/ocs-external-storagecluster annotated",
"oc get volumesnapshot --all-namespaces",
"oc delete volumesnapshot <VOLUME-SNAPSHOT-NAME> -n <NAMESPACE>",
"#!/bin/bash RBD_PROVISIONER=\"openshift-storage.rbd.csi.ceph.com\" CEPHFS_PROVISIONER=\"openshift-storage.cephfs.csi.ceph.com\" NOOBAA_PROVISIONER=\"openshift-storage.noobaa.io/obc\" RGW_PROVISIONER=\"openshift-storage.ceph.rook.io/bucket\" NOOBAA_DB_PVC=\"noobaa-db\" NOOBAA_BACKINGSTORE_PVC=\"noobaa-default-backing-store-noobaa-pvc\" Find all the OCS StorageClasses OCS_STORAGECLASSES=USD(oc get storageclasses | grep -e \"USDRBD_PROVISIONER\" -e \"USDCEPHFS_PROVISIONER\" -e \"USDNOOBAA_PROVISIONER\" -e \"USDRGW_PROVISIONER\" | awk '{print USD1}') List PVCs in each of the StorageClasses for SC in USDOCS_STORAGECLASSES do echo \"======================================================================\" echo \"USDSC StorageClass PVCs and OBCs\" echo \"======================================================================\" oc get pvc --all-namespaces --no-headers 2>/dev/null | grep USDSC | grep -v -e \"USDNOOBAA_DB_PVC\" -e \"USDNOOBAA_BACKINGSTORE_PVC\" oc get obc --all-namespaces --no-headers 2>/dev/null | grep USDSC echo done",
"oc delete obc <obc name> -n <project name>",
"oc delete pvc <pvc name> -n <project-name>",
"oc delete -n openshift-storage storagesystem --all --wait=true",
"oc project default oc delete project openshift-storage --wait=true --timeout=5m",
"oc get project openshift-storage",
"oc get pv oc delete pv <pv name>",
"oc delete crd backingstores.noobaa.io bucketclasses.noobaa.io cephblockpools.ceph.rook.io cephclusters.ceph.rook.io cephfilesystems.ceph.rook.io cephnfses.ceph.rook.io cephobjectstores.ceph.rook.io cephobjectstoreusers.ceph.rook.io noobaas.noobaa.io ocsinitializations.ocs.openshift.io storageclusters.ocs.openshift.io cephclients.ceph.rook.io cephobjectrealms.ceph.rook.io cephobjectzonegroups.ceph.rook.io cephobjectzones.ceph.rook.io cephrbdmirrors.ceph.rook.io storagesystems.odf.openshift.io --wait=true --timeout=5m",
"oc get pod,pvc -n openshift-monitoring NAME READY STATUS RESTARTS AGE pod/alertmanager-main-0 3/3 Running 0 8d pod/alertmanager-main-1 3/3 Running 0 8d pod/alertmanager-main-2 3/3 Running 0 8d pod/cluster-monitoring- operator-84457656d-pkrxm 1/1 Running 0 8d pod/grafana-79ccf6689f-2ll28 2/2 Running 0 8d pod/kube-state-metrics- 7d86fb966-rvd9w 3/3 Running 0 8d pod/node-exporter-25894 2/2 Running 0 8d pod/node-exporter-4dsd7 2/2 Running 0 8d pod/node-exporter-6p4zc 2/2 Running 0 8d pod/node-exporter-jbjvg 2/2 Running 0 8d pod/node-exporter-jj4t5 2/2 Running 0 6d18h pod/node-exporter-k856s 2/2 Running 0 6d18h pod/node-exporter-rf8gn 2/2 Running 0 8d pod/node-exporter-rmb5m 2/2 Running 0 6d18h pod/node-exporter-zj7kx 2/2 Running 0 8d pod/openshift-state-metrics- 59dbd4f654-4clng 3/3 Running 0 8d pod/prometheus-adapter- 5df5865596-k8dzn 1/1 Running 0 7d23h pod/prometheus-adapter- 5df5865596-n2gj9 1/1 Running 0 7d23h pod/prometheus-k8s-0 6/6 Running 1 8d pod/prometheus-k8s-1 6/6 Running 1 8d pod/prometheus-operator- 55cfb858c9-c4zd9 1/1 Running 0 6d21h pod/telemeter-client- 78fc8fc97d-2rgfp 3/3 Running 0 8d NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-0 Bound pvc-0d519c4f-15a5-11ea-baa0-026d231574aa 40Gi RWO ocs-external-storagecluster-ceph-rbd 8d persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-1 Bound pvc-0d5a9825-15a5-11ea-baa0-026d231574aa 40Gi RWO ocs-external-storagecluster-ceph-rbd 8d persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-2 Bound pvc-0d6413dc-15a5-11ea-baa0-026d231574aa 40Gi RWO ocs-external-storagecluster-ceph-rbd 8d persistentvolumeclaim/my-prometheus-claim-prometheus-k8s-0 Bound pvc-0b7c19b0-15a5-11ea-baa0-026d231574aa 40Gi RWO ocs-external-storagecluster-ceph-rbd 8d persistentvolumeclaim/my-prometheus-claim-prometheus-k8s-1 Bound pvc-0b8aed3f-15a5-11ea-baa0-026d231574aa 40Gi RWO ocs-external-storagecluster-ceph-rbd 8d",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
". . . apiVersion: v1 data: config.yaml: | alertmanagerMain: volumeClaimTemplate: metadata: name: my-alertmanager-claim spec: resources: requests: storage: 40Gi storageClassName: ocs-external-storagecluster-ceph-rbd prometheusK8s: volumeClaimTemplate: metadata: name: my-prometheus-claim spec: resources: requests: storage: 40Gi storageClassName: ocs-external-storagecluster-ceph-rbd kind: ConfigMap metadata: creationTimestamp: \"2019-12-02T07:47:29Z\" name: cluster-monitoring-config namespace: openshift-monitoring resourceVersion: \"22110\" selfLink: /api/v1/namespaces/openshift-monitoring/configmaps/cluster-monitoring-config uid: fd6d988b-14d7-11ea-84ff-066035b9efa8 . . .",
". . . apiVersion: v1 data: config.yaml: | kind: ConfigMap metadata: creationTimestamp: \"2019-11-21T13:07:05Z\" name: cluster-monitoring-config namespace: openshift-monitoring resourceVersion: \"404352\" selfLink: /api/v1/namespaces/openshift-monitoring/configmaps/cluster-monitoring-config uid: d12c796a-0c5f-11ea-9832-063cd735b81c . . .",
"oc get pod,pvc -n openshift-monitoring NAME READY STATUS RESTARTS AGE pod/alertmanager-main-0 3/3 Terminating 0 10h pod/alertmanager-main-1 3/3 Terminating 0 10h pod/alertmanager-main-2 3/3 Terminating 0 10h pod/cluster-monitoring-operator-84cd9df668-zhjfn 1/1 Running 0 18h pod/grafana-5db6fd97f8-pmtbf 2/2 Running 0 10h pod/kube-state-metrics-895899678-z2r9q 3/3 Running 0 10h pod/node-exporter-4njxv 2/2 Running 0 18h pod/node-exporter-b8ckz 2/2 Running 0 11h pod/node-exporter-c2vp5 2/2 Running 0 18h pod/node-exporter-cq65n 2/2 Running 0 18h pod/node-exporter-f5sm7 2/2 Running 0 11h pod/node-exporter-f852c 2/2 Running 0 18h pod/node-exporter-l9zn7 2/2 Running 0 11h pod/node-exporter-ngbs8 2/2 Running 0 18h pod/node-exporter-rv4v9 2/2 Running 0 18h pod/openshift-state-metrics-77d5f699d8-69q5x 3/3 Running 0 10h pod/prometheus-adapter-765465b56-4tbxx 1/1 Running 0 10h pod/prometheus-adapter-765465b56-s2qg2 1/1 Running 0 10h pod/prometheus-k8s-0 6/6 Terminating 1 9m47s pod/prometheus-k8s-1 6/6 Terminating 1 9m47s pod/prometheus-operator-cbfd89f9-ldnwc 1/1 Running 0 43m pod/telemeter-client-7b5ddb4489-2xfpz 3/3 Running 0 10h NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE persistentvolumeclaim/ocs-alertmanager-claim-alertmanager-main-0 Bound pvc-2eb79797-1fed-11ea-93e1-0a88476a6a64 40Gi RWO ocs-external-storagecluster-ceph-rbd 19h persistentvolumeclaim/ocs-alertmanager-claim-alertmanager-main-1 Bound pvc-2ebeee54-1fed-11ea-93e1-0a88476a6a64 40Gi RWO ocs-external-storagecluster-ceph-rbd 19h persistentvolumeclaim/ocs-alertmanager-claim-alertmanager-main-2 Bound pvc-2ec6a9cf-1fed-11ea-93e1-0a88476a6a64 40Gi RWO ocs-external-storagecluster-ceph-rbd 19h persistentvolumeclaim/ocs-prometheus-claim-prometheus-k8s-0 Bound pvc-3162a80c-1fed-11ea-93e1-0a88476a6a64 40Gi RWO ocs-external-storagecluster-ceph-rbd 19h persistentvolumeclaim/ocs-prometheus-claim-prometheus-k8s-1 Bound pvc-316e99e2-1fed-11ea-93e1-0a88476a6a64 40Gi RWO ocs-external-storagecluster-ceph-rbd 19h",
"oc delete -n openshift-monitoring pvc <pvc-name> --wait=true --timeout=5m",
"oc edit configs.imageregistry.operator.openshift.io",
". . . storage: pvc: claim: registry-cephfs-rwx-pvc . . .",
". . . storage: emptyDir: {} . . .",
"oc delete pvc <pvc-name> -n openshift-image-registry --wait=true --timeout=5m",
"oc delete clusterlogging instance -n openshift-logging --wait=true --timeout=5m",
"oc delete pvc <pvc-name> -n openshift-logging --wait=true --timeout=5m"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/deploying_openshift_data_foundation_on_red_hat_openstack_platform_in_external_mode |
Developing and compiling your Red Hat build of Quarkus applications with Apache Maven | Developing and compiling your Red Hat build of Quarkus applications with Apache Maven Red Hat build of Quarkus 3.8 Red Hat Customer Content Services | [
"<!-- Configure the Red Hat build of Quarkus Maven repository --> <profile> <id>red-hat-enterprise-maven-repository</id> <repositories> <repository> <id>red-hat-enterprise-maven-repository</id> <url>https://maven.repository.redhat.com/ga/</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>red-hat-enterprise-maven-repository</id> <url>https://maven.repository.redhat.com/ga/</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile>",
"<activeProfile>red-hat-enterprise-maven-repository</activeProfile>",
"mvn --version",
"mvn com.redhat.quarkus.platform:quarkus-maven-plugin:3.8.6.SP3-redhat-00002:create -DprojectGroupId=<project_group_id> -DprojectArtifactId=<project_artifact_id> -DplatformGroupId=com.redhat.quarkus.platform -DplatformArtifactId=quarkus-bom -DplatformVersion=3.8.6.SP3-redhat-00002 -DpackageName=getting.started",
"mvn com.redhat.quarkus.platform:quarkus-maven-plugin:3.8.6.SP3-redhat-00002:create",
"quarkus create app my-groupId:my-artifactId --package-name=getting.started",
"quarkus create app --help",
"<properties> <compiler-plugin.version>3.11.0</compiler-plugin.version> <quarkus.platform.group-id>com.redhat.quarkus.platform</quarkus.platform.group-id> <quarkus.platform.artifact-id>quarkus-bom</quarkus.platform.artifact-id> <quarkus.platform.version>3.8.6.SP3-redhat-00002</quarkus.platform.version> <surefire-plugin.version>3.1.2</surefire-plugin.version> <skipITs>true</skipITs> </properties>",
"<dependencyManagement> <dependencies> <dependency> <groupId>USD{quarkus.platform.group-id}</groupId> <artifactId>USD{quarkus.platform.artifact-id}</artifactId> <version>USD{quarkus.platform.version}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement>",
"<build> <plugins> <plugin> <groupId>USD{quarkus.platform.group-id}</groupId> <artifactId>quarkus-maven-plugin</artifactId> <version>USD{quarkus.platform.version}</version> <extensions>true</extensions> <executions> <execution> <goals> <goal>build</goal> <goal>generate-code</goal> <goal>generate-code-tests</goal> </goals> </execution> </executions> </plugin> <plugin> <artifactId>maven-compiler-plugin</artifactId> <version>USD{compiler-plugin.version}</version> <configuration> <compilerArgs> <arg>-parameters</arg> </compilerArgs> </configuration> </plugin> <plugin> <artifactId>maven-surefire-plugin</artifactId> <version>USD{surefire-plugin.version}</version> <configuration> <systemPropertyVariables> <java.util.logging.manager>org.jboss.logmanager.LogManager</java.util.logging.manager> <maven.home>USD{maven.home}</maven.home> </systemPropertyVariables> </configuration> </plugin> </plugins> </build>",
"<build> <plugins> <plugin> <artifactId>maven-failsafe-plugin</artifactId> <version>USD{surefire-plugin.version}</version> <executions> <execution> <goals> <goal>integration-test</goal> <goal>verify</goal> </goals> <configuration> <systemPropertyVariables> <native.image.path>USD{project.build.directory}/USD{project.build.finalName}-runner </native.image.path> <java.util.logging.manager>org.jboss.logmanager.LogManager</java.util.logging.manager> <maven.home>USD{maven.home}</maven.home> </systemPropertyVariables> </configuration> </execution> </executions> </plugin> </plugins> </build> <profiles> <profile> <id>native</id> <activation> <property> <name>native</name> </property> </activation> <properties> <skipITs>false</skipITs> <quarkus.package.type>native</quarkus.package.type> </properties> </profile> </profiles>",
"cd <directory_name>",
"./mvnw quarkus:dev",
"quarkus dev",
"<plugin> <groupId>com.redhat.quarkus.platform</groupId> <artifactId>quarkus-maven-plugin</artifactId> <version>USD{quarkus.platform.version}</version> <configuration> <source>USD{maven.compiler.source}</source> <target>USD{maven.compiler.target}</target> <compilerArgs> <arg>-verbose</arg> </compilerArgs> <jvmArgs>-verbose</jvmArgs> </configuration> </plugin>",
"./mvnw quarkus:list-extensions",
"quarkus extension --installable",
"./mvnw quarkus:add-extension -Dextensions=\"<extension>\"",
"./mvnw quarkus:add-extension -Dextensions=\"io.quarkus:quarkus-agroal\"",
"quarkus extension add '<extension>'",
"./mvnw quarkus:add-extension -Dextensions=agroal",
"[SUCCESS] ✅ Extension io.quarkus:quarkus-agroal has been installed",
"quarkus extension add 'agroal'",
"./mvnw quarkus:dev",
"quarkus dev",
"./mvnw quarkus:dev",
"quarkus dev",
"./mvnw quarkus:dev",
"quarkus dev",
"message=Hello %test.message=Test Value",
"mvn test -Dquarkus.test.profile=__<profile-name>__",
"./mvnw quarkus:dependency-tree",
"[INFO] └─ io.quarkus:quarkus-resteasy-deployment:jar:3.8.6.SP3-redhat-00002 (compile) [INFO] ├─ io.quarkus:quarkus-resteasy-server-common-deployment:jar:3.8.6.SP3-redhat-00002 (compile) [INFO] │ ├─ io.quarkus:quarkus-resteasy-common-deployment:jar:3.8.6.SP3-redhat-00002 (compile) [INFO] │ │ ├─ io.quarkus:quarkus-resteasy-common:jar:3.8.6.SP3-redhat-00002 (compile) [INFO] │ │ │ ├─ org.jboss.resteasy:resteasy-core:jar:6.2.4.Final-redhat-00003 (compile) [INFO] │ │ │ │ ├─ jakarta.xml.bind:jakarta.xml.bind-api:jar:4.0.0.redhat-00008 (compile) [INFO] │ │ │ │ ├─ org.jboss.resteasy:resteasy-core-spi:jar:6.2.4.Final-redhat-00003 (compile) [INFO] │ │ │ │ ├─ org.reactivestreams:reactive-streams:jar:1.0.4.redhat-00003 (compile) [INFO] │ │ │ │ └─ com.ibm.async:asyncutil:jar:0.1.0.redhat-00010 (compile)",
"<profiles> <profile> <id>native</id> <activation> <property> <name>native</name> </property> </activation> <properties> <skipITs>false</skipITs> <quarkus.package.type>native</quarkus.package.type> </properties> </profile> </profiles>",
"./mvnw package -Dnative -Dquarkus.native.container-build=true",
"./mvnw package -Dnative -Dquarkus.native.container-build=true -Dquarkus.native.container-runtime=podman",
"quarkus build --native -Dquarkus.native.container-build=true",
"quarkus build --native -Dquarkus.native.container-build=true -Dquarkus.native.container-runtime=podman",
"./target/*-runner",
"./mvnw package -Dnative",
"quarkus build --native",
"./target/*-runner",
"FROM registry.access.redhat.com/ubi8/ubi-minimal:8.9 WORKDIR /work/ RUN chown 1001 /work && chmod \"g+rwX\" /work && chown 1001:root /work COPY --chown=1001:root target/*-runner /work/application EXPOSE 8080 USER 1001 ENTRYPOINT [\"./application\", \"-Dquarkus.http.host=0.0.0.0\"]",
"registry.access.redhat.com/ubi8/ubi:8.9",
"registry.access.redhat.com/ubi8/ubi-minimal:8.9",
"./mvnw package -Dnative -Dquarkus.native.container-build=true",
"./mvnw package -Dnative -Dquarkus.native.container-build=true -Dquarkus.native.container-runtime=podman",
"docker build -f src/main/docker/Dockerfile.native -t quarkus-quickstart/getting-started",
"build -f src/main/docker/Dockerfile.native -t quarkus-quickstart/getting-started",
"docker run -i --rm -p 8080:8080 quarkus-quickstart/getting-started",
"run -i --rm -p 8080:8080 quarkus-quickstart/getting-started",
"<plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-failsafe-plugin</artifactId> <version>USD{surefire-plugin.version}</version> <executions> <execution> <goals> <goal>integration-test</goal> <goal>verify</goal> </goals> <configuration> <systemPropertyVariables> <native.image.path>USD{project.build.directory}/USD{project.build.finalName}-runner</native.image.path> <java.util.logging.manager>org.jboss.logmanager.LogManager</java.util.logging.manager> <maven.home>USD{maven.home}</maven.home> </systemPropertyVariables> </configuration> </execution> </executions> </plugin>",
"package org.acme; import io.quarkus.test.junit.QuarkusIntegrationTest; @QuarkusIntegrationTest 1 public class GreetingResourceIT extends GreetingResourceTest { 2 // Execute the same tests but in native mode. }",
"./mvnw verify -Dnative",
"./mvnw verify -Dnative . GraalVM Native Image: Generating 'getting-started-1.0.0-SNAPSHOT-runner' (executable) ======================================================================================================================== [1/8] Initializing... (6.6s @ 0.22GB) Java version: 17.0.7+7, vendor version: Mandrel-23.1.0.0-Final Graal compiler: optimization level: 2, target machine: x86-64-v3 C compiler: gcc (redhat, x86_64, 13.2.1) Garbage collector: Serial GC (max heap size: 80% of RAM) 2 user-specific feature(s) - io.quarkus.runner.Feature: Auto-generated class by Red Hat build of Quarkus from the existing extensions - io.quarkus.runtime.graal.DisableLoggingFeature: Disables INFO logging during the analysis phase [2/8] Performing analysis... [******] (40.0s @ 2.05GB) 10,318 (86.40%) of 11,942 types reachable 15,064 (57.36%) of 26,260 fields reachable 52,128 (55.75%) of 93,501 methods reachable 3,298 types, 109 fields, and 2,698 methods registered for reflection 63 types, 68 fields, and 55 methods registered for JNI access 4 native libraries: dl, pthread, rt, z [3/8] Building universe... (5.9s @ 1.31GB) [4/8] Parsing methods... [**] (3.7s @ 2.08GB) [5/8] Inlining methods... [***] (2.0s @ 1.92GB) [6/8] Compiling methods... [******] (34.4s @ 3.25GB) [7/8] Layouting methods... [[7/8] Layouting methods... [**] (4.1s @ 1.78GB) [8/8] Creating image... [**] (4.5s @ 2.31GB) 20.93MB (48.43%) for code area: 33,233 compilation units 21.95MB (50.80%) for image heap: 285,664 objects and 8 resources 337.06kB ( 0.76%) for other data 43.20MB in total . [INFO] [INFO] --- maven-failsafe-plugin:3.0.0-M7:integration-test (default) @ getting-started --- [INFO] Using auto detected provider org.apache.maven.surefire.junitplatform.JUnitPlatformProvider [INFO] [INFO] ------------------------------------------------------- [INFO] T E S T S [INFO] ------------------------------------------------------- [INFO] Running org.acme.GreetingResourceIT __ ____ __ _____ ___ __ ____ ______ --/ __ \\/ / / / _ | / _ \\/ //_/ / / / __/ -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\\ --\\___\\_\\____/_/ |_/_/|_/_/|_|\\____/___/ 2024-06-27 14:04:52,681 INFO [io.quarkus] (main) getting-started 1.0.0-SNAPSHOT native (powered by Quarkus 3.8.6.SP3-redhat-00002) started in 0.038s. Listening on: http://0.0.0.0:8081 2024-06-27 14:04:52,682 INFO [io.quarkus] (main) Profile prod activated. 2024-06-27 14:04:52,682 INFO [io.quarkus] (main) Installed features: [cdi, resteasy-reactive, smallrye-context-propagation, vertx] [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.696 s - in org.acme.GreetingResourceIT [INFO] [INFO] Results: [INFO] [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0 [INFO] [INFO] [INFO] --- maven-failsafe-plugin:3.0.0-M7:verify (default) @ getting-started ---",
"./mvnw verify -Dnative -Dquarkus.test.wait-time= <duration>",
"./mvnw quarkus:dev",
"quarkus dev",
"./mvnw quarkus:dev -Dsuspend",
"./mvnw quarkus:dev -Ddebug=false",
"./mvnw quarkus:dev -Dsuspend",
"quarkus dev --suspend",
"./mvnw quarkus:dev -DdebugHost=<host-ip-address>",
"quarkus dev --debug-host=<host-ip-address>",
"0.0.0.0"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.8/html-single/developing_and_compiling_your_red_hat_build_of_quarkus_applications_with_apache_maven/index |
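The @QuarkusIntegrationTest class shown in the commands above (GreetingResourceIT) reuses the assertions of a JVM-mode test class named GreetingResourceTest, which is not listed in this guide. The following is only a sketch of what such a base class typically looks like in a generated getting-started project; it assumes the standard quarkus-junit5 and rest-assured test dependencies, and the /hello endpoint path and expected response text are illustrative placeholders rather than values taken from this document:

package org.acme;

import io.quarkus.test.junit.QuarkusTest;
import org.junit.jupiter.api.Test;

import static io.restassured.RestAssured.given;
import static org.hamcrest.CoreMatchers.is;

@QuarkusTest // boots the application once in JVM mode for the tests in this class
public class GreetingResourceTest {

    @Test
    public void testHelloEndpoint() {
        // Call the REST endpoint and verify the HTTP status code and response body.
        given()
          .when().get("/hello")
          .then()
             .statusCode(200)
             .body(is("Hello from RESTEasy Reactive")); // assumed greeting text
    }
}

Because GreetingResourceIT extends this class and carries the @QuarkusIntegrationTest annotation, running ./mvnw verify -Dnative executes the same assertions a second time against the packaged native executable.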
6.4. RHEA-2015:1418 - new packages: python-requests and dependencies | 6.4. RHEA-2015:1418 - new packages: python-requests and dependencies A new python-requests package and its dependencies, python-chardet, python-urllib3, python-six, python-backports, and python-backports-ssl_match_hostname, are now available for Red Hat Enterprise Linux 6. The python-requests package contains a library designed to make HTTP requests easy for developers. This enhancement update adds the python-requests package and its dependencies to Red Hat Enterprise Linux 6. The following packages are now available from the base channels in Red Hat Network: python-requests, python-chardet, python-urllib3, python-six, python-backports, and python-backports-ssl_match_hostname. (BZ# 1176248 , BZ# 1176251 , BZ# 1176257 , BZ# 1176258 , BZ# 1183141 , BZ# 1183146 ) All users who require python-requests, python-chardet, python-urllib3, python-six, python-backports, and python-backports-ssl_match_hostname are advised to install these new packages. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-rhea-2015-1418 |
Chapter 2. ClusterRoleBinding [rbac.authorization.k8s.io/v1] | Chapter 2. ClusterRoleBinding [rbac.authorization.k8s.io/v1] Description ClusterRoleBinding references a ClusterRole, but does not contain it. It can reference a ClusterRole in the global namespace, and adds "who" information via Subject. Type object Required roleRef 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. roleRef object RoleRef contains information that points to the role being used subjects array Subjects holds references to the objects the role applies to. subjects[] object Subject contains a reference to the object or user identities a role binding applies to. This can either hold a direct API object reference, or a value for non-objects such as user and group names. 2.1.1. .roleRef Description RoleRef contains information that points to the role being used Type object Required apiGroup kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 2.1.2. .subjects Description Subjects holds references to the objects the role applies to. Type array 2.1.3. .subjects[] Description Subject contains a reference to the object or user identities a role binding applies to. This can either hold a direct API object reference, or a value for non-objects such as user and group names. Type object Required kind name Property Type Description apiGroup string APIGroup holds the API group of the referenced subject. Defaults to "" for ServiceAccount subjects. Defaults to "rbac.authorization.k8s.io" for User and Group subjects. kind string Kind of object being referenced. Values defined by this API group are "User", "Group", and "ServiceAccount". If the Authorizer does not recognize the kind value, the Authorizer should report an error. name string Name of the object being referenced. namespace string Namespace of the referenced object. If the object kind is non-namespaced, such as "User" or "Group", and this value is not empty, the Authorizer should report an error. 2.2. API endpoints The following API endpoints are available: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings DELETE : delete collection of ClusterRoleBinding GET : list or watch objects of kind ClusterRoleBinding POST : create a ClusterRoleBinding /apis/rbac.authorization.k8s.io/v1/watch/clusterrolebindings GET : watch individual changes to a list of ClusterRoleBinding. deprecated: use the 'watch' parameter with a list operation instead. 
/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/{name} DELETE : delete a ClusterRoleBinding GET : read the specified ClusterRoleBinding PATCH : partially update the specified ClusterRoleBinding PUT : replace the specified ClusterRoleBinding /apis/rbac.authorization.k8s.io/v1/watch/clusterrolebindings/{name} GET : watch changes to an object of kind ClusterRoleBinding. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 2.2.1. /apis/rbac.authorization.k8s.io/v1/clusterrolebindings Table 2.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of ClusterRoleBinding Table 2.2. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. 
If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 2.3. Body parameters Parameter Type Description body DeleteOptions schema Table 2.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind ClusterRoleBinding Table 2.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. 
If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 2.6. HTTP responses HTTP code Reponse body 200 - OK ClusterRoleBindingList schema 401 - Unauthorized Empty HTTP method POST Description create a ClusterRoleBinding Table 2.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.8. Body parameters Parameter Type Description body ClusterRoleBinding schema Table 2.9. HTTP responses HTTP code Reponse body 200 - OK ClusterRoleBinding schema 201 - Created ClusterRoleBinding schema 202 - Accepted ClusterRoleBinding schema 401 - Unauthorized Empty 2.2.2. /apis/rbac.authorization.k8s.io/v1/watch/clusterrolebindings Table 2.10. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. 
fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of ClusterRoleBinding. deprecated: use the 'watch' parameter with a list operation instead. Table 2.11. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 2.2.3. /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/{name} Table 2.12. Global path parameters Parameter Type Description name string name of the ClusterRoleBinding Table 2.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a ClusterRoleBinding Table 2.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. 
The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 2.15. Body parameters Parameter Type Description body DeleteOptions schema Table 2.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ClusterRoleBinding Table 2.17. HTTP responses HTTP code Reponse body 200 - OK ClusterRoleBinding schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ClusterRoleBinding Table 2.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. 
Force flag must be unset for non-apply patch requests. Table 2.19. Body parameters Parameter Type Description body Patch schema Table 2.20. HTTP responses HTTP code Reponse body 200 - OK ClusterRoleBinding schema 201 - Created ClusterRoleBinding schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ClusterRoleBinding Table 2.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.22. Body parameters Parameter Type Description body ClusterRoleBinding schema Table 2.23. HTTP responses HTTP code Reponse body 200 - OK ClusterRoleBinding schema 201 - Created ClusterRoleBinding schema 401 - Unauthorized Empty 2.2.4. /apis/rbac.authorization.k8s.io/v1/watch/clusterrolebindings/{name} Table 2.24. Global path parameters Parameter Type Description name string name of the ClusterRoleBinding Table 2.25. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. 
If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind ClusterRoleBinding. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 2.26. 
HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/rbac_apis/clusterrolebinding-rbac-authorization-k8s-io-v1
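The create endpoint documented above can be exercised with any HTTP client. The following minimal Java sketch posts a ClusterRoleBinding that grants the built-in view ClusterRole to a service account, using the JDK's java.net.http client (Java 17 or later for the text block); the API server address, bearer token, and all resource names are placeholders, and trust of the cluster's TLS certificate is assumed to be configured separately:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CreateClusterRoleBinding {
    public static void main(String[] args) throws Exception {
        // Request body fields follow the roleRef and subjects schema described above.
        String body = """
            {
              "apiVersion": "rbac.authorization.k8s.io/v1",
              "kind": "ClusterRoleBinding",
              "metadata": { "name": "example-view-binding" },
              "roleRef": {
                "apiGroup": "rbac.authorization.k8s.io",
                "kind": "ClusterRole",
                "name": "view"
              },
              "subjects": [
                { "kind": "ServiceAccount", "name": "example-sa", "namespace": "example-ns" }
              ]
            }
            """;

        String apiServer = System.getenv("K8S_API");   // for example https://api.cluster.example.com:6443
        String token = System.getenv("K8S_TOKEN");     // a bearer token authorized to create RBAC objects

        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create(apiServer + "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings"))
            .header("Authorization", "Bearer " + token)
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(body))
            .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());

        // 200, 201, or 202 indicates success for the create call, as listed in Table 2.9.
        System.out.println(response.statusCode() + " " + response.body());
    }
}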
Chapter 129. Spring WebService | Chapter 129. Spring WebService Since Camel 2.6 Both producer and consumer are supported The Spring WS component allows you to integrate with Spring Web Services . It offers both, client -side support for accessing web services, and server -side support for creating your own contract-first web services. 129.1. Dependencies When using spring-ws with Red Hat build of Camel Spring Boot use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-spring-ws</artifactId> <!-- use the same version as your Camel core version --> </dependency> Use the BOM to get the version. <dependencyManagement> <dependencies> <dependency> <groupId>com.redhat.camel.springboot.platform</groupId> <artifactId>camel-spring-boot-bom</artifactId> <version>USD{camel-spring-boot-version}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> 129.2. URI format The URI scheme for this component is as follows To expose a web service mapping-type needs to be set to any of the following: Mapping type Description rootqname Offers the option to map web service requests based on the qualified name of the root element contained in the message. soapaction Used to map web service requests based on the SOAP action specified in the header of the message. uri In order to map web service requests that target a specific URI. xpathresult Used to map web service requests based on the evaluation of an XPath expression against the incoming message. The result of the evaluation should match the XPath result specified in the endpoint URI. beanname Allows you to reference an org.apache.camel.component.spring.ws.bean.CamelEndpointDispatcher object in order to integrate with existing (legacy) endpoint mappings like PayloadRootQNameEndpointMapping , SoapActionEndpointMapping , etc As a consumer the address should contain a value relevant to the specified mapping-type (e.g. a SOAP action, XPath expression). As a producer the address should be set to the URI of the web service your calling upon. 129.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 129.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 129.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allows you to externalize the configuration from your code, giving you more flexible and reusable code. 129.4. 
Component Options The Spring WebService component supports 4 options, which are listed below. Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean useGlobalSslContextParameters (security) Enable usage of global SSL context parameters. false boolean 129.5. Endpoint Options The Spring WebService endpoint is configured using URI syntax: Following are the path and query parameters: 129.5.1. Path Parameters (4 parameters) Name Description Default Type type (consumer) Endpoint mapping type if endpoint mapping is used. rootqname - Offers the option to map web service requests based on the qualified name of the root element contained in the message. soapaction - Used to map web service requests based on the SOAP action specified in the header of the message. uri - In order to map web service requests that target a specific URI. xpathresult - Used to map web service requests based on the evaluation of an XPath expression against the incoming message. The result of the evaluation should match the XPath result specified in the endpoint URI. beanname - Allows you to reference an org.apache.camel.component.spring.ws.bean.CamelEndpointDispatcher object in order to integrate with existing (legacy) endpoint mappings like PayloadRootQNameEndpointMapping, SoapActionEndpointMapping, etc. Enum values: ROOT_QNAME ACTION TO SOAP_ACTION XPATHRESULT URI URI_PATH BEANNAME EndpointMappingType lookupKey (consumer) Endpoint mapping key if endpoint mapping is used. String webServiceEndpointUri (producer) The default Web Service endpoint uri to use for the producer. String expression (consumer) The XPath expression to use when option type=xpathresult. Then this option is required to be configured. String 129.5.2. Query Parameters (21 parameters) Name Description Default Type messageFilter (common) Option to provide a custom MessageFilter. For example when you want to process your headers or attachments by your own. MessageFilter messageIdStrategy (common) Option to provide a custom MessageIdStrategy to control generation of WS-Addressing unique message ids. 
MessageIdStrategy endpointDispatcher (consumer) Spring org.springframework.ws.server.endpoint.MessageEndpoint for dispatching messages received by Spring-WS to a Camel endpoint, to integrate with existing (legacy) endpoint mappings like PayloadRootQNameEndpointMapping, SoapActionEndpointMapping, etc. CamelEndpointDispatcher endpointMapping (consumer) Reference to an instance of org.apache.camel.component.spring.ws.bean.CamelEndpointMapping in the Registry/ApplicationContext. Only one bean is required in the registry to serve all Camel/Spring-WS endpoints. This bean is auto-discovered by the MessageDispatcher and used to map requests to Camel endpoints based on characteristics specified on the endpoint (like root QName, SOAP action, etc). CamelSpringWSEndpointMapping bridgeErrorHandler (consumer (advanced)) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern allowResponseAttachmentOverride (producer) Option to override soap response attachments in in/out exchange with attachments from the actual service layer. If the invoked service appends or rewrites the soap attachments this option when set to true, allows the modified soap attachments to be overwritten in in/out message attachments. false boolean allowResponseHeaderOverride (producer) Option to override soap response header in in/out exchange with header info from the actual service layer. If the invoked service appends or rewrites the soap header this option when set to true, allows the modified soap header to be overwritten in in/out message headers. false boolean faultAction (producer) Signifies the value for the faultAction response WS-Addressing Fault Action header that is provided by the method. See org.springframework.ws.soap.addressing.server.annotation.Action annotation for more details. URI faultTo (producer) Signifies the value for the faultAction response WS-Addressing FaultTo header that is provided by the method. See org.springframework.ws.soap.addressing.server.annotation.Action annotation for more details. URI messageFactory (producer) Option to provide a custom WebServiceMessageFactory. For example when you want Apache Axiom to handle web service messages instead of SAAJ. WebServiceMessageFactory messageSender (producer) Option to provide a custom WebServiceMessageSender. For example to perform authentication or use alternative transports. WebServiceMessageSender outputAction (producer) Signifies the value for the response WS-Addressing Action header that is provided by the method. See org.springframework.ws.soap.addressing.server.annotation.Action annotation for more details. URI replyTo (producer) Signifies the value for the replyTo response WS-Addressing ReplyTo header that is provided by the method. 
See org.springframework.ws.soap.addressing.server.annotation.Action annotation for more details. URI soapAction (producer) SOAP action to include inside a SOAP request when accessing remote web services. String timeout (producer) Sets the socket read timeout (in milliseconds) while invoking a webservice using the producer, see URLConnection.setReadTimeout() and CommonsHttpMessageSender.setReadTimeout(). This option works when using the built-in message sender implementations: CommonsHttpMessageSender and HttpUrlConnectionMessageSender. One of these implementations will be used by default for HTTP based services unless you customize the Spring WS configuration options supplied to the component. If you are using a non-standard sender, it is assumed that you will handle your own timeout configuration. The built-in message sender HttpComponentsMessageSender is considered instead of CommonsHttpMessageSender which has been deprecated, see HttpComponentsMessageSender.setReadTimeout(). int webServiceTemplate (producer) Option to provide a custom WebServiceTemplate. This allows for full control over client-side web services handling; like adding a custom interceptor or specifying a fault resolver, message sender or message factory. WebServiceTemplate wsAddressingAction (producer) WS-Addressing 1.0 action header to include when accessing web services. The To header is set to the address of the web service as specified in the endpoint URI (default Spring-WS behavior). URI lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean sslContextParameters (security) To configure security using SSLContextParameters. SSLContextParameters 129.6. Message Headers The Spring WebService component supports 7 message headers that are listed below: Name Description Default Type CamelSpringWebserviceEndpointUri (producer) Constant: SPRING_WS_ENDPOINT_URI The endpoint URI. String CamelSpringWebserviceSoapAction (producer) Constant: SPRING_WS_SOAP_ACTION SOAP action to include inside a SOAP request when accessing remote web services. String CamelSpringWebserviceSoapHeader (producer) Constant: SPRING_WS_SOAP_HEADER The soap header source. Source CamelSpringWebserviceAddressingAction (producer) Constant: SPRING_WS_ADDRESSING_ACTION WS-Addressing 1.0 action header to include when accessing web services. The To header is set to the address of the web service as specified in the endpoint URI (default Spring-WS behavior). URI CamelSpringWebserviceAddressingFaultTo (producer) Constant: SPRING_WS_ADDRESSING_PRODUCER_FAULT_TO Signifies the value for the faultAction response WS-Addressing FaultTo header that is provided by the method. See org.springframework.ws.soap.addressing.server.annotation.Action annotation for more details. URI CamelSpringWebserviceAddressingReplyTo (producer) Constant: SPRING_WS_ADDRESSING_PRODUCER_REPLY_TO Signifies the value for the replyTo response WS-Addressing ReplyTo header that is provided by the method. 
See org.springframework.ws.soap.addressing.server.annotation.Action annotation for more details. URI breadcrumbId (consumer) Constant: BREADCRUMB_ID The breadcrumb id. String 129.7. Accessing web services To call a web service at http://foo.com/bar simply define a route: from("direct:example").to("spring-ws:http://foo.com/bar") and send a message: template.requestBody("direct:example", "<foobar xmlns=\"http://foo.com\"><msg>test message</msg></foobar>"); If you are calling a SOAP service, you must not include SOAP tags. Spring-WS performs the XML-to-SOAP marshaling. 129.8. Sending SOAP and WS-Addressing action headers When a remote web service requires a SOAP action or use of the WS-Addressing standard, you define your route as: from("direct:example") .to("spring-ws:http://foo.com/bar?soapAction=http://foo.com&wsAddressingAction=http://bar.com") You can also override the endpoint options with header values. template.requestBodyAndHeader("direct:example", "<foobar xmlns=\"http://foo.com\"><msg>test message</msg></foobar>", SpringWebserviceConstants.SPRING_WS_SOAP_ACTION, "http://baz.com"); 129.9. Using SOAP headers You can provide the SOAP header as a Camel Message header when sending a message to a spring-ws endpoint. For example, given the following SOAP header in a String: String body = ... String soapHeader = "<h:Header xmlns:h=\"http://www.webserviceX.NET/\"><h:MessageID>1234567890</h:MessageID><h:Nested><h:NestedID>1111</h:NestedID></h:Nested></h:Header>"; We can set the body and header on the Camel Message as follows: exchange.getIn().setBody(body); exchange.getIn().setHeader(SpringWebserviceConstants.SPRING_WS_SOAP_HEADER, soapHeader); And then send the Exchange to a spring-ws endpoint to call the Web Service. Similarly, the spring-ws consumer also enriches the Camel Message with the SOAP header. For an example see this unit test . 129.10. The header and attachment propagation The Camel Spring WS component supports propagation of headers and attachments into the Spring-WS WebServiceMessage response. The endpoint uses a "hook" with the MessageFilter (default implementation is provided by BasicMessageFilter) to propagate the exchange headers and attachments into the WebServiceMessage response. exchange.getOut().getHeaders().put("myCustom","myHeaderValue") exchange.getIn().addAttachment("myAttachment", new DataHandler(...)) If an exchange header in the pipeline contains plain text, it is written to the SOAP header as a Qname(key)=value attribute. To control the qualified name, create a QName object directly and use it as the header key. 129.11. How to transform the soap header using a stylesheet The header transformation filter (HeaderTransformationMessageFilter.java) can be used to transform the soap header for a soap request. If you want to use the header transformation filter, see the following example: <bean id="headerTransformationFilter" class="org.apache.camel.component.spring.ws.filter.impl.HeaderTransformationMessageFilter"> <constructor-arg index="0" value="org/apache/camel/component/spring/ws/soap-header-transform.xslt"/> </bean> Use the bean defined above in the Camel endpoint: <route> <from uri="direct:stockQuoteWebserviceHeaderTransformation"/> <to uri="spring-ws:http://localhost?webServiceTemplate=#webServiceTemplate&soapAction=http://www.stockquotes.edu/GetQuote&messageFilter=#headerTransformationFilter"/> </route> 129.12. How to use MTOM attachments The BasicMessageFilter provides all required information for Apache Axiom in order to produce an MTOM message.
If you want to use Apache Camel Spring WS within Apache Axiom, here is an example: Define the messageFactory as shown below, and Spring-WS populates your SOAP message with optimized attachments through an MTOM strategy. <bean id="axiomMessageFactory" class="org.springframework.ws.soap.axiom.AxiomSoapMessageFactory"> <property name="payloadCaching" value="false" /> <property name="attachmentCaching" value="true" /> <property name="attachmentCacheThreshold" value="1024" /> </bean> Add the following dependencies to your pom.xml: <dependency> <groupId>org.apache.ws.commons.axiom</groupId> <artifactId>axiom-api</artifactId> <version>1.2.13</version> </dependency> <dependency> <groupId>org.apache.ws.commons.axiom</groupId> <artifactId>axiom-impl</artifactId> <version>1.2.13</version> <scope>runtime</scope> </dependency> Add your attachment into the pipeline, for example using a Processor implementation. private class Attachement implements Processor { public void process(Exchange exchange) throws Exception { exchange.getOut().copyFrom(exchange.getIn()); File file = new File("testAttachment.txt"); exchange.getOut().addAttachment("test", new DataHandler(new FileDataSource(file))); } } Define the endpoint (producer) as usual, for example: from("direct:send") .process(new Attachement()) .to("spring-ws:http://localhost:8089/mySoapService?soapAction=mySoap&messageFactory=axiomMessageFactory"); Your producer now generates MTOM messages with optimized attachments. 129.13. The custom header and attachment filtering If you need to provide your custom processing of either headers or attachments, extend the existing BasicMessageFilter and override the appropriate methods, or write a new implementation of the MessageFilter interface. To use your custom filter, add either a global or a local message filter into your Spring context: a) the global custom filter that provides the global configuration for all Spring-WS endpoints <bean id="messageFilter" class="your.domain.myMessageFiler" scope="singleton" /> or b) the local messageFilter directly on the endpoint as follows: to("spring-ws:http://yourdomain.com?messageFilter=#myEndpointSpecificMessageFilter"); For more information, see CAMEL-5724 . If you want to create your own MessageFilter, consider overriding the following methods in the default implementation of MessageFilter in class BasicMessageFilter: protected void doProcessSoapHeader(Message inOrOut, SoapMessage soapMessage) {your code /*no need to call super*/ } protected void doProcessSoapAttachements(Message inOrOut, SoapMessage response) { your code /*no need to call super*/ } 129.14.
Using a custom MessageSender and MessageFactory A custom message sender or factory in the registry can be referenced like this: from("direct:example") .to("spring-ws:http://foo.com/bar?messageFactory=#messageFactory&messageSender=#messageSender") Spring configuration: <!-- authenticate using HTTP Basic Authentication --> <bean id="messageSender" class="org.springframework.ws.transport.http.HttpComponentsMessageSender"> <property name="credentials"> <bean class="org.apache.commons.httpclient.UsernamePasswordCredentials"> <constructor-arg index="0" value="admin"/> <constructor-arg index="1" value="secret"/> </bean> </property> </bean> <!-- force use of Sun SAAJ implementation, http://static.springsource.org/spring-ws/sites/1.5/faq.html#saaj-jboss --> <bean id="messageFactory" class="org.springframework.ws.soap.saaj.SaajSoapMessageFactory"> <property name="messageFactory"> <bean class="com.sun.xml.messaging.saaj.soap.ver1_1.SOAPMessageFactory1_1Impl"/> </property> </bean> 129.15. Exposing web services To expose a web service using this component, you first must set up a MessageDispatcher to look for endpoint mappings in a Spring XML file. If you want to run inside a servlet container, you must use a MessageDispatcherServlet configured in web.xml . By default the MessageDispatcherServlet will look for a Spring XML named /WEB-INF/spring-ws-servlet.xml . To use Camel with Spring-WS the only mandatory bean in that XML file is CamelEndpointMapping . This bean allows the MessageDispatcher to dispatch web service requests to your routes. web.xml <web-app> <servlet> <servlet-name>spring-ws</servlet-name> <servlet-class>org.springframework.ws.transport.http.MessageDispatcherServlet</servlet-class> <load-on-startup>1</load-on-startup> </servlet> <servlet-mapping> <servlet-name>spring-ws</servlet-name> <url-pattern>/*</url-pattern> </servlet-mapping> </web-app> spring-ws-servlet.xml <bean id="endpointMapping" class="org.apache.camel.component.spring.ws.bean.CamelEndpointMapping" /> <bean id="wsdl" class="org.springframework.ws.wsdl.wsdl11.DefaultWsdl11Definition"> <property name="schema"> <bean class="org.springframework.xml.xsd.SimpleXsdSchema"> <property name="xsd" value="/WEB-INF/foobar.xsd"/> </bean> </property> <property name="portTypeName" value="FooBar"/> <property name="locationUri" value="/"/> <property name="targetNamespace" value="http://example.com/"/> </bean> More information on setting up Spring-WS can be found in Writing Contract-First Web Services . Basically paragraph 3.6 "Implementing the Endpoint" is handled by this component (specifically paragraph 3.6.2 "Routing the Message to the Endpoint" is where CamelEndpointMapping comes in). See the Spring Web Services Example included in the Camel distribution. 129.16. Endpoint mapping in routes With the XML configuration in place, you can now use Camel's DSL to define what web service requests are handled by your endpoint: The following route receives all web service requests that have a root element named "GetFoo" within the http://example.com/ namespace. from("spring-ws:rootqname:{http://example.com/}GetFoo?endpointMapping=#endpointMapping") .convertBodyTo(String.class).to("mock:example") The following route receives web service requests containing the http://example.com/GetFoo SOAP action. from("spring-ws:soapaction:http://example.com/GetFoo?endpointMapping=#endpointMapping") .convertBodyTo(String.class).to("mock:example") The following route receives all requests sent to http://example.com/foobar .
from("spring-ws:uri:http://example.com/foobar?endpointMapping=#endpointMapping") .convertBodyTo(String.class).to(mock:example) The route below receives requests that contain the element <foobar>abc</foobar> anywhere inside the message (and the default namespace). from("spring-ws:xpathresult:abc?expression=//foobar&endpointMapping=#endpointMapping") .convertBodyTo(String.class).to(mock:example) 129.16.1. Alternative configuration, using existing endpoint mappings For every endpoint with mapping-type beanname one bean of type CamelEndpointDispatcher with a corresponding name is required in the Registry/ApplicationContext. This bean acts as a bridge between the Camel endpoint and an existing endpoint mapping like PayloadRootQNameEndpointMapping . The use of the beanname mapping-type is primarily meant for (legacy) situations where you are already using Spring-WS and have endpoint mappings defined in a Spring XML file. The beanname mapping-type allows you to wire your Camel route into an existing endpoint mapping. When you are starting from the beginning, you must define your endpoint mappings as Camel URI's (as illustrated above with endpointMapping ) since it requires less configuration and is more expressive. You can also use vanilla Spring-WS with the help of annotations. An example of a route using beanname : <camelContext xmlns="http://camel.apache.org/schema/spring"> <route> <from uri="spring-ws:beanname:QuoteEndpointDispatcher" /> <to uri="mock:example" /> </route> </camelContext> <bean id="legacyEndpointMapping" class="org.springframework.ws.server.endpoint.mapping.PayloadRootQNameEndpointMapping"> <property name="mappings"> <props> <prop key="{http://example.com/}GetFuture">FutureEndpointDispatcher</prop> <prop key="{http://example.com/}GetQuote">QuoteEndpointDispatcher</prop> </props> </property> </bean> <bean id="QuoteEndpointDispatcher" class="org.apache.camel.component.spring.ws.bean.CamelEndpointDispatcher" /> <bean id="FutureEndpointDispatcher" class="org.apache.camel.component.spring.ws.bean.CamelEndpointDispatcher" /> 129.17. POJO (un)marshalling Camel's pluggable data formats offer support for pojo/xml marshalling using libraries such as JAXB, XStream, JibX, Castor and XMLBeans. You can use these data formats in your route to sent and receive pojo's to and from web services. When accessing web services you can marshal the request and unmarshal the response message: JaxbDataFormat jaxb = new JaxbDataFormat(false); jaxb.setContextPath("com.example.model"); from("direct:example").marshal(jaxb).to("spring-ws:http://foo.com/bar").unmarshal(jaxb); Similarly, when providing web services, you can unmarshal XML requests to POJOs and marshal the response message back to XML: from("spring-ws:rootqname:{http://example.com/}GetFoo?endpointMapping=#endpointMapping").unmarshal(jaxb) .to("mock:example").marshal(jaxb); 129.18. Spring Boot Auto-Configuration The component supports 5 options that are listed below. Name Description Default Type camel.component.spring-ws.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. 
true Boolean camel.component.spring-ws.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.spring-ws.enabled Whether to enable auto configuration of the spring-ws component. This is enabled by default. Boolean camel.component.spring-ws.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages using Camel's routing error handlers. When the first message is processed then creating and starting the producer takes time and prolong the total processing time of the processing. false Boolean camel.component.spring-ws.use-global-ssl-context-parameters Enable usage of global SSL context parameters. false Boolean | [
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-spring-ws</artifactId> <!-- use the same version as your Camel core version --> </dependency>",
"<dependencyManagement> <dependencies> <dependency> <groupId>com.redhat.camel.springboot.platform</groupId> <artifactId>camel-spring-boot-bom</artifactId> <version>USD{camel-spring-boot-version}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement>",
"spring-ws:[mapping-type:]address[?options]",
"spring-ws:type:lookupKey:webServiceEndpointUri",
"from(\"direct:example\").to(\"spring-ws:http://foo.com/bar\")",
"template.requestBody(\"direct:example\", \"<foobar xmlns=\\\"http://foo.com\\\"><msg>test message</msg></foobar>\");",
"from(\"direct:example\") .to(\"spring-ws:http://foo.com/bar?soapAction=http://foo.com&wsAddressingAction=http://bar.com\")",
"template.requestBodyAndHeader(\"direct:example\", \"<foobar xmlns=\\\"http://foo.com\\\"><msg>test message</msg></foobar>\", SpringWebserviceConstants.SPRING_WS_SOAP_ACTION, \"http://baz.com\");",
"String body = String soapHeader = \"<h:Header xmlns:h=\\\"http://www.webserviceX.NET/\\\"><h:MessageID>1234567890</h:MessageID><h:Nested><h:NestedID>1111</h:NestedID></h:Nested></h:Header>\";",
"exchange.getIn().setBody(body); exchange.getIn().setHeader(SpringWebserviceConstants.SPRING_WS_SOAP_HEADER, soapHeader);",
"exchange.getOut().getHeaders().put(\"myCustom\",\"myHeaderValue\") exchange.getIn().addAttachment(\"myAttachment\", new DataHandler(...))",
"<bean id=\"headerTransformationFilter\" class=\"org.apache.camel.component.spring.ws.filter.impl.HeaderTransformationMessageFilter\"> <constructor-arg index=\"0\" value=\"org/apache/camel/component/spring/ws/soap-header-transform.xslt\"/> </bean>",
"<route> <from uri=\"direct:stockQuoteWebserviceHeaderTransformation\"/> <to uri=\"spring-ws:http://localhost?webServiceTemplate=#webServiceTemplate&soapAction=http://www.stockquotes.edu/GetQuote&messageFilter=#headerTransformationFilter\"/> </route>",
"<bean id=\"axiomMessageFactory\" class=\"org.springframework.ws.soap.axiom.AxiomSoapMessageFactory\"> <property name=\"payloadCaching\" value=\"false\" /> <property name=\"attachmentCaching\" value=\"true\" /> <property name=\"attachmentCacheThreshold\" value=\"1024\" /> </bean>",
"<dependency> <groupId>org.apache.ws.commons.axiom</groupId> <artifactId>axiom-api</artifactId> <version>1.2.13</version> </dependency> <dependency> <groupId>org.apache.ws.commons.axiom</groupId> <artifactId>axiom-impl</artifactId> <version>1.2.13</version> <scope>runtime</scope> </dependency>",
"private class Attachement implements Processor { public void process(Exchange exchange) throws Exception { exchange.getOut().copyFrom(exchange.getIn()); File file = new File(\"testAttachment.txt\"); exchange.getOut().addAttachment(\"test\", new DataHandler(new FileDataSource(file))); } }",
"from(\"direct:send\") .process(new Attachement()) .to(\"spring-ws:http://localhost:8089/mySoapService?soapAction=mySoap&messageFactory=axiomMessageFactory\");",
"<bean id=\"messageFilter\" class=\"your.domain.myMessageFiler\" scope=\"singleton\" />",
"to(\"spring-ws:http://yourdomain.com?messageFilter=#myEndpointSpecificMessageFilter\");",
"protected void doProcessSoapHeader(Message inOrOut, SoapMessage soapMessage) {your code /*no need to call super*/ } protected void doProcessSoapAttachements(Message inOrOut, SoapMessage response) { your code /*no need to call super*/ }",
"from(\"direct:example\") .to(\"spring-ws:http://foo.com/bar?messageFactory=#messageFactory&messageSender=#messageSender\")",
"<!-- authenticate using HTTP Basic Authentication --> <bean id=\"messageSender\" class=\"org.springframework.ws.transport.http.HttpComponentsMessageSender\"> <property name=\"credentials\"> <bean class=\"org.apache.commons.httpclient.UsernamePasswordCredentials\"> <constructor-arg index=\"0\" value=\"admin\"/> <constructor-arg index=\"1\" value=\"secret\"/> </bean> </property> </bean> <!-- force use of Sun SAAJ implementation, http://static.springsource.org/spring-ws/sites/1.5/faq.html#saaj-jboss --> <bean id=\"messageFactory\" class=\"org.springframework.ws.soap.saaj.SaajSoapMessageFactory\"> <property name=\"messageFactory\"> <bean class=\"com.sun.xml.messaging.saaj.soap.ver1_1.SOAPMessageFactory1_1Impl\"/> </property> </bean>",
"<web-app> <servlet> <servlet-name>spring-ws</servlet-name> <servlet-class>org.springframework.ws.transport.http.MessageDispatcherServlet</servlet-class> <load-on-startup>1</load-on-startup> </servlet> <servlet-mapping> <servlet-name>spring-ws</servlet-name> <url-pattern>/*</url-pattern> </servlet-mapping> </web-app>",
"<bean id=\"endpointMapping\" class=\"org.apache.camel.component.spring.ws.bean.CamelEndpointMapping\" /> <bean id=\"wsdl\" class=\"org.springframework.ws.wsdl.wsdl11.DefaultWsdl11Definition\"> <property name=\"schema\"> <bean class=\"org.springframework.xml.xsd.SimpleXsdSchema\"> <property name=\"xsd\" value=\"/WEB-INF/foobar.xsd\"/> </bean> </property> <property name=\"portTypeName\" value=\"FooBar\"/> <property name=\"locationUri\" value=\"/\"/> <property name=\"targetNamespace\" value=\"http://example.com/\"/> </bean>",
"from(\"spring-ws:rootqname:{http://example.com/}GetFoo?endpointMapping=#endpointMapping\") .convertBodyTo(String.class).to(mock:example)",
"from(\"spring-ws:soapaction:http://example.com/GetFoo?endpointMapping=#endpointMapping\") .convertBodyTo(String.class).to(mock:example)",
"from(\"spring-ws:uri:http://example.com/foobar?endpointMapping=#endpointMapping\") .convertBodyTo(String.class).to(mock:example)",
"from(\"spring-ws:xpathresult:abc?expression=//foobar&endpointMapping=#endpointMapping\") .convertBodyTo(String.class).to(mock:example)",
"<camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"spring-ws:beanname:QuoteEndpointDispatcher\" /> <to uri=\"mock:example\" /> </route> </camelContext> <bean id=\"legacyEndpointMapping\" class=\"org.springframework.ws.server.endpoint.mapping.PayloadRootQNameEndpointMapping\"> <property name=\"mappings\"> <props> <prop key=\"{http://example.com/}GetFuture\">FutureEndpointDispatcher</prop> <prop key=\"{http://example.com/}GetQuote\">QuoteEndpointDispatcher</prop> </props> </property> </bean> <bean id=\"QuoteEndpointDispatcher\" class=\"org.apache.camel.component.spring.ws.bean.CamelEndpointDispatcher\" /> <bean id=\"FutureEndpointDispatcher\" class=\"org.apache.camel.component.spring.ws.bean.CamelEndpointDispatcher\" />",
"JaxbDataFormat jaxb = new JaxbDataFormat(false); jaxb.setContextPath(\"com.example.model\"); from(\"direct:example\").marshal(jaxb).to(\"spring-ws:http://foo.com/bar\").unmarshal(jaxb);",
"from(\"spring-ws:rootqname:{http://example.com/}GetFoo?endpointMapping=#endpointMapping\").unmarshal(jaxb) .to(\"mock:example\").marshal(jaxb);"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-spring-webservice-component-starter |
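The producer options table above describes timeout, lazyStartProducer, and soapAction, but the section does not show them combined on one endpoint. The following Java route is an illustrative sketch only; the service URL, route name, and option values are assumptions, not taken from the Camel distribution.

import org.apache.camel.builder.RouteBuilder;

public class StockQuoteRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Call the remote service with a 5 second socket read timeout.
        // lazyStartProducer defers producer startup until the first message arrives,
        // so a temporarily unreachable service does not prevent the route from starting.
        from("direct:quote")
            .to("spring-ws:http://example.com/stockquote"
                + "?soapAction=http://example.com/GetQuote"
                + "&timeout=5000"
                + "&lazyStartProducer=true");
    }
}

A body sent to direct:quote should contain only the payload XML; as noted in section 129.7, Spring-WS adds the SOAP envelope itself.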
19.6. Setting Account Lockout Policies | A brute force attack occurs when a malefactor attempts to guess a password by simply slamming the server with multiple login attempts. An account lockout policy prevents brute force attacks by blocking an account from logging into the system after a certain number of login failures - even if the correct password is subsequently entered. Note A user account can be manually unlocked by an administrator using the ipa user-unlock command. Refer to Section 9.6, "Unlocking User Accounts After Password Failures" . 19.6.1. In the UI These attributes are available in the password policy form when a group-level password policy is created or when any password policy (including the global password policy) is edited. Click the Policy tab, and then click the Password Policies subtab. Click the name of the policy to edit. Set the account lockout attribute values. There are three parts to the account lockout policy: The number of failed login attempts before the account is locked ( Max Failures ). The time after a failed login attempt before the counter resets ( Failure reset interval ). Because honest mistakes do happen, the count of failed attempts is not kept forever; it naturally lapses after a certain amount of time. This is in seconds. How long an account is locked after the maximum number of failures is reached ( Lockout duration ). This is in seconds. 19.6.2. In the CLI There are three parts to the account lockout policy: The number of failed login attempts before the account is locked ( --maxfail ). How long an account is locked after the maximum number of failures is reached ( --lockouttime ). This is in seconds. The time after a failed login attempt before the counter resets ( --failinterval ). Because honest mistakes do happen, the count of failed attempts is not kept forever; it naturally lapses after a certain amount of time. This is in seconds. These account lockout attributes can all be set when a password policy is created with pwpolicy-add or added later using pwpolicy-mod . For example: | [
"[jsmith@ipaserver ~]USD kinit admin [jsmith@ipaserver ~]USD ipa pwpolicy-mod examplegroup --maxfail=4 --lockouttime=600 --failinterval=30"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/Setting_Account-lockout_Policies |
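The note above mentions the ipa user-unlock command without showing it, and it is often useful to verify a policy before relying on it. The following session is an illustrative sketch that assumes the examplegroup policy from the example above and a hypothetical locked user named jsmith; command output is omitted.

[jsmith@ipaserver ~]$ kinit admin
[jsmith@ipaserver ~]$ ipa pwpolicy-show examplegroup
[jsmith@ipaserver ~]$ ipa user-unlock jsmith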
Chapter 2. Additional release notes | Chapter 2. Additional release notes Release notes for additional related components and products not included in the core OpenShift Container Platform 4.18 release notes are available in the following documentation. Important The following release notes are for downstream Red Hat products only; upstream or community release notes for related products are not included. A AWS Load Balancer Operator B Builds for Red Hat OpenShift C cert-manager Operator for Red Hat OpenShift Cluster Observability Operator (COO) Compliance Operator Custom Metrics Autoscaler Operator D Red Hat Developer Hub Operator E External DNS Operator F File Integrity Operator K Kube Descheduler Operator L Logging M Migration Toolkit for Containers (MTC) N Network Observability Operator Network-bound Disk Encryption (NBDE) Tang Server Operator O OpenShift API for Data Protection (OADP) Red Hat OpenShift Dev Spaces Red Hat OpenShift distributed tracing platform Red Hat OpenShift GitOps Red Hat OpenShift Local (Upstream CRC documentation) Red Hat OpenShift Pipelines OpenShift sandboxed containers Red Hat OpenShift Serverless Red Hat OpenShift Service Mesh 2.x Red Hat OpenShift Service Mesh 3.x Red Hat OpenShift support for Windows Containers Red Hat OpenShift Virtualization Red Hat build of OpenTelemetry P Power monitoring for Red Hat OpenShift R Run Once Duration Override Operator S Secondary Scheduler Operator for Red Hat OpenShift Security Profiles Operator | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/release_notes/addtl-release-notes |
Chapter 133. KafkaConnector schema reference | Chapter 133. KafkaConnector schema reference Property Property type Description spec KafkaConnectorSpec The specification of the Kafka Connector. status KafkaConnectorStatus The status of the Kafka Connector. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-KafkaConnector-reference |
Chapter 2. Required custom resource upgrades | Debezium is a Kafka connector plugin that is deployed to an Apache Kafka cluster that runs on AMQ Streams on OpenShift. To prepare for OpenShift CRD v1 , in the current version of AMQ Streams the required version of the custom resource definitions (CRD) API is now set to v1beta2 . The v1beta2 version of the API replaces the previously supported v1beta1 and v1alpha1 API versions. Support for the v1alpha1 and v1beta1 API versions is now deprecated in AMQ Streams. Those earlier versions are now removed from most AMQ Streams custom resources, including the KafkaConnect and KafkaConnector resources that you use to configure Debezium connectors. The CRDs that are based on the v1beta2 API version use the OpenAPI structural schema. Custom resources based on the superseded v1alpha1 or v1beta1 APIs do not support structural schemas, and are incompatible with the current version of AMQ Streams. Before you upgrade to AMQ Streams 2.5, you must upgrade existing custom resources to use API version kafka.strimzi.io/v1beta2 . You can upgrade custom resources any time after you upgrade to AMQ Streams 1.7. You must complete the upgrade to the v1beta2 API before you upgrade to AMQ Streams 2.5 or newer. To facilitate the upgrade of CRDs and custom resources, AMQ Streams provides an API conversion tool that automatically upgrades them to a format that is compatible with v1beta2 . For more information about the tool and for the complete instructions about how to upgrade AMQ Streams, see Deploying and Upgrading AMQ Streams on OpenShift . Note The requirement to update custom resources applies only to Debezium deployments that run on AMQ Streams on OpenShift. The requirement does not apply to Debezium on Red Hat Enterprise Linux. | null | https://docs.redhat.com/en/documentation/red_hat_integration/2023.q4/html/debezium_user_guide/debezium-required-custom-resource-upgrades
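For orientation, this is roughly what an upgraded connector resource looks like once it targets the v1beta2 API. The sketch below is illustrative only: the resource name, the strimzi.io/cluster label, and the spec fields shown are assumptions based on common Debezium-on-AMQ-Streams usage, not definitions taken from this chapter; only the apiVersion value comes from the text above.

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: inventory-connector
  labels:
    strimzi.io/cluster: my-connect-cluster
spec:
  class: io.debezium.connector.mysql.MySqlConnector
  tasksMax: 1
  config:
    database.hostname: mysql
    database.port: 3306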
Chapter 1. Introduction to OpenStack networking | The Networking service (neutron) is the software-defined networking (SDN) component of Red Hat OpenStack Platform (RHOSP). The RHOSP Networking service manages internal and external traffic to and from virtual machine instances and provides core services such as routing, segmentation, DHCP, and metadata. It provides the API for virtual networking capabilities and management of switches, routers, ports, and firewalls. 1.1. Managing your RHOSP networks With the Red Hat OpenStack Platform (RHOSP) Networking service (neutron) you can effectively meet your site's networking goals. You can: Provide connectivity to VM instances within a project. Project networks primarily enable general (non-privileged) projects to manage networks without involving administrators. These networks are entirely virtual and require virtual routers to interact with other project networks and external networks such as the Internet. Project networks also usually provide DHCP and metadata services to instances. RHOSP supports the following project network types: flat, VLAN, VXLAN, GRE, and GENEVE. For more information, see Managing project networks . Connect VM instances to networks outside of a project. Provider networks provide connectivity like project networks. But only administrative (privileged) users can manage those networks because they interface with the physical network infrastructure. RHOSP supports the following provider network types: flat and VLAN. Inside project networks, you can use pools of floating IP addresses or a single floating IP address to direct ingress traffic to your VM instances. Using bridge mappings, you can associate a physical network name (an interface label) to a bridge created with OVS or OVN to allow provider network traffic to reach the physical network. For more information, see Connecting VM instances to physical networks . Create a network that is optimized for the edge. Operators can create routed provider networks that are typically used in edge deployments, and rely on multiple layer 2 network segments instead of traditional networks that have only one segment. Routed provider networks simplify the cloud for end users because they see only one network. For cloud operators, routed provider networks deliver scalability and fault tolerance. For example, if a major error occurs, only one segment is impacted instead of the entire network failing. For more information, see Deploying routed provider networks . Make your network resources highly available. You can use availability zones (AZs) and Virtual Router Redundancy Protocol (VRRP) to keep your network resources highly available. Operators group network nodes that are attached to different power sources on different AZs. Next, operators schedule crucial services such as DHCP, L3, FW, and so on to be on separate AZs. RHOSP uses VRRP to make project routers and floating IP addresses highly available. As an alternative to centralized routing, Distributed Virtual Routing (DVR) offers a routing design based on VRRP that deploys the L3 agent and schedules routers on every Compute node. For more information, see Using availability zones to make network resources highly available . Secure your network at the port level. Security groups provide a container for virtual firewall rules that control ingress (inbound to instances) and egress (outbound from instances) network traffic at the port level.
Security groups use a default deny policy and only contain rules that allow specific traffic. Each port can reference one or more security groups in an additive fashion. The firewall driver translates security group rules to a configuration for the underlying packet filtering technology such as iptables. For more information, see Configuring shared security groups . Manage port traffic. With allowed address pairs you identify a specific MAC address, IP address, or both to allow network traffic to pass through a port regardless of the subnet. When you define allowed address pairs, you are able to use protocols like VRRP (Virtual Router Redundancy Protocol) that float an IP address between two VM instances to enable fast data plane failover. For more information, see Configuring allowed address pairs . Optimize large overlay networks. Using the L2 Population driver you can enable broadcast, multicast, and unicast traffic to scale out on large overlay networks. For more information, see Configuring the L2 population driver . Set ingress and egress limits for traffic on VM instances. You can offer varying service levels for instances by using quality of service (QoS) policies to apply rate limits to egress and ingress traffic. You can apply QoS policies to individual ports. You can also apply QoS policies to a project network, where ports with no specific policy attached inherit the policy. For more information, see Configuring Quality of Service (QoS) policies . Manage the amount of network resources RHOSP projects can create. With the Networking service quota options you can set limits on the amount of network resources project users can create. This includes resources such as ports, subnets, networks, and so on. For more information, see Managing project quotas . Optimize your VM instances for Network Functions Virtualization (NFV). Instances can send and receive VLAN-tagged traffic over a single virtual NIC. This is particularly useful for NFV applications (VNFs) that expect VLAN-tagged traffic, allowing a single virtual NIC to serve multiple customers or services. In a VLAN transparent network, you set up VLAN tagging in the VM instances. The VLAN tags are transferred over the network and consumed by the VM instances on the same VLAN, and ignored by other instances and devices. VLAN trunks support VLAN-aware instances by combining VLANs into a single trunked port. For more information, see VLAN-aware instances . Control which projects can attach instances to a shared network. Using role-based access control (RBAC) policies in the RHOSP Networking service, cloud administrators can remove the ability for some projects to create networks and can instead allow them to attach to pre-existing networks that correspond to their project. For more information, see Configuring RBAC policies . 1.2. Networking service components The Red Hat OpenStack Platform (RHOSP) Networking service (neutron) includes the following components: API server The RHOSP networking API includes support for Layer 2 networking and IP Address Management (IPAM), as well as an extension for a Layer 3 router construct that enables routing between Layer 2 networks and gateways to external networks. RHOSP networking includes a growing list of plug-ins that enable interoperability with various commercial and open source network technologies, including routers, switches, virtual switches and software-defined networking (SDN) controllers. 
Modular Layer 2 (ML2) plug-in and agents ML2 plugs and unplugs ports, creates networks or subnets, and provides IP addressing. Messaging queue Accepts and routes RPC requests between agents to complete API operations. The message queue is used in the ML2 plug-in for RPC between the neutron server and neutron agents that run on each hypervisor, in the ML2 mechanism drivers for Open vSwitch and Linux bridge. 1.3. Modular Layer 2 (ML2) networking Modular Layer 2 (ML2) is the Red Hat OpenStack Platform (RHOSP) networking core plug-in. The ML2 modular design enables the concurrent operation of mixed network technologies through mechanism drivers. Open Virtual Network (OVN) is the default mechanism driver used with ML2. The ML2 framework distinguishes between the two kinds of drivers that can be configured: Type drivers Define how an RHOSP network is technically realized. Each available network type is managed by an ML2 type driver, and they maintain any required type-specific network state. Type drivers validate the type-specific information for provider networks and are responsible for the allocation of a free segment in project networks. Examples of type drivers are GENEVE, GRE, VXLAN, and so on. Mechanism drivers Define the mechanism to access an RHOSP network of a certain type. The mechanism driver takes the information established by the type driver and applies it to the networking mechanisms that have been enabled. Examples of mechanism drivers are Open Virtual Networking (OVN) and Open vSwitch (OVS). Mechanism drivers can employ L2 agents, and by using RPC interact directly with external devices or controllers. You can use multiple mechanism and type drivers simultaneously to access different ports of the same virtual network. Additional resources Section 1.8, "Modular Layer 2 (ML2) type and mechanism driver compatibility" 1.4. ML2 network types You can operate multiple network segments at the same time. ML2 supports the use and interconnection of multiple network segments. You don't have to bind a port to a network segment because ML2 binds ports to segments with connectivity. Depending on the mechanism driver, ML2 supports the following network segment types: Flat VLAN GENEVE tunnels VXLAN and GRE tunnels Flat All virtual machine (VM) instances reside on the same network, which can also be shared with the hosts. No VLAN tagging or other network segregation occurs. VLAN With RHOSP networking, users can create multiple provider or project networks using VLAN IDs (802.1Q tagged) that correspond to VLANs present in the physical network. This allows instances to communicate with each other across the environment. They can also communicate with dedicated servers, firewalls, load balancers and other network infrastructure on the same Layer 2 VLAN. You can use VLANs to segment network traffic for computers running on the same switch. This means that you can logically divide your switch by configuring the ports to be members of different networks - they are basically mini-LANs that you can use to separate traffic for security reasons. For example, if your switch has 24 ports in total, you can assign ports 1-6 to VLAN200, and ports 7-18 to VLAN201. As a result, computers connected to VLAN200 are completely separate from those on VLAN201; they cannot communicate directly, and if they wanted to, the traffic must pass through a router as if they were two separate physical switches. Firewalls can also be useful for governing which VLANs can communicate with each other.
GENEVE tunnels GENEVE recognizes and accommodates changing capabilities and needs of different devices in network virtualization. It provides a framework for tunneling rather than being prescriptive about the entire system. GENEVE flexibly defines the content of the metadata that is added during encapsulation and tries to adapt to various virtualization scenarios. It uses UDP as its transport protocol and is dynamic in size using extensible option headers. GENEVE supports unicast, multicast, and broadcast. The GENEVE type driver is compatible with the ML2/OVN mechanism driver. VXLAN and GRE tunnels VXLAN and GRE use network overlays to support private communication between instances. An RHOSP networking router is required to enable traffic to traverse outside of the GRE or VXLAN project network. A router is also required to connect directly-connected project networks with external networks, including the internet; the router provides the ability to connect to instances directly from an external network using floating IP addresses. VXLAN and GRE type drivers are compatible with the ML2/OVS mechanism driver. Additional resources Section 1.8, "Modular Layer 2 (ML2) type and mechanism driver compatibility" 1.5. Modular Layer 2 (ML2) mechanism drivers Modular Layer 2 (ML2) plug-ins are implemented as mechanisms with a common code base. This approach enables code reuse and eliminates much of the complexity around code maintenance and testing. You enable mechanism drivers using the Orchestration service (heat) parameter, NeutronMechanismDrivers . Here is an example from a heat custom environment file: The order in which you specify the mechanism drivers matters. In the earlier example, if you want to bind a port using the baremetal mechanism driver, then you must specify baremetal before ansible . Otherwise, the ansible driver will bind the port, because it precedes baremetal in the list of values for NeutronMechanismDrivers . Red Hat chose ML2/OVN as the default mechanism driver for all new deployments starting with RHOSP 15 because it offers immediate advantages over the ML2/OVS mechanism driver for most customers today. Those advantages multiply with each release while we continue to enhance and improve the ML2/OVN feature set. Support is available for the deprecated ML2/OVS mechanism driver through the RHOSP 17 releases. During this time, the ML2/OVS driver remains in maintenance mode, receiving bug fixes and normal support, and most new feature development happens in the ML2/OVN mechanism driver. In RHOSP 18.0, Red Hat plans to completely remove the ML2/OVS mechanism driver and stop supporting it. If your existing Red Hat OpenStack Platform (RHOSP) deployment uses the ML2/OVS mechanism driver, start now to evaluate a plan to migrate to the ML2/OVN mechanism driver. Migration is supported in RHOSP 16.2 and will be supported in RHOSP 17.1. Migration tools are available in RHOSP 17.0 for test purposes only. Red Hat requires that you file a proactive support case before attempting a migration from ML2/OVS to ML2/OVN. Red Hat does not support migrations without the proactive support case. See How to open a proactive case for a planned activity on Red Hat OpenStack Platform? Additional resources Neutron in Component, Plug-In, and Driver Support in Red Hat OpenStack Platform Environment files in the Director Installation and Usage guide Including environment files in overcloud creation in the Director Installation and Usage guide 1.6.
Open vSwitch Open vSwitch (OVS) is a software-defined networking (SDN) virtual switch similar to the Linux software bridge. OVS provides switching services to virtualized networks with support for industry standard OpenFlow and sFlow. OVS can also integrate with physical switches using layer 2 features, such as STP, LACP, and 802.1Q VLAN tagging. Open vSwitch version 1.11.0-1.el6 or later also supports tunneling with VXLAN and GRE. Note To mitigate the risk of network loops in OVS, only a single interface or a single bond can be a member of a given bridge. If you require multiple bonds or interfaces, you can configure multiple bridges. Additional resources Network Interface Bonding in the Director Installation and Usage guide. 1.7. Open Virtual Network (OVN) Open Virtual Network (OVN), is a system to support logical network abstraction in virtual machine and container environments. Sometimes called open source virtual networking for Open vSwitch, OVN complements the existing capabilities of OVS to add native support for logical network abstractions, such as logical L2 and L3 overlays, security groups and services such as DHCP. A physical network comprises physical wires, switches, and routers. A virtual network extends a physical network into a hypervisor or container platform, bridging VMs or containers into the physical network. An OVN logical network is a network implemented in software that is insulated from physical networks by tunnels or other encapsulations. This allows IP and other address spaces used in logical networks to overlap with those used on physical networks without causing conflicts. Logical network topologies can be arranged without regard for the topologies of the physical networks on which they run. Thus, VMs that are part of a logical network can migrate from one physical machine to another without network disruption. The encapsulation layer prevents VMs and containers connected to a logical network from communicating with nodes on physical networks. For clustering VMs and containers, this can be acceptable or even desirable, but in many cases VMs and containers do need connectivity to physical networks. OVN provides multiple forms of gateways for this purpose. An OVN deployment consists of several components: Cloud Management System (CMS) integrates OVN into a physical network by managing the OVN logical network elements and connecting the OVN logical network infrastructure to physical network elements. Some examples include OpenStack and OpenShift. OVN databases stores data representing the OVN logical and physical networks. Hypervisors run Open vSwitch and translate the OVN logical network into OpenFlow on a physical or virtual machine. Gateways extends a tunnel-based OVN logical network into a physical network by forwarding packets between tunnels and the physical network infrastructure. 1.8. Modular Layer 2 (ML2) type and mechanism driver compatibility Refer to the following table when planning your Red Hat OpenStack Platform (RHOSP) data networks to determine the network types each Modular Layer 2 (ML2) mechanism driver supports. Table 1.1. Network types supported by ML2 mechanism drivers Mechanism driver Supports these type drivers Flat GRE VLAN VXLAN GENEVE Open Virtual Network (OVN) Yes No Yes Yes [1] Yes Open vSwitch (OVS) Yes Yes Yes Yes No [1] ML2/OVN VXLAN support is limited to 4096 networks and 4096 ports per network. Also, ACLs that rely on the ingress port do not work with ML2/OVN and VXLAN, because the ingress port is not passed. 1.9. 
Extension drivers for the RHOSP Networking service The Red Hat OpenStack Platform (RHOSP) Networking service (neutron) is extensible. Extensions serve two purposes: they allow the introduction of new features in the API without requiring a version change and they allow the introduction of vendor specific niche functionality. Applications can programmatically list available extensions by performing a GET on the /extensions URI. Note that this is a versioned request; that is, an extension available in one API version might not be available in another. The ML2 plug-in also supports extension drivers that allows other pluggable drivers to extend the core resources implemented in the ML2 plug-in for network objects. Examples of extension drivers include support for QoS, port security, and so on. | [
"parameter_defaults: NeutronMechanismDrivers: ansible,ovn,baremetal"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/networking_guide/networking-overview_rhosp-network |
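Reading the mechanism driver example above together with the type driver discussion in section 1.4, a custom environment file for an ML2/OVN deployment might also pin the type drivers and the project network type. The parameter names and values below are an illustrative assumption based on a typical ML2/OVN configuration; they are not taken from this chapter.

parameter_defaults:
  NeutronMechanismDrivers: ovn
  NeutronTypeDrivers: geneve,vlan,flat
  NeutronNetworkType: geneve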
Chapter 27. Storage [operator.openshift.io/v1] | Chapter 27. Storage [operator.openshift.io/v1] Description Storage provides a means to configure an operator to manage the cluster storage operator. cluster is the canonical name. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 27.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object status holds observed values from the cluster. They may not be overridden. 27.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description logLevel string logLevel is an intent based logging for an overall component. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for their operands. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". managementState string managementState indicates whether and how the operator should manage the component observedConfig `` observedConfig holds a sparse config that controller has observed from the cluster state. It exists in spec because it is an input to the level for the operator operatorLogLevel string operatorLogLevel is an intent based logging for the operator itself. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for themselves. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". unsupportedConfigOverrides `` unsupportedConfigOverrides overrides the final configuration that was computed by the operator. Red Hat does not support the use of this field. Misuse of this field could lead to unexpected behavior or conflict with other configuration options. Seek guidance from the Red Hat support before using this field. Use of this property blocks cluster upgrades, it must be removed before upgrading your cluster. vsphereStorageDriver string VSphereStorageDriver indicates the storage driver to use on VSphere clusters. Once this field is set to CSIWithMigrationDriver, it can not be changed. If this is empty, the platform will choose a good default, which may change over time without notice. The current default is CSIWithMigrationDriver and may not be changed. DEPRECATED: This field will be removed in a future release. 27.1.2. .status Description status holds observed values from the cluster. They may not be overridden. Type object Property Type Description conditions array conditions is a list of conditions and their status conditions[] object OperatorCondition is just the standard condition fields. 
generations array generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. generations[] object GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. observedGeneration integer observedGeneration is the last generation change you've dealt with readyReplicas integer readyReplicas indicates how many replicas are ready and at the desired state version string version is the level this availability applies to 27.1.3. .status.conditions Description conditions is a list of conditions and their status Type array 27.1.4. .status.conditions[] Description OperatorCondition is just the standard condition fields. Type object Required type Property Type Description lastTransitionTime string message string reason string status string type string 27.1.5. .status.generations Description generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. Type array 27.1.6. .status.generations[] Description GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. Type object Property Type Description group string group is the group of the thing you're tracking hash string hash is an optional field set for resources without generation that are content sensitive like secrets and configmaps lastGeneration integer lastGeneration is the last generation of the workload controller involved name string name is the name of the thing you're tracking namespace string namespace is where the thing you're tracking is resource string resource is the resource type of the thing you're tracking 27.2. API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1/storages DELETE : delete collection of Storage GET : list objects of kind Storage POST : create a Storage /apis/operator.openshift.io/v1/storages/{name} DELETE : delete a Storage GET : read the specified Storage PATCH : partially update the specified Storage PUT : replace the specified Storage /apis/operator.openshift.io/v1/storages/{name}/status GET : read status of the specified Storage PATCH : partially update status of the specified Storage PUT : replace status of the specified Storage 27.2.1. /apis/operator.openshift.io/v1/storages HTTP method DELETE Description delete collection of Storage Table 27.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Storage Table 27.2. HTTP responses HTTP code Reponse body 200 - OK StorageList schema 401 - Unauthorized Empty HTTP method POST Description create a Storage Table 27.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 27.4. Body parameters Parameter Type Description body Storage schema Table 27.5. HTTP responses HTTP code Response body 200 - OK Storage schema 201 - Created Storage schema 202 - Accepted Storage schema 401 - Unauthorized Empty 27.2.2. /apis/operator.openshift.io/v1/storages/{name} Table 27.6. Global path parameters Parameter Type Description name string name of the Storage HTTP method DELETE Description delete a Storage Table 27.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 27.8. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Storage Table 27.9. HTTP responses HTTP code Response body 200 - OK Storage schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Storage Table 27.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 27.11. HTTP responses HTTP code Response body 200 - OK Storage schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Storage Table 27.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 27.13. Body parameters Parameter Type Description body Storage schema Table 27.14. HTTP responses HTTP code Response body 200 - OK Storage schema 201 - Created Storage schema 401 - Unauthorized Empty 27.2.3. /apis/operator.openshift.io/v1/storages/{name}/status Table 27.15. Global path parameters Parameter Type Description name string name of the Storage HTTP method GET Description read status of the specified Storage Table 27.16. HTTP responses HTTP code Response body 200 - OK Storage schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Storage Table 27.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 27.18. HTTP responses HTTP code Response body 200 - OK Storage schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Storage Table 27.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 27.20. Body parameters Parameter Type Description body Storage schema Table 27.21. HTTP responses HTTP code Response body 200 - OK Storage schema 201 - Created Storage schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/operator_apis/storage-operator-openshift-io-v1 |
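As an illustration of the query parameters documented above, the PATCH endpoint can be exercised with a raw API request. This is a hedged sketch only: it assumes the cluster-scoped Storage object is named cluster and that spec.logLevel is an accepted field on this operator resource; neither assumption comes from the reference above, so adjust both for your cluster.

# Sketch: server-side dry-run PATCH using the dryRun and fieldValidation
# query parameters described above (object name and field are assumptions).
TOKEN=$(oc whoami -t)
APISERVER=$(oc whoami --show-server)
curl -k -X PATCH \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/merge-patch+json" \
  "${APISERVER}/apis/operator.openshift.io/v1/storages/cluster?dryRun=All&fieldValidation=Strict" \
  -d '{"spec":{"logLevel":"Debug"}}'

Because dryRun=All is set, the server validates and processes the request but does not persist the change, which makes this a safe way to observe the fieldValidation behavior.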
Chapter 13. Configuring Skupper sites using YAML | Chapter 13. Configuring Skupper sites using YAML Using YAML files to configure Skupper allows you to use source control to track and manage Skupper network changes. 13.1. Creating a Skupper site using YAML Using YAML files to create Skupper sites allows you to use source control to track and manage Skupper network changes. Prerequisites Skupper is installed in the cluster or namespace you want to target. You are logged into the cluster. Procedure Create a YAML file to define the site, for example, my-site.yaml : apiVersion: v1 kind: ConfigMap metadata: name: skupper-site data: name: my-site console: "true" console-user: "admin" console-password: "changeme" flow-collector: "true" The YAML creates a site with a console, and you can create tokens from this site. To create a site that has no ingress, set ingress: "none" in the ConfigMap data: apiVersion: v1 kind: ConfigMap metadata: name: skupper-site data: name: my-site ingress: "none" Apply the YAML file to your cluster: kubectl apply -f ~/my-site.yml Additional resources See Section 13.3, "Site ConfigMap YAML reference" for more information. 13.2. Configuring services using annotations After creating and linking sites, you can use Kubernetes annotations to control which services are available on the service network. 13.2.1. Exposing simple services on a service network using annotations This section provides an alternative to the skupper expose command, allowing you to annotate existing resources to expose simple services on the service network. Prerequisites A site with a service you want to expose Procedure Log into the namespace in your cluster that is configured as a site. Create a deployment, some pods, or a service in one of your sites, for example: $ kubectl create deployment hello-world-backend --image quay.io/skupper/hello-world-backend This step is not Skupper-specific, that is, this process is unchanged from standard processes for your cluster. Annotate the Kubernetes resource to create a service that can communicate on the service network, for example: $ kubectl annotate deployment backend "skupper.io/address=backend" "skupper.io/port=8080" "skupper.io/proxy=tcp" The annotations include: skupper.io/proxy - the protocol you want to use, tcp , http or http2 . This is the only annotation that is required. For example, if you annotate a simple deployment named backend with skupper.io/proxy=tcp , the service is exposed as backend and the containerPort value of the deployment is used as the port number. skupper.io/address - the name of the service on the service network. skupper.io/port - one or more ports for the service on the service network. Note When exposing services, rather than other resources like deployments, you can use the skupper.io/target annotation to avoid modifying the original service. For example, if you want to expose the backend service: $ kubectl annotate service backend "skupper.io/address=van-backend" "skupper.io/port=8080" \ "skupper.io/proxy=tcp" "skupper.io/target=backend" This allows you to delete and recreate the backend service without having to apply the annotation again. Check that you have exposed the service: $ skupper service status -v Services exposed through Skupper: ╰─ backend:8080 (tcp) ╰─ Sites: ├─ 4d80f485-52fb-4d84-b10b-326b96e723b2(west) │ policy: disabled ╰─ 316fbe31-299b-490b-9391-7b46507d76f1(east) │ policy: disabled ╰─ Targets: ╰─ backend:8080 name=backend-9d84544df-rbzjx Note The related targets for services are only displayed when the target is available on the current cluster. 13.2.2. 
Understanding Skupper annotations Annotations allow you to expose services on the service network. This section provides details on the scope of those annotations skupper.io/address The name of the service on the service network. Applies to: Deployments StatefulSets DaemonSets Services skupper.io/port The port for the service on the service network. Applies to: Deployments StatefulSets DaemonSets skupper.io/proxy The protocol you want to use, tcp , http or http2 . Applies to: Deployments StatefulSets DaemonSets Services skupper.io/target The name of the target service you want to expose. Applies to: Services skupper.io/service-labels A comma separated list of label keys and values for the exposed service. You can use this annotation to set up labels for monitoring exposed services. Applies to: Deployments DaemonSets Services 13.3. Site ConfigMap YAML reference Using YAML files to configure Skupper requires that you understand all the fields so that you provision the site you require. The following YAML defines a Skupper site: apiVersion: v1 data: name: my-site console: "true" flow-collector: "true" console-authentication: internal console-user: "username" console-password: "password" cluster-local: "false" edge: "false" service-sync: "true" ingress: "none" kind: ConfigMap metadata: name: skupper-site name Specifies the site name. console Enables the skupper console, defaults to false . Note You must enable console and flow-collector for the console to function. flow-collector Enables the flow collector, defaults to false . console-authentication Specifies the skupper console authentication method. The options are openshift , internal , unsecured . console-user Username for the internal authentication option. console-password Password for the internal authentication option. cluster-local Only accept connections from within the local cluster, defaults to false . edge Specifies whether an edge site is created, defaults to false . service-sync Specifies whether the services are synchronized across the service network, defaults to true . ingress Specifies whether the site supports ingress. If you do not specify a value, the default ingress ('loadbalancer' on Kubernetes, 'route' on OpenShift) is enabled. This allows you to create tokens usable from remote sites. Note All ingress types are supported using the same parameters as the skupper CLI. | [
"apiVersion: v1 kind: ConfigMap metadata: name: skupper-site data: name: my-site console: \"true\" console-user: \"admin\" console-password: \"changeme\" flow-collector: \"true\"",
"apiVersion: v1 kind: ConfigMap metadata: name: skupper-site data: name: my-site ingress: \"none\"",
"apply -f ~/my-site.yml",
"kubectl create deployment hello-world-backend --image quay.io/skupper/hello-world-backend",
"kubectl annotate deployment backend \"skupper.io/address=backend\" \"skupper.io/port=8080\" \"skupper.io/proxy=tcp\"",
"kubectl annotate service backend \"skupper.io/address=van-backend\" \"skupper.io/port=8080\" \"skupper.io/proxy=tcp\" \"skupper.io/target=backend\"",
"skupper service status -v Services exposed through Skupper: ╰─ backend:8080 (tcp) ╰─ Sites: ├─ 4d80f485-52fb-4d84-b10b-326b96e723b2(west) │ policy: disabled ╰─ 316fbe31-299b-490b-9391-7b46507d76f1(east) │ policy: disabled ╰─ Targets: ╰─ backend:8080 name=backend-9d84544df-rbzjx",
"apiVersion: v1 data: name: my-site console: \"true\" flow-collector: \"true\" console-authentication: internal console-user: \"username\" console-password: \"password\" cluster-local: \"false\" edge: \"false\" service-sync: \"true\" ingress: \"none\" kind: ConfigMap metadata: name: skupper-site"
] | https://docs.redhat.com/en/documentation/red_hat_service_interconnect/1.8/html/using_service_interconnect/skupper-declarative |
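Because the site definition is an ordinary ConfigMap, an existing site can also be modified declaratively after creation. The following sketch is illustrative only; the data keys shown are taken from the reference above, but confirm how your Skupper version reacts to changing them at runtime.

# Hedged sketch: disable the console and flow collector on an existing site
# by patching the skupper-site ConfigMap described in this chapter.
kubectl patch configmap skupper-site --type merge \
  -p '{"data":{"console":"false","flow-collector":"false"}}'

# Review the resulting site definition
kubectl get configmap skupper-site -o yaml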
Chapter 9. File Systems | Chapter 9. File Systems XFS runtime statistics are available per file system in the /sys/fs/ directory The existing XFS global statistics directory has been moved from the /proc/fs/xfs/ directory to the /sys/fs/xfs/ directory while maintaining compatibility with earlier versions with a symbolic link in /proc/fs/xfs/stat . New subdirectories will be created and maintained for statistics per file system in /sys/fs/xfs/ , for example /sys/fs/xfs/sdb7/stats and /sys/fs/xfs/sdb8/stats . Previously, XFS runtime statistics were available only per server. Now, XFS runtime statistics are available per device. (BZ#1269281) A progress indicator has been added to mkfs.gfs2 The mkfs.gfs2 tool now reports its progress when building journals and resource groups. As mkfs.gfs2 can take some time to complete with large or slow devices, it was not previously clear if mkfs.gfs2 was working correctly until a report was printed. A progress bar has been added to mkfs.gfs2 to indicate progress. (BZ# 1196321 ) fsck.gfs2 has been enhanced to require considerably less memory on large file systems Prior to this update, the Global File System 2 (GFS2) file system checker, fsck.gfs2, required a large amount of memory to run on large file systems, and running fsck.gfs2 on file systems larger than 100 TB was therefore impractical. With this update, fsck.gfs2 has been enhanced to run in considerably less memory, which allows for better scalability and makes it practical to run fsck.gfs2 on much larger file systems. (BZ# 1268045 ) GFS2 has been enhanced to allow better scalability of its glocks In the Global File System 2 (GFS2), opening or creating a large number of files, even if they are closed again, leaves a lot of GFS2 cluster locks (glocks) in slab memory. When the number of glocks was in the millions, GFS2 previously started to slow down, especially with file creates: GFS2 became gradually slower to create files. With this update, GFS2 has been enhanced to allow better scalability of its glocks, and it can now maintain good performance across millions of file creates. (BZ#1172819) xfsprogs rebased to version 4.5.0 The xfsprogs packages have been upgraded to upstream version 4.5.0, which provides a number of bug fixes and enhancements over the previous version. The Red Hat Enterprise Linux 7.3 kernel RPM requires the upgraded version of xfsprogs because the new default on-disk format requires special handling of log cycle numbers when running the xfs_repair utility. Notable changes include: Metadata cyclic redundancy checks (CRCs) and directory entry file types are now enabled by default. To replicate the older mkfs on-disk format used in earlier versions of Red Hat Enterprise Linux 7, use the -m crc=0 -n ftype=0 options on the mkfs.xfs command line. The GETNEXTQUOTA interface is now implemented in xfs_quota , which allows fast iteration over all on-disk quotas even when the number of entries in the user database is extremely large. Also, note the following differences between upstream and Red Hat Enterprise Linux 7.3: The experimental sparse inode feature is not available. The free inode btree (finobt) feature is disabled by default to ensure compatibility with earlier Red Hat Enterprise Linux 7 kernel versions. (BZ# 1309498 ) The CIFS kernel module rebased to version 6.4 The Common Internet File System (CIFS) kernel module has been upgraded to upstream version 6.4, which provides a number of bug fixes and enhancements over the previous version. 
Notably: Support for Kerberos authentication has been added. Support for MFSymlink has been added. The mknod and mkfifo named pipes are now allowed. Also, several memory leaks have been identified and fixed. (BZ#1337587) quota now supports suppressing warnings about NFS mount points with unavailable quota RPC service If a user listed disk quotas with the quota tool, and the local system mounted a network file system with an NFS server that did not provide the quota RPC service, the quota tool returned an "error while getting quota from server" error message. Now, the quota tools can distinguish between an unreachable NFS server and a reachable NFS server without the quota RPC service, and no error is reported in the second case. (BZ# 1155584 ) The /proc/ directory now uses the red-black tree implementation to improve the performance Previously, the /proc/ directory entries implementation used a singly linked list, which slowed down the manipulation of directories with a large number of entries. With this update, the singly linked list implementation has been replaced by a red-black tree implementation, which improves the performance of directory entry manipulation. (BZ#1210350) | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.3_release_notes/new_features_file_systems |
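As a quick illustration of the per-device XFS statistics layout described above, the following commands read the global statistics through the compatibility symlink and then the statistics for one device. The device name sdb7 is only an example, and the exact file name under the per-device stats directory may vary by kernel version.

# Global XFS statistics (compatibility symlink retained in /proc)
cat /proc/fs/xfs/stat

# Per-device XFS runtime statistics (example device; adjust for your system)
cat /sys/fs/xfs/sdb7/stats/stats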
Chapter 6. Role [authorization.openshift.io/v1] | Chapter 6. Role [authorization.openshift.io/v1] Description Role is a logical grouping of PolicyRules that can be referenced as a unit by RoleBindings. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required rules 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata rules array Rules holds all the PolicyRules for this Role rules[] object PolicyRule holds information that describes a policy rule, but does not contain information about who the rule applies to or which namespace the rule applies to. 6.1.1. .rules Description Rules holds all the PolicyRules for this Role Type array 6.1.2. .rules[] Description PolicyRule holds information that describes a policy rule, but does not contain information about who the rule applies to or which namespace the rule applies to. Type object Required verbs resources Property Type Description apiGroups array (string) APIGroups is the name of the APIGroup that contains the resources. If this field is empty, then both kubernetes and origin API groups are assumed. That means that if an action is requested against one of the enumerated resources in either the kubernetes or the origin API group, the request will be allowed attributeRestrictions RawExtension AttributeRestrictions will vary depending on what the Authorizer/AuthorizationAttributeBuilder pair supports. If the Authorizer does not recognize how to handle the AttributeRestrictions, the Authorizer should report an error. nonResourceURLs array (string) NonResourceURLsSlice is a set of partial urls that a user should have access to. *s are allowed, but only as the full, final step in the path This name is intentionally different than the internal type so that the DefaultConvert works nicely and because the ordering may be different. resourceNames array (string) ResourceNames is an optional white list of names that the rule applies to. An empty set means that everything is allowed. resources array (string) Resources is a list of resources this rule applies to. ResourceAll represents all resources. verbs array (string) Verbs is a list of Verbs that apply to ALL the ResourceKinds and AttributeRestrictions contained in this rule. VerbAll represents all kinds. 6.2. API endpoints The following API endpoints are available: /apis/authorization.openshift.io/v1/roles GET : list objects of kind Role /apis/authorization.openshift.io/v1/namespaces/{namespace}/roles GET : list objects of kind Role POST : create a Role /apis/authorization.openshift.io/v1/namespaces/{namespace}/roles/{name} DELETE : delete a Role GET : read the specified Role PATCH : partially update the specified Role PUT : replace the specified Role 6.2.1. 
/apis/authorization.openshift.io/v1/roles HTTP method GET Description list objects of kind Role Table 6.1. HTTP responses HTTP code Response body 200 - OK RoleList schema 401 - Unauthorized Empty 6.2.2. /apis/authorization.openshift.io/v1/namespaces/{namespace}/roles HTTP method GET Description list objects of kind Role Table 6.2. HTTP responses HTTP code Response body 200 - OK RoleList schema 401 - Unauthorized Empty HTTP method POST Description create a Role Table 6.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.4. Body parameters Parameter Type Description body Role schema Table 6.5. HTTP responses HTTP code Response body 200 - OK Role schema 201 - Created Role schema 202 - Accepted Role schema 401 - Unauthorized Empty 6.2.3. /apis/authorization.openshift.io/v1/namespaces/{namespace}/roles/{name} Table 6.6. Global path parameters Parameter Type Description name string name of the Role HTTP method DELETE Description delete a Role Table 6.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 6.8. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Role Table 6.9. HTTP responses HTTP code Response body 200 - OK Role schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Role Table 6.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.11. HTTP responses HTTP code Response body 200 - OK Role schema 201 - Created Role schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Role Table 6.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.13. Body parameters Parameter Type Description body Role schema Table 6.14. HTTP responses HTTP code Response body 200 - OK Role schema 201 - Created Role schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/role_apis/role-authorization-openshift-io-v1 |
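As a concrete illustration of the schema above, the following manifest is a minimal Role that could be submitted to the namespaced POST endpoint. The name, namespace, and rule values are hypothetical examples, not values taken from this reference.

apiVersion: authorization.openshift.io/v1
kind: Role
metadata:
  name: pod-reader        # hypothetical name
  namespace: my-project   # hypothetical namespace
rules:
  # resources and verbs are the required PolicyRule fields
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]

Saved as pod-reader-role.yaml, it could be created with oc create -f pod-reader-role.yaml, which drives the POST endpoint documented in Table 6.3 through Table 6.5.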
5.6. Other ext4 File System Utilities | 5.6. Other ext4 File System Utilities Red Hat Enterprise Linux 7 also features other utilities for managing ext4 file systems: e2fsck Used to repair an ext4 file system. This tool checks and repairs an ext4 file system more efficiently than ext3, thanks to updates in the ext4 disk structure. e2label Changes the label on an ext4 file system. This tool also works on ext2 and ext3 file systems. quota Controls and reports on disk space (blocks) and file (inode) usage by users and groups on an ext4 file system. For more information on using quota , refer to man quota and Section 17.1, "Configuring Disk Quotas" . fsfreeze To suspend access to a file system, use the command # fsfreeze -f mount-point to freeze it and # fsfreeze -u mount-point to unfreeze it. This halts access to the file system and creates a stable image on disk. Note It is unnecessary to use fsfreeze for device-mapper drives. For more information see the fsfreeze(8) manpage. As demonstrated in Section 5.2, "Mounting an ext4 File System" , the tune2fs utility can also adjust configurable file system parameters for ext2, ext3, and ext4 file systems. In addition, the following tools are also useful in debugging and analyzing ext4 file systems: debugfs Debugs ext2, ext3, or ext4 file systems. e2image Saves critical ext2, ext3, or ext4 file system metadata to a file. For more information about these utilities, refer to their respective man pages. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/ext4others |
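The following command sketches show typical invocations of the utilities above; the device name and paths are examples only and must be adjusted for your system.

# Check and repair an ext4 file system (run on an unmounted device)
e2fsck -f /dev/sdb1

# Set or change the volume label
e2label /dev/sdb1 mydata

# Save critical file system metadata to a file for offline analysis
e2image /dev/sdb1 /tmp/sdb1.e2i

# Inspect a file's metadata with debugfs
debugfs -R 'stat /path/to/file' /dev/sdb1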
11.5. Configuring Rules with pcs | 11.5. Configuring Rules with pcs To configure a rule using pcs , configure a location constraint that uses rules, as described in Section 7.1.3, "Using Rules to Determine Resource Location" . To remove a rule, use the following command. If the rule that you are removing is the last rule in its constraint, the constraint will be removed. | [
"pcs constraint rule remove rule_id"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/configuring_rules |
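A hedged sketch of the removal workflow: list the configured constraints with their IDs first, then remove the rule by its ID. The rule ID shown is hypothetical.

# List constraints, including constraint and rule IDs
pcs constraint --full

# Remove a rule by its ID (hypothetical ID shown)
pcs constraint rule remove my-rule-id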
Chapter 316. SpEL Language | Chapter 316. SpEL Language Available as of Camel version 2.7 Camel allows Spring Expression Language (SpEL) to be used as an Expression or Predicate in the DSL or XML Configuration. Note It is recommended to use SpEL in Spring runtimes. However, from Camel 2.21 onwards you can use SpEL in other runtimes (some SpEL functionality may not be available when not running in a Spring runtime). 316.1. Variables The following variables are available in expressions and predicates written in SpEL: Variable Type Description this Exchange the Exchange is the root object exchange Exchange the Exchange object exception Throwable the Exchange exception (if any) exchangeId String the exchange id fault Message the Fault message (if any) body Object The IN message body. request Message the exchange.in message response Message the exchange.out message (if any) properties Map the exchange properties property(name) Object the property by the given name property(name, type) Type the property by the given name as the given type 316.2. Options The SpEL language supports 1 option, which is listed below. Name Default Java Type Description trim true Boolean Whether to trim the value to remove leading and trailing whitespaces and line breaks 316.3. Samples 316.3.1. Expression templating SpEL expressions need to be surrounded by #{ } delimiters since expression templating is enabled. This allows you to combine SpEL expressions with regular text and use this as an extremely lightweight template language. For example, if you construct the following route: from("direct:example") .setBody(spel("Hello #{request.body}! What a beautiful #{request.headers['dayOrNight']}")) .to("mock:result"); In the route above, notice spel is a static method which we need to import from org.apache.camel.language.spel.SpelExpression.spel , as we use spel as an Expression passed in as a parameter to the setBody method. Though if we use the fluent API we can do this instead: from("direct:example") .setBody().spel("Hello #{request.body}! What a beautiful #{request.headers['dayOrNight']}") .to("mock:result"); Notice we now use the spel method from the setBody() method. And this does not require us to static import the spel method from org.apache.camel.language.spel.SpelExpression.spel . Then send a message with the string "World" in the body, and a header "dayOrNight" with value "day": template.sendBodyAndHeader("direct:example", "World", "dayOrNight", "day"); The output on mock:result will be "Hello World! What a beautiful day" 316.3.2. Bean integration You can reference beans defined in the Registry (most likely an ApplicationContext ) in your SpEL expressions. For example, if you have a bean named "foo" in your ApplicationContext you can invoke the "bar" method on this bean like this: #{@foo.bar == 'xyz'} 316.3.3. SpEL in enterprise integration patterns You can use SpEL as an expression for Recipient List or as a predicate inside a Message Filter : <route> <from uri="direct:foo"/> <filter> <spel>#{request.headers['foo'] == 'bar'}</spel> <to uri="direct:bar"/> </filter> </route> And the equivalent in Java DSL: from("direct:foo") .filter().spel("#{request.headers['foo'] == 'bar'}") .to("direct:bar"); 316.4. Loading script from external resource Available as of Camel 2.11 You can externalize the script and have Camel load it from a resource such as "classpath:" , "file:" , or "http:" . 
This is done using the following syntax: "resource:scheme:location" , eg to refer to a file on the classpath you can do: .setHeader("myHeader").spel("resource:classpath:myspel.txt") | [
"from(\"direct:example\") .setBody(spel(\"Hello #{request.body}! What a beautiful #{request.headers['dayOrNight']}\")) .to(\"mock:result\");",
"from(\"direct:example\") .setBody().spel(\"Hello #{request.body}! What a beautiful #{request.headers['dayOrNight']}\") .to(\"mock:result\");",
"template.sendBodyAndHeader(\"direct:example\", \"World\", \"dayOrNight\", \"day\");",
"#{@foo.bar == 'xyz'}",
"<route> <from uri=\"direct:foo\"/> <filter> <spel>#{request.headers['foo'] == 'bar'}</spel> <to uri=\"direct:bar\"/> </filter> </route>",
"from(\"direct:foo\") .filter().spel(\"#{request.headers['foo'] == 'bar'}\") .to(\"direct:bar\");",
".setHeader(\"myHeader\").spel(\"resource:classpath:myspel.txt\")"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/spel-language |
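As one more illustration of the variables listed in Section 316.1, the sketch below uses the properties map in a filter predicate. The property name featureEnabled and the endpoint URIs are hypothetical examples, not part of the reference above.

// Hedged sketch: filter on an exchange property using the documented
// 'properties' variable (property name and endpoints are assumptions).
from("direct:start")
    .filter().spel("#{properties['featureEnabled'] == 'true'}")
    .to("mock:filtered");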
Chapter 6. Storage classes and storage pools | Chapter 6. Storage classes and storage pools The OpenShift Data Foundation operator installs a default storage class depending on the platform in use. This default storage class is owned and controlled by the operator and it cannot be deleted or modified. However, you can create a custom storage class if you want the storage class to have a different behavior. You can create multiple storage pools which map to storage classes that provide the following features: Enable applications with their own high availability to use persistent volumes with two replicas, potentially improving application performance. Save space for persistent volume claims using storage classes with compression enabled. Note Multiple storage classes and multiple pools are not supported for external mode OpenShift Data Foundation clusters. Note With a minimal cluster of a single device set, only two new storage classes can be created. Every storage cluster expansion allows two new additional storage classes. 6.1. Creating storage classes and pools You can create a storage class using an existing pool or you can create a new pool for the storage class while creating it. Prerequisites Ensure that you are logged into the OpenShift Container Platform web console and that the OpenShift Data Foundation cluster is in the Ready state. Procedure Click Storage StorageClasses . Click Create Storage Class . Enter the storage class Name and Description . Reclaim Policy is set to Delete as the default option. Use this setting. If you change the reclaim policy to Retain in the storage class, the persistent volume (PV) remains in Released state even after deleting the persistent volume claim (PVC). Volume binding mode is set to WaitForFirstConsumer as the default option. If you choose the Immediate option, then the PV gets created immediately when creating the PVC. Select RBD or CephFS Provisioner as the plugin for provisioning the persistent volumes. Choose a Storage system for your workloads. Select an existing Storage Pool from the list or create a new pool. Note The 2-way replication data protection policy is only supported for the non-default RBD pool. 2-way replication can be used by creating an additional pool. To know about Data Availability and Integrity considerations for replica 2 pools, see Knowledgebase Customer Solution Article . Create new pool Click Create New Pool . Enter Pool name . Choose 2-way-Replication or 3-way-Replication as the Data Protection Policy. Select Enable compression if you need to compress the data. Enabling compression can impact application performance and might prove ineffective when data to be written is already compressed or encrypted. Data written before enabling compression will not be compressed. Click Create to create the new storage pool. Click Finish after the pool is created. Optional: Select the Enable Encryption checkbox. Click Create to create the storage class. 6.2. 
Creating a storage class for persistent volume encryption Prerequisites Based on your use case, you must configure access to KMS for one of the following: Using vaulttokens : Ensure that you configure access as described in Configuring access to KMS using vaulttokens Using vaulttenantsa (Technology Preview): Ensure that you configure access as described in Configuring access to KMS using vaulttenantsa Using Thales CipherTrust Manager (using KMIP): Ensure that you configure access as described in Configuring access to KMS using Thales CipherTrust Manager (For users on Azure platform only) Using Azure Vault: Ensure that you set up client authentication and fetch the client credentials from Azure using the following steps: Create Azure Vault. For more information, see Quickstart: Create a key vault using the Azure portal in Microsoft product documentation. Create Service Principal with certificate based authentication. For more information, see Create an Azure service principal with Azure CLI in Microsoft product documentation. Set Azure Key Vault role based access control (RBAC). For more information, see Enable Azure RBAC permissions on Key Vault . Procedure In the OpenShift Web Console, navigate to Storage StorageClasses . Click Create Storage Class . Enter the storage class Name and Description . Select either Delete or Retain for the Reclaim Policy . By default, Delete is selected. Select either Immediate or WaitForFirstConsumer as the Volume binding mode . WaitForFirstConsumer is set as the default option. Select RBD Provisioner openshift-storage.rbd.csi.ceph.com , which is the plugin used for provisioning the persistent volumes. Select the Storage Pool where the volume data is stored from the list or create a new pool. Select the Enable encryption checkbox. Choose one of the following options to set the KMS connection details: Choose existing KMS connection : Select an existing KMS connection from the drop-down list. The list is populated from the connection details available in the csi-kms-connection-details ConfigMap. Select the Provider from the drop-down. Select the Key service for the given provider from the list. Create new KMS connection : This is applicable for vaulttokens and Thales CipherTrust Manager (using KMIP) only. Select one of the following Key Management Service Providers and provide the required details. Vault Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Thales CipherTrust Manager (using KMIP) Enter a unique Connection Name . In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example, Address : 123.34.3.2, Port : 5696. Upload the Client Certificate , CA certificate , and Client Private Key . Enter the Unique Identifier for the key to be used for encryption and decryption, generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . 
Azure Key Vault (Only for Azure users on Azure platform) For information about setting up client authentication and fetching the client credentials, see the Prerequisites in Creating an OpenShift Data Foundation cluster section of the Deploying OpenShift Data Foundation using Microsoft Azure guide. Enter a unique Connection name for the key management service within the project. Enter Azure Vault URL . Enter Client ID . Enter Tenant ID . Upload the Certificate file in .PEM format. The certificate file must include a client certificate and a private key. Click Save . Click Create . Edit the ConfigMap to add the vaultBackend parameter if the HashiCorp Vault setup does not allow automatic detection of the Key/Value (KV) secret engine API version used by the backend path. Note vaultBackend is an optional parameter that is added to the ConfigMap to specify the version of the KV secret engine API associated with the backend path. Ensure that the value matches the KV secret engine API version that is set for the backend path, otherwise it might result in a failure during persistent volume claim (PVC) creation. Identify the encryptionKMSID being used by the newly created storage class. On the OpenShift Web Console, navigate to Storage Storage Classes . Click the Storage class name YAML tab. Capture the encryptionKMSID being used by the storage class. Example: On the OpenShift Web Console, navigate to Workloads ConfigMaps . To view the KMS connection details, click csi-kms-connection-details . Edit the ConfigMap. Click Action menu (...) Edit ConfigMap . Add the vaultBackend parameter depending on the backend that is configured for the previously identified encryptionKMSID . You can assign kv for KV secret engine API, version 1 and kv-v2 for KV secret engine API, version 2. Example: Click Save . Next steps The storage class can be used to create encrypted persistent volumes. For more information, see managing persistent volume claims . Important Red Hat works with the technology partners to provide this documentation as a service to the customers. However, Red Hat does not provide support for the HashiCorp product. For technical assistance with this product, contact HashiCorp . | [
"encryptionKMSID: 1-vault",
"kind: ConfigMap apiVersion: v1 metadata: name: csi-kms-connection-details [...] data: 1-vault: |- { \"encryptionKMSType\": \"vaulttokens\", \"kmsServiceName\": \"1-vault\", [...] \"vaultBackend\": \"kv-v2\" } 2-vault: |- { \"encryptionKMSType\": \"vaulttenantsa\", [...] \"vaultBackend\": \"kv\" }"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/storage-classes-and-storage-pools_osp |
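To illustrate the next step, the following claim is a minimal sketch that consumes a storage class created by the procedure above. The claim name, size, and storage class name are hypothetical and must be replaced with your own values.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: encrypted-rbd-pvc              # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi                    # example size
  storageClassName: my-encrypted-sc    # replace with the storage class created above

Applying this claim, for example with oc create -f pvc.yaml, requests an RBD-backed volume that is encrypted using the KMS connection configured in the storage class.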