title | content | commands | url
---|---|---|---|
Using Octavia for Load Balancing-as-a-Service | Using Octavia for Load Balancing-as-a-Service Red Hat OpenStack Platform 17.0 Octavia administration and how to use octavia to load balance network traffic across the data plane. OpenStack Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/using_octavia_for_load_balancing-as-a-service/index |
3.10. ns | 3.10. ns The ns subsystem provides a way to group processes into separate namespaces . Within a particular namespace, processes can interact with each other but are isolated from processes running in other namespaces. These separate namespaces are sometimes referred to as containers when used for operating-system-level virtualization. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/resource_management_guide/sec-ns |
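The namespace isolation described above can be observed directly with the `unshare` utility from util-linux. The following sketch is illustrative only: it demonstrates the general namespace concept rather than the ns cgroup subsystem itself, and it assumes root privileges and a util-linux version that supports the flags shown.

```shell
# Start a shell in new PID and mount namespaces, remounting /proc so that
# process listings are scoped to the new namespace.
sudo unshare --fork --pid --mount-proc bash -c 'ps aux'

# Inside the new namespace, only the new shell and ps are visible;
# processes running in other namespaces remain isolated from it.
```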
30.5. Modifying sudo Commands and Command Groups | 30.5. Modifying sudo Commands and Command Groups Modifying sudo Commands and Command Groups in the Web UI Under the Policy tab, click Sudo → Sudo Commands or Sudo → Sudo Command Groups. Click the name of the command or command group to display its configuration page. Change the settings as required. On some configuration pages, the Save button is available at the top of the page. On these pages, you must click the button to confirm the changes. Modifying sudo Commands and Command Groups from the Command Line To modify a command or command group, use the following commands: ipa sudocmd-mod ipa sudocmdgroup-mod Add command-line options to the above-mentioned commands to update the sudo command or command group attributes. For example, to add a new description for the /usr/bin/less command: For more information about these commands and the options they accept, run them with the --help option added. | [
"ipa sudocmd-mod /usr/bin/less --desc=\"For reading log files\" ------------------------------------- Modified Sudo Command \"/usr/bin/less\" ------------------------------------- Sudo Command: /usr/bin/less Description: For reading log files Sudo Command Groups: files"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/modify-sudo-cmd-cmdgroup |
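A sudo command group can be updated in the same way with `ipa sudocmdgroup-mod`. In the sketch below, the group name `files` is taken from the example output above, but the new description is purely illustrative.

```shell
# Update the description of the "files" sudo command group (illustrative value).
ipa sudocmdgroup-mod files --desc="Commands for working with log files"

# List all options accepted by either command.
ipa sudocmd-mod --help
ipa sudocmdgroup-mod --help
```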
4.9. Emerson Network Power Switch (SNMP interface) | 4.9. Emerson Network Power Switch (SNMP interface) Table 4.10, "Emerson Network Power Switch (SNMP interface) (Red Hat Enterprise Linux 6.7 and later)" lists the fence device parameters used by fence_emerson , the fence agent for Emerson over SNMP. Table 4.10. Emerson Network Power Switch (SNMP interface) (Red Hat Enterprise Linux 6.7 and later) luci Field cluster.conf Attribute Description Name name A name for the Emerson Network Power Switch device connected to the cluster. IP Address or Hostname ipaddr The IP address or host name assigned to the device. UDP/TCP Port (optional) udpport UDP/TCP port to use for connections with the device; the default value is 161. Login login The login name used to access the device. Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. SNMP Version snmp_version The SNMP version to use (1, 2c, 3); the default value is 1. SNMP Community community The SNMP community string. SNMP Security Level snmp_sec_level The SNMP security level (noAuthNoPriv, authNoPriv, authPriv). SNMP Authentication Protocol snmp_auth_prot The SNMP authentication protocol (MD5, SHA). SNMP Privacy Protocol snmp_priv_prot The SNMP privacy protocol (DES, AES). SNMP privacy protocol password snmp_priv_passwd The SNMP Privacy Protocol Password. SNMP Privacy Protocol Script snmp_priv_passwd_script The script that supplies a password for SNMP privacy protocol. Using this supersedes the SNMP privacy protocol password parameter. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Port (Outlet) Number port Physical plug number or name of virtual machine. Delay (optional) delay The number of seconds to wait before fencing is started. The default value is 0. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/fence_configuration_guide/s1-software-fence-emerson-ca |
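For reference, the cluster.conf attributes listed above might be combined as in the following sketch. This is an illustration only: the device name, address, credentials, and outlet number are placeholders, and your cluster configuration will differ.

```xml
<!-- Placeholder values throughout; adjust for your environment. -->
<fencedevices>
  <fencedevice agent="fence_emerson" name="emerson-pdu1"
               ipaddr="192.168.0.10" udpport="161"
               login="admin" passwd="password"
               snmp_version="2c" community="private"
               power_wait="5"/>
</fencedevices>

<!-- Referencing the device from a node's fence method, supplying the
     outlet number for that node. -->
<fence>
  <method name="primary">
    <device name="emerson-pdu1" port="4"/>
  </method>
</fence>
```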
Chapter 1. About hardware accelerators | Chapter 1. About hardware accelerators Specialized hardware accelerators play a key role in the emerging generative artificial intelligence and machine learning (AI/ML) industry. Specifically, hardware accelerators are essential to the training and serving of large language and other foundational models that power this new technology. Data scientists, data engineers, ML engineers, and developers can take advantage of the specialized hardware acceleration for data-intensive transformations and model development and serving. Much of that ecosystem is open source, with a number of contributing partners and open source foundations. Red Hat OpenShift Container Platform provides support for cards and peripheral hardware that add processing units that comprise hardware accelerators: Graphical processing units (GPUs) Neural processing units (NPUs) Application-specific integrated circuits (ASICs) Data processing units (DPUs) Specialized hardware accelerators provide a rich set of benefits for AI/ML development: One platform for all A collaborative environment for developers, data engineers, data scientists, and DevOps Extended capabilities with Operators Operators allow for bringing AI/ML capabilities to OpenShift Container Platform Hybrid-cloud support On-premise support for model development, delivery, and deployment Support for AI/ML workloads Model testing, iteration, integration, promotion, and serving into production as services Red Hat provides an optimized platform to enable these specialized hardware accelerators in Red Hat Enterprise Linux (RHEL) and OpenShift Container Platform platforms at the Linux (kernel and userspace) and Kubernetes layers. To do this, Red Hat combines the proven capabilities of Red Hat OpenShift AI and Red Hat OpenShift Container Platform in a single enterprise-ready AI application platform. Hardware Operators use the operating framework of a Kubernetes cluster to enable the required accelerator resources. You can also deploy the provided device plugin manually or as a daemon set. This plugin registers the GPU in the cluster. Certain specialized hardware accelerators are designed to work within disconnected environments where a secure environment must be maintained for development and testing. 1.1. Hardware accelerators Red Hat OpenShift Container Platform enables the following hardware accelerators: NVIDIA GPU AMD Instinct(R) GPU Intel(R) Gaudi(R) Additional resources Introduction to Red Hat OpenShift AI NVIDIA GPU Operator on Red Hat OpenShift Container Platform AMD Instinct Accelerators Intel Gaudi Al Accelerators | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/hardware_accelerators/about-hardware-accelerators |
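Once an accelerator Operator has enabled its device plugin, workloads request the accelerator as an extended resource in their pod specification. The following sketch assumes an NVIDIA GPU exposed as the `nvidia.com/gpu` resource; other accelerators advertise different resource names, and the container image is a placeholder.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-example
spec:
  restartPolicy: Never
  containers:
  - name: accelerated-workload
    image: <accelerator_enabled_image>   # placeholder image
    resources:
      limits:
        nvidia.com/gpu: 1                # request one GPU from the device plugin
```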
Managing Red Hat Certified and Ansible Galaxy collections in automation hub | Managing Red Hat Certified and Ansible Galaxy collections in automation hub Red Hat Ansible Automation Platform 2.3 Configure automation hub to deliver curated Red Hat Certified and Ansible Galaxy collections to your users. Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/managing_red_hat_certified_and_ansible_galaxy_collections_in_automation_hub/index |
Chapter 2. Browsing with the API | Chapter 2. Browsing with the API REST APIs give access to resources (data entities) through URI paths. Procedure Go to the automation controller REST API in a web browser at: https://<server name>/api/controller/v2 Click the "v2" link to "current versions" or "available versions" . Automation controller supports version 2 of the API. Perform a GET with just the /api/ endpoint to get the current_version , which is the recommended version. Click the icon on the navigation menu, for documentation on the access methods for that particular API endpoint and what data is returned when using those methods. Use the PUT and POST verbs on the specific API pages by formatting JSON in the various text fields. You can also view changed settings from factory defaults at /api/v2/settings/changed/ endpoint. It reflects changes you made in the API browser, not changed settings that come from static settings files. | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/automation_controller_api_overview/controller-api-browsing-api |
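The same endpoints can be queried from the command line. The sketch below assumes basic authentication and placeholder values for the server name and credentials; only the endpoints named in this chapter are used.

```shell
# Retrieve available API versions and the recommended current_version.
curl -k -u admin:<password> https://<server_name>/api/

# Browse version 2 of the automation controller API.
curl -k -u admin:<password> https://<server_name>/api/controller/v2/

# Show settings that differ from the factory defaults.
curl -k -u admin:<password> https://<server_name>/api/v2/settings/changed/
```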
5.341. upstart | 5.341. upstart 5.341.1. RHBA-2012:0863 - upstart bug fix and enhancement update An updated upstart package that fixes two bugs and adds two enhancements is now available for Red Hat Enterprise Linux 6. The upstart package contains an event-based replacement for the /sbin/init daemon that starts tasks and services during boot, stops them during shut down, and supervises them while the system is running. Bug Fixes BZ# 771736 Previously, the PACKAGE_BUGREPORT variable pointed to an Ubuntu mailing list. The mailing list was therefore presented in multiple manual pages, which was unwanted. With this update, the value of the PACKAGE_BUGREPORT variable has been modified to "https://launchpad.net/upstart/+bugs", and users are now directed to that website rather than to the Ubuntu mailing list. BZ# 798551 Previous versions of upstart did not mount the proc and sys file systems. This was ensured by initscripts, which could, under certain circumstances, lead to race condition problems. With this update, upstart is used to mount the proc and sys file systems before launching anything else. Enhancements BZ# 663594 Files with the ".conf" suffix located in the /etc/init/ directory are not considered as configuration files. As a consequence, such files are not protected during a package update and can be overwritten by new files. This update adds support for "override" files that contain user-specified settings. Now, it is possible to alter parameters provided by the aforementioned ".conf" files by creating a corresponding file with the ".override" suffix. BZ# 735427 Previously, the initctl scripts returned error messages that did not tell users how to run the particular command correctly to get the required output. This update adds a new stanza, "usage", that can be used to provide users with detailed information on how to run the particular command correctly if the input has been incorrect. All users of upstart are advised to upgrade to this updated package, which fixes these bugs and adds these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/upstart |
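The override mechanism added by BZ#663594 and the usage stanza from BZ#735427 might be used together as in the following sketch. The job name and values are hypothetical; stanzas placed in the .override file adjust the packaged .conf job without modifying it, so they survive package updates.

```
# Hypothetical contents of /etc/init/example.override; the packaged
# /etc/init/example.conf job file is left untouched.
respawn limit 5 30
usage "example - supervises the example service; start with 'initctl start example'"
```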
Chapter 2. Setting up Maven locally | Chapter 2. Setting up Maven locally Typical Fuse application development uses Maven to build and manage projects. The following topics describe how to set up Maven locally: Section 2.1, "Preparing to set up Maven" Section 2.2, "Adding Red Hat repositories to Maven" Section 2.3, "Using local Maven repositories" Section 2.4, "Setting Maven mirror using environmental variables or system properties" Section 2.5, "About Maven artifacts and coordinates" 2.1. Preparing to set up Maven Maven is a free, open source, build tool from Apache. Typically, you use Maven to build Fuse applications. Procedure Download the latest version of Maven from the Maven download page . Ensure that your system is connected to the Internet. While building a project, the default behavior is that Maven searches external repositories and downloads the required artifacts. Maven looks for repositories that are accessible over the Internet. You can change this behavior so that Maven searches only repositories that are on a local network. That is, Maven can run in an offline mode. In offline mode, Maven looks for artifacts in its local repository. See Section 2.3, "Using local Maven repositories" . 2.2. Adding Red Hat repositories to Maven To access artifacts that are in Red Hat Maven repositories, you need to add those repositories to Maven's settings.xml file. Maven looks for the settings.xml file in the .m2 directory of the user's home directory. If there is not a user specified settings.xml file, Maven uses the system-level settings.xml file at M2_HOME/conf/settings.xml . Prerequisite You know the location of the settings.xml file in which you want to add the Red Hat repositories. Procedure In the settings.xml file, add repository elements for the Red Hat repositories as shown in this example: <?xml version="1.0"?> <settings> <profiles> <profile> <id>extra-repos</id> <activation> <activeByDefault>true</activeByDefault> </activation> <repositories> <repository> <id>redhat-ga-repository</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> <repository> <id>redhat-ea-repository</id> <url>https://maven.repository.redhat.com/earlyaccess/all</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> <repository> <id>jboss-public</id> <name>JBoss Public Repository Group</name> <url>https://repository.jboss.org/nexus/content/groups/public/</url> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>redhat-ga-repository</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> <pluginRepository> <id>redhat-ea-repository</id> <url>https://maven.repository.redhat.com/earlyaccess/all</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> <pluginRepository> <id>jboss-public</id> <name>JBoss Public Repository Group</name> <url>https://repository.jboss.org/nexus/content/groups/public</url> </pluginRepository> </pluginRepositories> </profile> </profiles> <activeProfiles> <activeProfile>extra-repos</activeProfile> </activeProfiles> </settings> 2.3. 
Using local Maven repositories If you are running a container without an Internet connection, and you need to deploy an application that has dependencies that are not available offline, you can use the Maven dependency plug-in to download the application's dependencies into a Maven offline repository. You can then distribute this customized Maven offline repository to machines that do not have an Internet connection. Procedure In the project directory that contains the pom.xml file, download a repository for a Maven project by running a command such as the following: In this example, Maven dependencies and plug-ins that are required to build the project are downloaded to the /tmp/my-project directory. Distribute this customized Maven offline repository internally to any machines that do not have an Internet connection. 2.4. Setting Maven mirror using environmental variables or system properties When running the applications you need access to the artifacts that are in the Red Hat Maven repositories. These repositories are added to Maven's settings.xml file. Maven checks the following locations for settings.xml file: looks for the specified url if not found looks for ${user.home}/.m2/settings.xml if not found looks for ${maven.home}/conf/settings.xml if not found looks for ${M2_HOME}/conf/settings.xml if no location is found, empty org.apache.maven.settings.Settings instance is created. 2.4.1. About Maven mirror Maven uses a set of remote repositories to access the artifacts, which are currently not available in local repository. The list of repositories almost always contains Maven Central repository, but for Red Hat Fuse, it also contains Maven Red Hat repositories. In some cases where it is not possible or allowed to access different remote repositories, you can use a mechanism of Maven mirrors. A mirror replaces a particular repository URL with a different one, so all HTTP traffic when remote artifacts are being searched for can be directed to a single URL. 2.4.2. Adding Maven mirror to settings.xml To set the Maven mirror, add the following section to Maven's settings.xml : No mirror is used if the above section is not found in the settings.xml file. To specify a global mirror without providing the XML configuration, you can use either system property or environmental variables. 2.4.3. Setting Maven mirror using environmental variable or system property To set the Maven mirror using either environmental variable or system property, you can add: Environmental variable called MAVEN_MIRROR_URL to bin/setenv file System property called mavenMirrorUrl to etc/system.properties file 2.4.4. Using Maven options to specify Maven mirror url To use an alternate Maven mirror url, other than the one specified by environmental variables or system property, use the following maven options when running the application: -DmavenMirrorUrl=mirrorId::mirrorUrl for example, -DmavenMirrorUrl=my-mirror::http://mirror.net/repository -DmavenMirrorUrl=mirrorUrl for example, -DmavenMirrorUrl=http://mirror.net/repository . In this example, the <id> of the <mirror> is just a mirror. 2.5. About Maven artifacts and coordinates In the Maven build system, the basic building block is an artifact . After a build, the output of an artifact is typically an archive, such as a JAR or WAR file. A key aspect of Maven is the ability to locate artifacts and manage the dependencies between them. A Maven coordinate is a set of values that identifies the location of a particular artifact. 
A basic coordinate has three values in the following form: groupId:artifactId:version Sometimes Maven augments a basic coordinate with a packaging value or with both a packaging value and a classifier value. A Maven coordinate can have any one of the following forms: Here are descriptions of the values: groupId Defines a scope for the name of the artifact. You would typically use all or part of a package name as a group ID. For example, org.fusesource.example . artifactId Defines the artifact name relative to the group ID. version Specifies the artifact's version. A version number can have up to four parts: n.n.n.n , where the last part of the version number can contain non-numeric characters. For example, the last part of 1.0-SNAPSHOT is the alphanumeric substring, 0-SNAPSHOT . packaging Defines the packaged entity that is produced when you build the project. For OSGi projects, the packaging is bundle . The default value is jar . classifier Enables you to distinguish between artifacts that were built from the same POM, but have different content. Elements in an artifact's POM file define the artifact's group ID, artifact ID, packaging, and version, as shown here: <project ... > ... <groupId>org.fusesource.example</groupId> <artifactId>bundle-demo</artifactId> <packaging>bundle</packaging> <version>1.0-SNAPSHOT</version> ... </project> To define a dependency on the preceding artifact, you would add the following dependency element to a POM file: <project ... > ... <dependencies> <dependency> <groupId>org.fusesource.example</groupId> <artifactId>bundle-demo</artifactId> <version>1.0-SNAPSHOT</version> </dependency> </dependencies> ... </project> Note It is not necessary to specify the bundle package type in the preceding dependency, because a bundle is just a particular kind of JAR file and jar is the default Maven package type. If you do need to specify the packaging type explicitly in a dependency, however, you can use the type element. | [
"<?xml version=\"1.0\"?> <settings> <profiles> <profile> <id>extra-repos</id> <activation> <activeByDefault>true</activeByDefault> </activation> <repositories> <repository> <id>redhat-ga-repository</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> <repository> <id>redhat-ea-repository</id> <url>https://maven.repository.redhat.com/earlyaccess/all</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> <repository> <id>jboss-public</id> <name>JBoss Public Repository Group</name> <url>https://repository.jboss.org/nexus/content/groups/public/</url> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>redhat-ga-repository</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> <pluginRepository> <id>redhat-ea-repository</id> <url>https://maven.repository.redhat.com/earlyaccess/all</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> <pluginRepository> <id>jboss-public</id> <name>JBoss Public Repository Group</name> <url>https://repository.jboss.org/nexus/content/groups/public</url> </pluginRepository> </pluginRepositories> </profile> </profiles> <activeProfiles> <activeProfile>extra-repos</activeProfile> </activeProfiles> </settings>",
"mvn org.apache.maven.plugins:maven-dependency-plugin:3.1.0:go-offline -Dmaven.repo.local=/tmp/my-project",
"<mirror> <id>all</id> <mirrorOf>*</mirrorOf> <url>http://host:port/path</url> </mirror>",
"groupId:artifactId:version groupId:artifactId:packaging:version groupId:artifactId:packaging:classifier:version",
"<project ... > <groupId>org.fusesource.example</groupId> <artifactId>bundle-demo</artifactId> <packaging>bundle</packaging> <version>1.0-SNAPSHOT</version> </project>",
"<project ... > <dependencies> <dependency> <groupId>org.fusesource.example</groupId> <artifactId>bundle-demo</artifactId> <version>1.0-SNAPSHOT</version> </dependency> </dependencies> </project>"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/getting_started_with_fuse_on_apache_karaf/set-up-maven-locally |
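As a complement to the note above, the sketch below shows a dependency that declares its packaging type explicitly with the type element. It reuses the bundle-demo artifact from the preceding examples; no other values are assumed.

```xml
<dependency>
  <groupId>org.fusesource.example</groupId>
  <artifactId>bundle-demo</artifactId>
  <version>1.0-SNAPSHOT</version>
  <!-- Explicitly select the bundle packaging; omit for the default jar type. -->
  <type>bundle</type>
</dependency>
```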
Providing feedback on Red Hat build of OpenJDK documentation | Providing feedback on Red Hat build of OpenJDK documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Create creates and routes the issue to the appropriate documentation team. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/eclipse_temurin_8.0.422_release_notes/providing-direct-documentation-feedback_openjdk |
Chapter 5. Override Ceph behavior | Chapter 5. Override Ceph behavior As a storage administrator, you need to understand how to use overrides for the Red Hat Ceph Storage cluster to change Ceph options during runtime. 5.1. Setting and unsetting Ceph override options You can set and unset Ceph options to override Ceph's default behavior. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure To override Ceph's default behavior, use the ceph osd set command and the behavior you wish to override: Syntax Once you set the behavior, ceph health will reflect the override(s) that you have set for the cluster. Example To cease overriding Ceph's default behavior, use the ceph osd unset command and the override you wish to cease. Syntax Example Flag Description noin Prevents OSDs from being treated as in the cluster. noout Prevents OSDs from being treated as out of the cluster. noup Prevents OSDs from being treated as up and running. nodown Prevents OSDs from being treated as down . full Makes a cluster appear to have reached its full_ratio , and thereby prevents write operations. pause Ceph will stop processing read and write operations, but will not affect OSD in , out , up or down statuses. nobackfill Ceph will prevent new backfill operations. norebalance Ceph will prevent new rebalancing operations. norecover Ceph will prevent new recovery operations. noscrub Ceph will prevent new scrubbing operations. nodeep-scrub Ceph will prevent new deep scrubbing operations. notieragent Ceph will disable the process that is looking for cold/dirty objects to flush and evict. 5.2. Ceph override use cases noin : Commonly used with noout to address flapping OSDs. noout : If the mon osd report timeout is exceeded and an OSD has not reported to the monitor, the OSD will get marked out . If this happens erroneously, you can set noout to prevent the OSD(s) from getting marked out while you troubleshoot the issue. noup : Commonly used with nodown to address flapping OSDs. nodown : Networking issues may interrupt Ceph 'heartbeat' processes, and an OSD may be up but still get marked down. You can set nodown to prevent OSDs from getting marked down while troubleshooting the issue. full : If a cluster is reaching its full_ratio , you can pre-emptively set the cluster to full and expand capacity. Note Setting the cluster to full will prevent write operations. pause : If you need to troubleshoot a running Ceph cluster without clients reading and writing data, you can set the cluster to pause to prevent client operations. nobackfill : If you need to take an OSD or node down temporarily, for example, upgrading daemons, you can set nobackfill so that Ceph will not backfill while the OSDs is down . norecover : If you need to replace an OSD disk and don't want the PGs to recover to another OSD while you are hotswapping disks, you can set norecover to prevent the other OSDs from copying a new set of PGs to other OSDs. noscrub and nodeep-scrubb : If you want to prevent scrubbing for example, to reduce overhead during high loads, recovery, backfilling, and rebalancing you can set noscrub and/or nodeep-scrub to prevent the cluster from scrubbing OSDs. notieragent : If you want to stop the tier agent process from finding cold objects to flush to the backing storage tier, you may set notieragent . | [
"ceph osd set FLAG",
"ceph osd set noout",
"ceph osd unset FLAG",
"ceph osd unset noout"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/administration_guide/override-ceph-behavior |
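A common maintenance sequence combines several of the flags described above. The following sketch assumes administrative access to the cluster and shows only one possible ordering.

```shell
# Before taking an OSD node down for maintenance, stop OSDs from being
# marked out and suspend data movement.
ceph osd set noout
ceph osd set norebalance
ceph osd set nobackfill

# ceph health now reports the active overrides, for example:
#   HEALTH_WARN noout,nobackfill,norebalance flag(s) set

# ...perform the maintenance, then restore default behavior.
ceph osd unset nobackfill
ceph osd unset norebalance
ceph osd unset noout
```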
Chapter 5. Scaling a user-provisioned cluster with the Bare Metal Operator | Chapter 5. Scaling a user-provisioned cluster with the Bare Metal Operator After deploying a user-provisioned infrastructure cluster, you can use the Bare Metal Operator (BMO) and other metal 3 components to scale bare-metal hosts in the cluster. This approach helps you to scale a user-provisioned cluster in a more automated way. 5.1. About scaling a user-provisioned cluster with the Bare Metal Operator You can scale user-provisioned infrastructure clusters by using the Bare Metal Operator (BMO) and other metal 3 components. User-provisioned infrastructure installations do not feature the Machine API Operator. The Machine API Operator typically manages the lifecycle of bare-metal nodes in a cluster. However, it is possible to use the BMO and other metal 3 components to scale nodes in user-provisioned clusters without requiring the Machine API Operator. 5.1.1. Prerequisites for scaling a user-provisioned cluster You installed a user-provisioned infrastructure cluster on bare metal. You have baseboard management controller (BMC) access to the hosts. 5.1.2. Limitations for scaling a user-provisioned cluster You cannot use a provisioning network to scale user-provisioned infrastructure clusters by using the Bare Metal Operator (BMO). Consequentially, you can only use bare-metal host drivers that support virtual media networking booting, for example redfish-virtualmedia and idrac-virtualmedia . You cannot scale MachineSet objects in user-provisioned infrastructure clusters by using the BMO. 5.2. Configuring a provisioning resource to scale user-provisioned clusters Create a Provisioning custom resource (CR) to enable Metal platform components on a user-provisioned infrastructure cluster. Prerequisites You installed a user-provisioned infrastructure cluster on bare metal. Procedure Create a Provisioning CR. Save the following YAML in the provisioning.yaml file: apiVersion: metal3.io/v1alpha1 kind: Provisioning metadata: name: provisioning-configuration spec: provisioningNetwork: "Disabled" watchAllNamespaces: false Note OpenShift Container Platform 4.14 does not support enabling a provisioning network when you scale a user-provisioned cluster by using the Bare Metal Operator. Create the Provisioning CR by running the following command: USD oc create -f provisioning.yaml Example output provisioning.metal3.io/provisioning-configuration created Verification Verify that the provisioning service is running by running the following command: USD oc get pods -n openshift-machine-api Example output NAME READY STATUS RESTARTS AGE cluster-autoscaler-operator-678c476f4c-jjdn5 2/2 Running 0 5d21h cluster-baremetal-operator-6866f7b976-gmvgh 2/2 Running 0 5d21h control-plane-machine-set-operator-7d8566696c-bh4jz 1/1 Running 0 5d21h ironic-proxy-64bdw 1/1 Running 0 5d21h ironic-proxy-rbggf 1/1 Running 0 5d21h ironic-proxy-vj54c 1/1 Running 0 5d21h machine-api-controllers-544d6849d5-tgj9l 7/7 Running 1 (5d21h ago) 5d21h machine-api-operator-5c4ff4b86d-6fjmq 2/2 Running 0 5d21h metal3-6d98f84cc8-zn2mx 5/5 Running 0 5d21h metal3-image-customization-59d745768d-bhrp7 1/1 Running 0 5d21h 5.3. Provisioning new hosts in a user-provisioned cluster by using the BMO You can use the Bare Metal Operator (BMO) to provision bare-metal hosts in a user-provisioned cluster by creating a BareMetalHost custom resource (CR). 
Note Provisioning bare-metal hosts to the cluster by using the BMO sets the spec.externallyProvisioned specification in the BareMetalHost custom resource to false by default. Do not set the spec.externallyProvisioned specification to true , because this setting results in unexpected behavior. Prerequisites You created a user-provisioned bare-metal cluster. You have baseboard management controller (BMC) access to the hosts. You deployed a provisioning service in the cluster by creating a Provisioning CR. Procedure Create a configuration file for the bare-metal node. Depending if you use either a static configuration or a DHCP server, choose one of the following example bmh.yaml files and configure it to your needs by replacing values in the YAML to match your environment: To deploy with a static configuration, create the following bmh.yaml file: --- apiVersion: v1 kind: Secret metadata: name: openshift-worker-<num>-network-config-secret 1 namespace: openshift-machine-api type: Opaque stringData: nmstate: | 2 interfaces: 3 - name: <nic1_name> 4 type: ethernet state: up ipv4: address: - ip: <ip_address> 5 prefix-length: 24 enabled: true dns-resolver: config: server: - <dns_ip_address> 6 routes: config: - destination: 0.0.0.0/0 -hop-address: <next_hop_ip_address> 7 -hop-interface: <next_hop_nic1_name> 8 --- apiVersion: v1 kind: Secret metadata: name: openshift-worker-<num>-bmc-secret namespace: openshift-machine-api type: Opaque data: username: <base64_of_uid> 9 password: <base64_of_pwd> --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: openshift-worker-<num> namespace: openshift-machine-api spec: online: true bootMACAddress: <nic1_mac_address> 10 bmc: address: <protocol>://<bmc_url> 11 credentialsName: openshift-worker-<num>-bmc-secret disableCertificateVerification: false customDeploy: method: install_coreos userData: name: worker-user-data-managed namespace: openshift-machine-api rootDeviceHints: deviceName: <root_device_hint> 12 preprovisioningNetworkDataName: openshift-worker-<num>-network-config-secret 1 Replace all instances of <num> with a unique compute node number for the bare-metal nodes in the name , credentialsName , and preprovisioningNetworkDataName fields. 2 Add the NMState YAML syntax to configure the host interfaces. To configure the network interface for a newly created node, specify the name of the secret that has the network configuration. Follow the nmstate syntax to define the network configuration for your node. See "Preparing the bare-metal node" for details on configuring NMState syntax. 3 Optional: If you have configured the network interface with nmstate , and you want to disable an interface, set state: up with the IP addresses set to enabled: false . 4 Replace <nic1_name> with the name of the bare-metal node's first network interface controller (NIC). 5 Replace <ip_address> with the IP address of the bare-metal node's NIC. 6 Replace <dns_ip_address> with the IP address of the bare-metal node's DNS resolver. 7 Replace <next_hop_ip_address> with the IP address of the bare-metal node's external gateway. 8 Replace <next_hop_nic1_name> with the name of the bare-metal node's external gateway. 9 Replace <base64_of_uid> and <base64_of_pwd> with the base64 string of the user name and password. 10 Replace <nic1_mac_address> with the MAC address of the bare-metal node's first NIC. See the "BMC addressing" section for additional BMC configuration options. 11 Replace <protocol> with the BMC protocol, such as IPMI, Redfish, or others. 
Replace <bmc_url> with the URL of the bare-metal node's baseboard management controller. 12 Optional: Replace <root_device_hint> with a device path when specifying a root device hint. See "Root device hints" for additional details. When configuring the network interface with a static configuration by using nmstate , set state: up with the IP addresses set to enabled: false : --- apiVersion: v1 kind: Secret metadata: name: openshift-worker-<num>-network-config-secret namespace: openshift-machine-api # ... interfaces: - name: <nic_name> type: ethernet state: up ipv4: enabled: false ipv6: enabled: false # ... To deploy with a DHCP configuration, create the following bmh.yaml file: --- apiVersion: v1 kind: Secret metadata: name: openshift-worker-<num>-bmc-secret 1 namespace: openshift-machine-api type: Opaque data: username: <base64_of_uid> 2 password: <base64_of_pwd> --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: openshift-worker-<num> namespace: openshift-machine-api spec: online: true bootMACAddress: <nic1_mac_address> 3 bmc: address: <protocol>://<bmc_url> 4 credentialsName: openshift-worker-<num>-bmc disableCertificateVerification: false customDeploy: method: install_coreos userData: name: worker-user-data-managed namespace: openshift-machine-api rootDeviceHints: deviceName: <root_device_hint> 5 1 Replace <num> with a unique compute node number for the bare-metal nodes in the name and credentialsName fields. 2 Replace <base64_of_uid> and <base64_of_pwd> with the base64 string of the user name and password. 3 Replace <nic1_mac_address> with the MAC address of the bare-metal node's first NIC. See the "BMC addressing" section for additional BMC configuration options. 4 Replace <protocol> with the BMC protocol, such as IPMI, Redfish, or others. Replace <bmc_url> with the URL of the bare-metal node's baseboard management controller. 5 Optional: Replace <root_device_hint> with a device path when specifying a root device hint. See "Root device hints" for additional details. Important If the MAC address of an existing bare-metal node matches the MAC address of the bare-metal host that you are attempting to provision, then the installation will fail. If the host enrollment, inspection, cleaning, or other steps fail, the Bare Metal Operator retries the installation continuously. See "Diagnosing a duplicate MAC address when provisioning a new host in the cluster" for additional details. Create the bare-metal node by running the following command: USD oc create -f bmh.yaml Example output secret/openshift-worker-<num>-network-config-secret created secret/openshift-worker-<num>-bmc-secret created baremetalhost.metal3.io/openshift-worker-<num> created Inspect the bare-metal node by running the following command: USD oc -n openshift-machine-api get bmh openshift-worker-<num> where: <num> Specifies the compute node number. Example output NAME STATE CONSUMER ONLINE ERROR openshift-worker-<num> provisioned true Approve all certificate signing requests (CSRs). 
Get the list of pending CSRs by running the following command: USD oc get csr Example output NAME AGE SIGNERNAME REQUESTOR REQUESTEDDURATION CONDITION csr-gfm9f 33s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-o perator:node-bootstrapper <none> Pending Approve the CSR by running the following command: USD oc adm certificate approve <csr_name> Example output certificatesigningrequest.certificates.k8s.io/<csr_name> approved Verification Verify that the node is ready by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION app1 Ready worker 47s v1.24.0+dc5a2fd controller1 Ready master,worker 2d22h v1.24.0+dc5a2fd Additional resources Preparing the bare-metal node Root device hints Diagnosing a duplicate MAC address when provisioning a new host in the cluster 5.4. Optional: Managing existing hosts in a user-provisioned cluster by using the BMO Optionally, you can use the Bare Metal Operator (BMO) to manage existing bare-metal controller hosts in a user-provisioned cluster by creating a BareMetalHost object for the existing host. It is not a requirement to manage existing user-provisioned hosts; however, you can enroll them as externally-provisioned hosts for inventory purposes. Important To manage existing hosts by using the BMO, you must set the spec.externallyProvisioned specification in the BareMetalHost custom resource to true to prevent the BMO from re-provisioning the host. Prerequisites You created a user-provisioned bare-metal cluster. You have baseboard management controller (BMC) access to the hosts. You deployed a provisioning service in the cluster by creating a Provisioning CR. Procedure Create the Secret CR and the BareMetalHost CR. Save the following YAML in the controller.yaml file: --- apiVersion: v1 kind: Secret metadata: name: controller1-bmc namespace: openshift-machine-api type: Opaque data: username: <base64_of_uid> password: <base64_of_pwd> --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: controller1 namespace: openshift-machine-api spec: bmc: address: <protocol>://<bmc_url> 1 credentialsName: "controller1-bmc" bootMACAddress: <nic1_mac_address> customDeploy: method: install_coreos externallyProvisioned: true 2 online: true userData: name: controller-user-data-managed namespace: openshift-machine-api 1 You can only use bare-metal host drivers that support virtual media networking booting, for example redfish-virtualmedia and idrac-virtualmedia . 2 You must set the value to true to prevent the BMO from re-provisioning the bare-metal controller host. Create the bare-metal host object by running the following command: USD oc create -f controller.yaml Example output secret/controller1-bmc created baremetalhost.metal3.io/controller1 created Verification Verify that the BMO created the bare-metal host object by running the following command: USD oc get bmh -A Example output NAMESPACE NAME STATE CONSUMER ONLINE ERROR AGE openshift-machine-api controller1 externally provisioned true 13s 5.5. Removing hosts from a user-provisioned cluster by using the BMO You can use the Bare Metal Operator (BMO) to remove bare-metal hosts from a user-provisioned cluster. Prerequisites You created a user-provisioned bare-metal cluster. You have baseboard management controller (BMC) access to the hosts. You deployed a provisioning service in the cluster by creating a Provisioning CR. 
Procedure Cordon and drain the node by running the following command: USD oc adm drain app1 --force --ignore-daemonsets=true Example output node/app1 cordoned WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-node-tuning-operator/tuned-tvthg, openshift-dns/dns- default-9q6rz, openshift-dns/node-resolver-zvt42, openshift-image-registry/node-ca-mzxth, openshift-ingress-cana ry/ingress-canary-qq5lf, openshift-machine-config-operator/machine-config-daemon-v79dm, openshift-monitoring/nod e-exporter-2vn59, openshift-multus/multus-additional-cni-plugins-wssvj, openshift-multus/multus-fn8tg, openshift -multus/network-metrics-daemon-5qv55, openshift-network-diagnostics/network-check-target-jqxn2, openshift-ovn-ku bernetes/ovnkube-node-rsvqg evicting pod openshift-operator-lifecycle-manager/collect-profiles-27766965-258vp evicting pod openshift-operator-lifecycle-manager/collect-profiles-27766950-kg5mk evicting pod openshift-operator-lifecycle-manager/collect-profiles-27766935-stf4s pod/collect-profiles-27766965-258vp evicted pod/collect-profiles-27766950-kg5mk evicted pod/collect-profiles-27766935-stf4s evicted node/app1 drained Delete the customDeploy specification from the BareMetalHost CR. Edit the BareMetalHost CR for the host by running the following command: USD oc edit bmh -n openshift-machine-api <host_name> Delete the lines spec.customDeploy and spec.customDeploy.method : ... customDeploy: method: install_coreos Verify that the provisioning state of the host changes to deprovisioning by running the following command: USD oc get bmh -A Example output NAMESPACE NAME STATE CONSUMER ONLINE ERROR AGE openshift-machine-api controller1 externally provisioned true 58m openshift-machine-api worker1 deprovisioning true 57m Delete the host by running the following command when the BareMetalHost state changes to available : USD oc delete bmh -n openshift-machine-api <bmh_name> Note You can run this step without having to edit the BareMetalHost CR. It might take some time for the BareMetalHost state to change from deprovisioning to available . Delete the node by running the following command: USD oc delete node <node_name> Verification Verify that you deleted the node by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION controller1 Ready master,worker 2d23h v1.24.0+dc5a2fd | [
"apiVersion: metal3.io/v1alpha1 kind: Provisioning metadata: name: provisioning-configuration spec: provisioningNetwork: \"Disabled\" watchAllNamespaces: false",
"oc create -f provisioning.yaml",
"provisioning.metal3.io/provisioning-configuration created",
"oc get pods -n openshift-machine-api",
"NAME READY STATUS RESTARTS AGE cluster-autoscaler-operator-678c476f4c-jjdn5 2/2 Running 0 5d21h cluster-baremetal-operator-6866f7b976-gmvgh 2/2 Running 0 5d21h control-plane-machine-set-operator-7d8566696c-bh4jz 1/1 Running 0 5d21h ironic-proxy-64bdw 1/1 Running 0 5d21h ironic-proxy-rbggf 1/1 Running 0 5d21h ironic-proxy-vj54c 1/1 Running 0 5d21h machine-api-controllers-544d6849d5-tgj9l 7/7 Running 1 (5d21h ago) 5d21h machine-api-operator-5c4ff4b86d-6fjmq 2/2 Running 0 5d21h metal3-6d98f84cc8-zn2mx 5/5 Running 0 5d21h metal3-image-customization-59d745768d-bhrp7 1/1 Running 0 5d21h",
"--- apiVersion: v1 kind: Secret metadata: name: openshift-worker-<num>-network-config-secret 1 namespace: openshift-machine-api type: Opaque stringData: nmstate: | 2 interfaces: 3 - name: <nic1_name> 4 type: ethernet state: up ipv4: address: - ip: <ip_address> 5 prefix-length: 24 enabled: true dns-resolver: config: server: - <dns_ip_address> 6 routes: config: - destination: 0.0.0.0/0 next-hop-address: <next_hop_ip_address> 7 next-hop-interface: <next_hop_nic1_name> 8 --- apiVersion: v1 kind: Secret metadata: name: openshift-worker-<num>-bmc-secret namespace: openshift-machine-api type: Opaque data: username: <base64_of_uid> 9 password: <base64_of_pwd> --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: openshift-worker-<num> namespace: openshift-machine-api spec: online: true bootMACAddress: <nic1_mac_address> 10 bmc: address: <protocol>://<bmc_url> 11 credentialsName: openshift-worker-<num>-bmc-secret disableCertificateVerification: false customDeploy: method: install_coreos userData: name: worker-user-data-managed namespace: openshift-machine-api rootDeviceHints: deviceName: <root_device_hint> 12 preprovisioningNetworkDataName: openshift-worker-<num>-network-config-secret",
"--- apiVersion: v1 kind: Secret metadata: name: openshift-worker-<num>-network-config-secret namespace: openshift-machine-api # interfaces: - name: <nic_name> type: ethernet state: up ipv4: enabled: false ipv6: enabled: false",
"--- apiVersion: v1 kind: Secret metadata: name: openshift-worker-<num>-bmc-secret 1 namespace: openshift-machine-api type: Opaque data: username: <base64_of_uid> 2 password: <base64_of_pwd> --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: openshift-worker-<num> namespace: openshift-machine-api spec: online: true bootMACAddress: <nic1_mac_address> 3 bmc: address: <protocol>://<bmc_url> 4 credentialsName: openshift-worker-<num>-bmc disableCertificateVerification: false customDeploy: method: install_coreos userData: name: worker-user-data-managed namespace: openshift-machine-api rootDeviceHints: deviceName: <root_device_hint> 5",
"oc create -f bmh.yaml",
"secret/openshift-worker-<num>-network-config-secret created secret/openshift-worker-<num>-bmc-secret created baremetalhost.metal3.io/openshift-worker-<num> created",
"oc -n openshift-machine-api get bmh openshift-worker-<num>",
"NAME STATE CONSUMER ONLINE ERROR openshift-worker-<num> provisioned true",
"oc get csr",
"NAME AGE SIGNERNAME REQUESTOR REQUESTEDDURATION CONDITION csr-gfm9f 33s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-o perator:node-bootstrapper <none> Pending",
"oc adm certificate approve <csr_name>",
"certificatesigningrequest.certificates.k8s.io/<csr_name> approved",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION app1 Ready worker 47s v1.24.0+dc5a2fd controller1 Ready master,worker 2d22h v1.24.0+dc5a2fd",
"--- apiVersion: v1 kind: Secret metadata: name: controller1-bmc namespace: openshift-machine-api type: Opaque data: username: <base64_of_uid> password: <base64_of_pwd> --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: controller1 namespace: openshift-machine-api spec: bmc: address: <protocol>://<bmc_url> 1 credentialsName: \"controller1-bmc\" bootMACAddress: <nic1_mac_address> customDeploy: method: install_coreos externallyProvisioned: true 2 online: true userData: name: controller-user-data-managed namespace: openshift-machine-api",
"oc create -f controller.yaml",
"secret/controller1-bmc created baremetalhost.metal3.io/controller1 created",
"oc get bmh -A",
"NAMESPACE NAME STATE CONSUMER ONLINE ERROR AGE openshift-machine-api controller1 externally provisioned true 13s",
"oc adm drain app1 --force --ignore-daemonsets=true",
"node/app1 cordoned WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-node-tuning-operator/tuned-tvthg, openshift-dns/dns- default-9q6rz, openshift-dns/node-resolver-zvt42, openshift-image-registry/node-ca-mzxth, openshift-ingress-cana ry/ingress-canary-qq5lf, openshift-machine-config-operator/machine-config-daemon-v79dm, openshift-monitoring/nod e-exporter-2vn59, openshift-multus/multus-additional-cni-plugins-wssvj, openshift-multus/multus-fn8tg, openshift -multus/network-metrics-daemon-5qv55, openshift-network-diagnostics/network-check-target-jqxn2, openshift-ovn-ku bernetes/ovnkube-node-rsvqg evicting pod openshift-operator-lifecycle-manager/collect-profiles-27766965-258vp evicting pod openshift-operator-lifecycle-manager/collect-profiles-27766950-kg5mk evicting pod openshift-operator-lifecycle-manager/collect-profiles-27766935-stf4s pod/collect-profiles-27766965-258vp evicted pod/collect-profiles-27766950-kg5mk evicted pod/collect-profiles-27766935-stf4s evicted node/app1 drained",
"oc edit bmh -n openshift-machine-api <host_name>",
"customDeploy: method: install_coreos",
"oc get bmh -A",
"NAMESPACE NAME STATE CONSUMER ONLINE ERROR AGE openshift-machine-api controller1 externally provisioned true 58m openshift-machine-api worker1 deprovisioning true 57m",
"oc delete bmh -n openshift-machine-api <bmh_name>",
"oc delete node <node_name>",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION controller1 Ready master,worker 2d23h v1.24.0+dc5a2fd"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_bare_metal/scaling-a-user-provisioned-cluster-with-the-bare-metal-operator |
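Two details from the procedures above are worth illustrating: producing the base64 strings referenced by the <base64_of_uid> and <base64_of_pwd> placeholders, and the shape of a virtual-media BMC address, which the limitations section requires when scaling with the BMO. The credentials, IP address, and system paths below are placeholders.

```shell
# Generate the base64 values for the BMC Secret (placeholder credentials).
echo -n 'root' | base64      # cm9vdA==
echo -n 'calvin' | base64    # Y2Fsdmlu

# Example virtual-media BMC address formats for the bmc.address field:
#   redfish-virtualmedia://192.168.111.1/redfish/v1/Systems/1
#   idrac-virtualmedia://192.168.111.1/redfish/v1/Systems/System.Embedded.1
```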
Chapter 1. Workflow for converting a virtualization cluster to a hyperconverged cluster | Chapter 1. Workflow for converting a virtualization cluster to a hyperconverged cluster Verify that your virtualization hosts use Red Hat Virtualization 4.4 or higher, and meet Red Hat Hyperconverged Infrastructure for Virtualization Support Requirements . Subscribe to software repositories . Convert virtualization hosts to hyperconverged hosts . Create Red Hat Gluster Storage volumes using storage on the converted host . | null | https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/converting_a_virtualization_cluster_to_a_hyperconverged_cluster/workflow-convert-rhv-rhhi |
2.9.2. Performance Tuning With GFS2 | 2.9.2. Performance Tuning With GFS2 It is usually possible to alter the way in which a troublesome application stores its data in order to gain a considerable performance advantage. A typical example of a troublesome application is an email server. These are often laid out with a spool directory containing files for each user ( mbox ), or with a directory for each user containing a file for each message ( maildir ). When requests arrive over IMAP, the ideal arrangement is to give each user an affinity to a particular node. That way their requests to view and delete email messages will tend to be served from the cache on that one node. Obviously if that node fails, then the session can be restarted on a different node. When mail arrives by means of SMTP, then again the individual nodes can be set up so as to pass a certain user's mail to a particular node by default. If the default node is not up, then the message can be saved directly into the user's mail spool by the receiving node. Again this design is intended to keep particular sets of files cached on just one node in the normal case, but to allow direct access in the case of node failure. This setup allows the best use of GFS2's page cache and also makes failures transparent to the application, whether imap or smtp . Backup is often another tricky area. Again, if it is possible it is greatly preferable to back up the working set of each node directly from the node which is caching that particular set of inodes. If you have a backup script which runs at a regular point in time, and that seems to coincide with a spike in the response time of an application running on GFS2, then there is a good chance that the cluster may not be making the most efficient use of the page cache. Obviously, if you are in the (enviable) position of being able to stop the application in order to perform a backup, then this will not be a problem. On the other hand, if a backup is run from just one node, then after it has completed a large portion of the file system will be cached on that node, with a performance penalty for subsequent accesses from other nodes. This can be mitigated to a certain extent by dropping the VFS page cache on the backup node after the backup has completed with following command: However this is not as good a solution as taking care to ensure the working set on each node is either shared, mostly read only across the cluster, or accessed largely from a single node. | [
"echo -n 3 >/proc/sys/vm/drop_caches"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/global_file_system_2/gfs2_performance_tuning |
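A backup wrapper on the node performing the backup might apply the cache-dropping advice above as in the following sketch; the source and destination paths are placeholders, and the backup tool itself is incidental.

```shell
#!/bin/bash
# Run the backup from a single node, then release the page cache so later
# accesses from other cluster nodes do not pay a caching penalty.
tar -czf /backup/gfs2-$(date +%F).tar.gz /mnt/gfs2/project

# Flush dirty pages before asking the kernel to drop the clean caches.
sync
echo -n 3 > /proc/sys/vm/drop_caches
```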
Chapter 4. Overview of image configuration | Chapter 4. Overview of image configuration The Image Configuration tests, also known as cloud/configuration , confirm that the image is configured in accordance with Red Hat standards so that customers have a uniform and consistent experience across multiple cloud providers and images in an integrated environment. The cloud/configuration test includes the following subtests: 4.1. Default system logging Confirms the default system logging service (syslog) is configured to store the logs in the /var/log/ directory of the image to allow quick issue resolution when needed. Success criteria Basic system logging is stored in /var/log/ directory on the image. 4.2. Network configuration test Network configuration confirms that the default firewall service (iptables) is running, port 22 is open with SSHD running, ports 80 and 443 are open or closed, and that all other ports are closed. This ensures that the image is protected from unauthorized access by default, with a known access configuration. This also ensures that customers have SSH access to the image and are able to quickly deploy HTTP applications without additional configuration. The image may have other ports open if they are necessary for proper operation of the cloud infrastructure but such ports must be documented. This test displays status (Pass) at runtime only if ports 22, 80 (optional), 443 (optional) are open on the image. If other ports are open, this test requests a description of the open ports for review at Red Hat to confirm success or failure. Note As part of the certification process, the Red Hat Certification application by default runs on port 8009. The Red Hat Certification application may also run on another open port during certification testing but it is recommended to open this port only during the testing and not as default in the configuration of an image. Success criteria Depending on the RHEL version, ensure that the following services are enabled and running: RHEL version Services RHEL 9 firewalld or nftables RHEL 8.3 and later firewalld or nftables RHEL 8 to 8.2 firewalld and nftables or firewalld and iptables sshd is enabled and running on port 22 and is accessible Any other ports open are required for proper operation of the cloud infrastructure and are documented Red Hat Certification application is running on port 8009 (or another port as configured) All other ports are closed Note The httpd service is allowed but not required to be running on port 80 and/or port 443. 4.3. Default OS runlevel Confirms that the current system runlevel is 3, 4, or 5. This subtest ensures that the image is operating in the desired mode/state with all the required system services (for example networking) running. Success criteria The current runlevel is 3, 4, or 5. Additional resources For more information about runlevels, see: RHEL 9 : Working with systemd targets . RHEL 8 : Working with systemd targets . 4.4. System services The system services confirms the root user can start and stop services on the system. This ensures that your customers who have system administration privileges can access/work with applications and services on the system and perform all the tasks which require administrative access in a seamless manner. The system services also ensures that there is no gap between the configured and actual state of the installed system services. Success criteria The root user can start and stop system services provided by the Red Hat product. 
For all the installed system services, actual status should match to their configured status. For instance if the service is enabled then it should be in running state. Additional resources For more information about gaining the required privileges, see: RHEL 9 : Managing sudo access . RHEL 8 : Managing sudo access . 4.5. Subscription services Confirms that the required Red Hat subscriptions are configured, available and working on the image and that the update mechanism is Red Hat Satellite or RHUI. This ensures that customers are able to obtain access to the packages and updates they need to support their applications through standard Red Hat package update or delivery mechanisms. Success criteria The image is configured and able to download, install, and upgrade a package from Red Hat Satellite or the RHUI subscription management services. | null | https://docs.redhat.com/en/documentation/red_hat_certified_cloud_and_service_provider_certification/2025/html/red_hat_certified_cloud_and_service_provider_certification_policy_guide/assembly-image-configuration_cloud-image-pol-supportability-tests |
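Before submitting an image for certification, the success criteria above can be spot-checked manually. The following commands are an informal sketch, not the certification tooling itself, and assume a RHEL 8.3 or later image where firewalld is the expected firewall service.

```shell
# Default system logging is stored under /var/log/.
ls -l /var/log/messages

# Firewall and SSH services are enabled and running.
systemctl is-active firewalld sshd

# Only the expected ports (22, optionally 80/443, plus documented ports) listen.
ss -tlnp

# The default target maps to runlevel 3, 4, or 5.
systemctl get-default
who -r

# The image can install and remove a package through its update mechanism.
yum -y install zsh && yum -y remove zsh
```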
Chapter 15. Bean | Chapter 15. Bean Only producer is supported The Bean component binds beans to Camel message exchanges. 15.1. Dependencies When using bean with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-bean-starter</artifactId> </dependency> 15.2. URI format Where beanID can be any string which is used to look up the bean in the Registry 15.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 15.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 15.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allows you to externalize the configuration from your code, giving you more flexible and reusable code. 15.4. Component Options The Bean component supports 4 options, which are listed below. Name Description Default Type cache (producer) Deprecated Use singleton option instead. true Boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean scope (producer) Scope of bean. When using singleton scope (default) the bean is created or looked up only once and reused for the lifetime of the endpoint. The bean should be thread-safe in case concurrent threads is calling the bean at the same time. When using request scope the bean is created or looked up once per request (exchange). This can be used if you want to store state on a bean while processing a request and you want to call the same bean instance multiple times while processing the request. The bean does not have to be thread-safe as the instance is only called from the same request. When using delegate scope, then the bean will be looked up or created per call. However in case of lookup then this is delegated to the bean registry such as Spring or CDI (if in use), which depends on their configuration can act as either singleton or prototype scope. 
so when using prototype then this depends on the delegated registry. Enum values: Singleton Request Prototype Singleton BeanScope autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 15.5. Endpoint Options The Bean endpoint is configured using URI syntax: with the following path and query parameters: 15.5.1. Path Parameters (1 parameters) Name Description Default Type beanName (common) Required Sets the name of the bean to invoke. String 15.5.2. Query Parameters (5 parameters) Name Description Default Type cache (common) Deprecated Use scope option instead. Boolean method (common) Sets the name of the method to invoke on the bean. String scope (common) Scope of bean. When using singleton scope (default) the bean is created or looked up only once and reused for the lifetime of the endpoint. The bean should be thread-safe in case concurrent threads is calling the bean at the same time. When using request scope the bean is created or looked up once per request (exchange). This can be used if you want to store state on a bean while processing a request and you want to call the same bean instance multiple times while processing the request. The bean does not have to be thread-safe as the instance is only called from the same request. When using prototype scope, then the bean will be looked up or created per call. However in case of lookup then this is delegated to the bean registry such as Spring or CDI (if in use), which depends on their configuration can act as either singleton or prototype scope. so when using prototype then this depends on the delegated registry. Enum values: Singleton Request Prototype Singleton BeanScope lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean parameters (advanced) Used for configuring additional properties on the bean. Map 15.6. Examples The object instance that is used to consume messages must be explicitly registered with the Registry. For example, if you are using Spring you must define the bean in the Spring configuration XML file. You can also register beans manually via Camel's Registry with the bind method. Once an endpoint has been registered, you can build Camel routes that use it to process exchanges. A bean: endpoint cannot be defined as the input to the route; i.e. you cannot consume from it, you can only route from some inbound message Endpoint to the bean endpoint as output. So consider using a direct: or queue: endpoint as the input. You can use the createProxy() methods on ProxyHelper to create a proxy that will generate exchanges and send them to any endpoint: And the same route using XML DSL: <route> <from uri="direct:hello"/> <to uri="bean:bye"/> </route> 15.7. 
Bean as endpoint Camel also supports invoking Bean as an Endpoint. What happens is that when the exchange is routed to the myBean Camel will use the Bean Binding to invoke the bean. The source for the bean is just a plain POJO. Camel will use Bean Binding to invoke the sayHello method, by converting the Exchange's In body to the String type and storing the output of the method on the Exchange Out body. 15.8. Java DSL bean syntax Java DSL comes with syntactic sugar for the component. Instead of specifying the bean explicitly as the endpoint (i.e. to("bean:beanName") ) you can use the following syntax: // Send message to the bean endpoint // and invoke method resolved using Bean Binding. from("direct:start").bean("beanName"); // Send message to the bean endpoint // and invoke given method. from("direct:start").bean("beanName", "methodName"); Instead of passing name of the reference to the bean (so that Camel will lookup for it in the registry), you can specify the bean itself: // Send message to the given bean instance. from("direct:start").bean(new ExampleBean()); // Explicit selection of bean method to be invoked. from("direct:start").bean(new ExampleBean(), "methodName"); // Camel will create the instance of bean and cache it for you. from("direct:start").bean(ExampleBean.class); 15.9. Bean Binding How bean methods to be invoked are chosen (if they are not specified explicitly through the method parameter) and how parameter values are constructed from the Message are all defined by the Bean Binding mechanism which is used throughout all of the various Bean Integration mechanisms in Camel. 15.10. Spring Boot Auto-Configuration The component supports 13 options, which are listed below. Name Description Default Type camel.component.bean.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.bean.enabled Whether to enable auto configuration of the bean component. This is enabled by default. Boolean camel.component.bean.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.bean.scope Scope of bean. When using singleton scope (default) the bean is created or looked up only once and reused for the lifetime of the endpoint. The bean should be thread-safe in case concurrent threads is calling the bean at the same time. When using request scope the bean is created or looked up once per request (exchange). This can be used if you want to store state on a bean while processing a request and you want to call the same bean instance multiple times while processing the request. The bean does not have to be thread-safe as the instance is only called from the same request. 
When using delegate scope, then the bean will be looked up or created per call. However in case of lookup then this is delegated to the bean registry such as Spring or CDI (if in use), which depends on their configuration can act as either singleton or prototype scope. so when using prototype then this depends on the delegated registry. BeanScope camel.component.class.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.class.enabled Whether to enable auto configuration of the class component. This is enabled by default. Boolean camel.component.class.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.class.scope Scope of bean. When using singleton scope (default) the bean is created or looked up only once and reused for the lifetime of the endpoint. The bean should be thread-safe in case concurrent threads is calling the bean at the same time. When using request scope the bean is created or looked up once per request (exchange). This can be used if you want to store state on a bean while processing a request and you want to call the same bean instance multiple times while processing the request. The bean does not have to be thread-safe as the instance is only called from the same request. When using delegate scope, then the bean will be looked up or created per call. However in case of lookup then this is delegated to the bean registry such as Spring or CDI (if in use), which depends on their configuration can act as either singleton or prototype scope. so when using prototype then this depends on the delegated registry. BeanScope camel.language.bean.enabled Whether to enable auto configuration of the bean language. This is enabled by default. Boolean camel.language.bean.scope Scope of bean. When using singleton scope (default) the bean is created or looked up only once and reused for the lifetime of the endpoint. The bean should be thread-safe in case concurrent threads is calling the bean at the same time. When using request scope the bean is created or looked up once per request (exchange). This can be used if you want to store state on a bean while processing a request and you want to call the same bean instance multiple times while processing the request. The bean does not have to be thread-safe as the instance is only called from the same request. When using prototype scope, then the bean will be looked up or created per call. However in case of lookup then this is delegated to the bean registry such as Spring or CDI (if in use), which depends on their configuration can act as either singleton or prototype scope. 
So when using prototype scope then this depends on the bean registry implementation. Singleton String camel.language.bean.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.component.bean.cache Deprecated Use singleton option instead. true Boolean camel.component.class.cache Deprecated Use singleton option instead. true Boolean | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-bean-starter</artifactId> </dependency>",
"bean:beanName[?options]",
"bean:beanName",
"<route> <from uri=\"direct:hello\"/> <to uri=\"bean:bye\"/> </route>",
"// Send message to the bean endpoint // and invoke method resolved using Bean Binding. from(\"direct:start\").bean(\"beanName\"); // Send message to the bean endpoint // and invoke given method. from(\"direct:start\").bean(\"beanName\", \"methodName\");",
"// Send message to the given bean instance. from(\"direct:start\").bean(new ExampleBean()); // Explicit selection of bean method to be invoked. from(\"direct:start\").bean(new ExampleBean(), \"methodName\"); // Camel will create the instance of bean and cache it for you. from(\"direct:start\").bean(ExampleBean.class);"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-bean-component-starter |
Chapter 1. Preparing your environment for installation | Chapter 1. Preparing your environment for installation 1.1. System requirements The following requirements apply to the networked base operating system: x86_64 architecture The latest version of Red Hat Enterprise Linux 8 4-core 2.0 GHz CPU at a minimum A minimum of 12 GB RAM is required for Capsule Server to function. In addition, a minimum of 4 GB RAM of swap space is also recommended. Capsule running with less RAM than the minimum value might not operate correctly. A unique host name, which can contain lower-case letters, numbers, dots (.) and hyphens (-) A current Red Hat Satellite subscription Administrative user (root) access Full forward and reverse DNS resolution using a fully-qualified domain name Satellite only supports UTF-8 encoding. If your territory is USA and your language is English, set en_US.utf-8 as the system-wide locale settings. For more information about configuring system locale in Red Hat Enterprise Linux, see Configuring System Locale guide . Your Satellite must have the Red Hat Satellite Infrastructure Subscription manifest in your Customer Portal. Satellite must have satellite-capsule-6.x repository enabled and synced. To create, manage, and export a Red Hat Subscription Manifest in the Customer Portal, see Creating and managing manifests for a connected Satellite Server in Subscription Central . Satellite Server and Capsule Server do not support shortnames in the hostnames. When using custom certificates, the Common Name (CN) of the custom certificate must be a fully qualified domain name (FQDN) instead of a shortname. This does not apply to the clients of a Satellite. Before you install Capsule Server, ensure that your environment meets the requirements for installation. Warning The version of Capsule must match with the version of Satellite installed. It should not be different. For example, the Capsule version 6.15 cannot be registered with the Satellite version 6.14. Capsule Server must be installed on a freshly provisioned system that serves no other function except to run Capsule Server. The freshly provisioned system must not have the following users provided by external identity providers to avoid conflicts with the local users that Capsule Server creates: apache foreman-proxy postgres pulp puppet redis For more information on scaling your Capsule Servers, see Capsule Server scalability considerations . Certified hypervisors Capsule Server is fully supported on both physical systems and virtual machines that run on hypervisors that are supported to run Red Hat Enterprise Linux. For more information about certified hypervisors, see Certified Guest Operating Systems in Red Hat OpenStack Platform, Red Hat Virtualization, Red Hat OpenShift Virtualization and Red Hat Enterprise Linux with KVM . SELinux mode SELinux must be enabled, either in enforcing or permissive mode. Installation with disabled SELinux is not supported. FIPS mode You can install Capsule on a Red Hat Enterprise Linux system that is operating in FIPS mode. You cannot enable FIPS mode after the installation of Capsule. For more information, see Switching RHEL to FIPS mode in Red Hat Enterprise Linux 8 Security hardening . Note Satellite supports DEFAULT and FIPS crypto-policies. The FUTURE crypto-policy is not supported for Satellite and Capsule installations. The FUTURE policy is a stricter forward-looking security level intended for testing a possible future policy. 
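If you want to confirm the active policy and FIPS state before installing Capsule, the standard RHEL 8 utilities can be used; this is a minimal sketch, assuming the crypto-policies-scripts and fips-mode-setup tools that ship with RHEL 8 are installed:
# show the active system-wide crypto policy (for example DEFAULT or FIPS)
update-crypto-policies --show
# report whether the system was booted in FIPS mode
fips-mode-setup --check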
For more information, see Using system-wide cryptographic policies in the Red Hat Enterprise Linux guide. 1.2. Storage requirements The following table details storage requirements for specific directories. These values are based on expected use case scenarios and can vary according to individual environments. The runtime size was measured with Red Hat Enterprise Linux 6, 7, and 8 repositories synchronized. Table 1.1. Storage requirements for Capsule Server installation Directory Installation Size Runtime Size /var/lib/pulp 1 MB 300 GB /var/lib/pgsql 100 MB 20 GB /usr 3 GB Not Applicable /opt/puppetlabs 500 MB Not Applicable The size of the PostgreSQL database on your Capsule Server can grow significantly with an increasing number of lifecycle environments, content views, or repositories that are synchronized from your Satellite Server. In the largest Satellite environments, the size of /var/lib/pgsql on Capsule Server can grow to double or triple the size of /var/lib/pgsql on your Satellite Server. 1.3. Storage guidelines Consider the following guidelines when installing Capsule Server to increase efficiency. If you mount the /tmp directory as a separate file system, you must use the exec mount option in the /etc/fstab file. If /tmp is already mounted with the noexec option, you must change the option to exec and re-mount the file system. This is a requirement for the puppetserver service to work. Because most Capsule Server data is stored in the /var directory, mounting /var on LVM storage can help the system to scale. Use high-bandwidth, low-latency storage for the /var/lib/pulp/ directories. As Red Hat Satellite has many operations that are I/O intensive, using high latency, low-bandwidth storage causes performance degradation. Ensure your installation has a speed in the range 60 - 80 Megabytes per second. You can use the storage-benchmark script to get this data. For more information on using the storage-benchmark script, see Impact of Disk Speed on Satellite Operations . File system guidelines Do not use the GFS2 file system as the input-output latency is too high. Log file storage Log files are written to /var/log/messages/, /var/log/httpd/ , and /var/lib/foreman-proxy/openscap/content/ . You can manage the size of these files using logrotate . For more information, see How to use logrotate utility to rotate log files . The exact amount of storage you require for log messages depends on your installation and setup. SELinux considerations for NFS mount When the /var/lib/pulp directory is mounted using an NFS share, SELinux blocks the synchronization process. To avoid this, specify the SELinux context of the /var/lib/pulp directory in the file system table by adding the following lines to /etc/fstab : If NFS share is already mounted, remount it using the above configuration and enter the following command: Duplicated packages Packages that are duplicated in different repositories are only stored once on the disk. Additional repositories containing duplicate packages require less additional storage. The bulk of storage resides in the /var/lib/pulp/ directory. These end points are not manually configurable. Ensure that storage is available on the /var file system to prevent storage problems. Symbolic links You cannot use symbolic links for /var/lib/pulp/ . Synchronized RHEL ISO If you plan to synchronize RHEL content ISOs to Satellite, note that all minor versions of Red Hat Enterprise Linux also synchronize. You must plan to have adequate storage on your Satellite to manage this. 1.4. 
Supported operating systems You can install the operating system from a disc, local ISO image, kickstart, or any other method that Red Hat supports. Red Hat Capsule Server is supported on the latest version of Red Hat Enterprise Linux 8 that is available at the time when Capsule Server is installed. Previous versions of Red Hat Enterprise Linux, including EUS or z-stream, are not supported. The following operating systems are supported by the installer, have packages, and are tested for deploying Satellite: Table 1.2. Operating systems supported by satellite-installer Operating System Architecture Notes Red Hat Enterprise Linux 8 x86_64 only Red Hat advises against using an existing system because the Satellite installer will affect the configuration of several components. Red Hat Capsule Server requires a Red Hat Enterprise Linux installation with the @Base package group with no other package-set modifications, and without third-party configurations or software not directly necessary for the operation of the server. This restriction includes hardening and other non-Red Hat security software. If you require such software in your infrastructure, install and verify a complete working Capsule Server first, then create a backup of the system before adding any non-Red Hat software. Do not register Capsule Server to the Red Hat Content Delivery Network (CDN). Red Hat does not support using the system for anything other than running Capsule Server. 1.5. Port and firewall requirements For the components of Satellite architecture to communicate, ensure that the required network ports are open and free on the base operating system. You must also ensure that the required network ports are open on any network-based firewalls. The installation of a Capsule Server fails if the ports between Satellite Server and Capsule Server are not open before installation starts. Use this information to configure any network-based firewalls. Note that some cloud solutions must be specifically configured to allow communications between machines because they isolate machines similarly to network-based firewalls. If you use an application-based firewall, ensure that the application-based firewall permits all applications that are listed in the tables and known to your firewall. If possible, disable the application checking and allow open port communication based on the protocol. Integrated Capsule Satellite Server has an integrated Capsule and any host that is directly connected to Satellite Server is a Client of Satellite in the context of this section. This includes the base operating system on which Capsule Server is running. Clients of Capsule Hosts which are clients of Capsules, other than Satellite's integrated Capsule, do not need access to Satellite Server. For more information on Satellite Topology, see Capsule Networking in Overview, concepts, and deployment considerations . Required ports can change based on your configuration. The following tables indicate the destination port and the direction of network traffic: Table 1.3.
Capsule incoming traffic Destination Port Protocol Service Source Required For Description 53 TCP and UDP DNS DNS Servers and clients Name resolution DNS (optional) 67 UDP DHCP Client Dynamic IP DHCP (optional) 69 UDP TFTP Client TFTP Server (optional) 443, 80 TCP HTTPS, HTTP Client Content Retrieval Content 443, 80 TCP HTTPS, HTTP Client Content Host Registration Capsule CA RPM installation 443 TCP HTTPS Red Hat Satellite Content Mirroring Management 443 TCP HTTPS Red Hat Satellite Capsule API Smart Proxy functionality 443 TCP HTTPS Client Content Host registration Initiation Uploading facts Sending installed packages and traces 1883 TCP MQTT Client Pull based REX (optional) Content hosts for REX job notification (optional) 8000 TCP HTTP Client Provisioning templates Template retrieval for client installers, iPXE or UEFI HTTP Boot 8000 TCP HTTP Client PXE Boot Installation 8140 TCP HTTPS Client Puppet agent Client updates (optional) 8443 TCP HTTPS Client Content Host registration Deprecated and only needed for Client hosts deployed before upgrades 9090 TCP HTTPS Red Hat Satellite Capsule API Capsule functionality 9090 TCP HTTPS Client Register Endpoint Client registration with an external Capsule Server 9090 TCP HTTPS Client OpenSCAP Configure Client (if the OpenSCAP plugin is installed) 9090 TCP HTTPS Discovered Node Discovery Host discovery and provisioning (if the discovery plugin is installed) Any host that is directly connected to Satellite Server is a client in this context because it is a client of the integrated Capsule. This includes the base operating system on which a Capsule Server is running. A DHCP Capsule performs ICMP ping and TCP echo connection attempts to hosts in subnets with DHCP IPAM set to find out if an IP address considered for use is free. This behavior can be turned off using satellite-installer --foreman-proxy-dhcp-ping-free-ip=false . Table 1.4. Capsule outgoing traffic Destination Port Protocol Service Destination Required For Description ICMP ping Client DHCP Free IP checking (optional) 7 TCP echo Client DHCP Free IP checking (optional) 22 TCP SSH Target host Remote execution Run jobs 53 TCP and UDP DNS DNS Servers on the Internet DNS Server Resolve DNS records (optional) 53 TCP and UDP DNS DNS Server Capsule DNS Validation of DNS conflicts (optional) 68 UDP DHCP Client Dynamic IP DHCP (optional) 443 TCP HTTPS Satellite Capsule Capsule Configuration management Template retrieval OpenSCAP Remote Execution result upload 443 TCP HTTPS Red Hat Portal SOS report Assisting support cases (optional) 443 TCP HTTPS Satellite Content Sync 443 TCP HTTPS Satellite Client communication Forward requests from Client to Satellite 443 TCP HTTPS Infoblox DHCP Server DHCP management When using Infoblox for DHCP, management of the DHCP leases (optional) 623 Client Power management BMC On/Off/Cycle/Status 7911 TCP DHCP, OMAPI DHCP Server DHCP The DHCP target is configured using --foreman-proxy-dhcp-server and defaults to localhost ISC and remote_isc use a configurable port that defaults to 7911 and uses OMAPI 8443 TCP HTTPS Client Discovery Capsule sends reboot command to the discovered host (optional) Note ICMP to Port 7 UDP and TCP must not be rejected, but can be dropped. The DHCP Capsule sends an ECHO REQUEST to the Client network to verify that an IP address is free. A response prevents IP addresses from being allocated. 1.6. 
Enabling connections from Satellite Server and clients to a Capsule Server On the base operating system on which you want to install Capsule, you must enable incoming connections from Satellite Server and clients to Capsule Server and make these rules persistent across reboots. Procedure Open the ports for clients on Capsule Server: Allow access to services on Capsule Server: Make the changes persistent: Verification Enter the following command: For more information, see Using and Configuring firewalld in Red Hat Enterprise Linux 8 Securing networks . | [
"nfs.example.com:/nfsshare /var/lib/pulp nfs context=\"system_u:object_r:var_lib_t:s0\" 1 2",
"restorecon -R /var/lib/pulp",
"firewall-cmd --add-port=\"8000/tcp\" --add-port=\"9090/tcp\"",
"firewall-cmd --add-service=dns --add-service=dhcp --add-service=tftp --add-service=http --add-service=https --add-service=puppetmaster",
"firewall-cmd --runtime-to-permanent",
"firewall-cmd --list-all"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/installing_capsule_server/preparing-environment-for-capsule-installation |
Chapter 2. Backing up the undercloud and the control plane nodes by using the Relax-and-Recover tool | Chapter 2. Backing up the undercloud and the control plane nodes by using the Relax-and-Recover tool You must back up your undercloud node and your control plane nodes when you upgrade or update your Red Hat Openstack Platform (RHOSP). You can backup your undercloud node and your control plane nodes using the Relax-and-Recover (ReaR) tool. To back up and restore your undercloud and your control plane nodes using the ReaR tool, you must complete the following procedures: Backing up the undercloud node Backing up the control plane nodes Restoring the undercloud and control plane nodes 2.1. Backing up the undercloud node by using the Relax-and-Recover tool To back up the undercloud node, you configure the backup node, install the Relax-and-Recover tool on the undercloud node, and then create the backup image. You can create backups as a part of your regular environment maintenance. In addition, you must back up the undercloud node before performing updates or upgrades. You can use the backups to restore the undercloud node to its state if an error occurs during an update or upgrade. 2.1.1. Supported backup formats and protocols The undercloud and backup and restore process uses the open-source tool Relax-and-Recover (ReaR) to create and restore bootable backup images. ReaR is written in Bash and supports multiple image formats and multiple transport protocols. The following list shows the backup formats and protocols that Red Hat OpenStack Platform supports when you use ReaR to back up and restore the undercloud and control plane. Bootable media formats ISO File transport protocols SFTP NFS 2.1.2. Configuring the backup storage location Before you create a backup of the control plane nodes, configure the backup storage location in the bar-vars.yaml environment file. This file stores the key-value parameters that you want to pass to the backup execution. Procedure Log in to the undercloud as the stack user. Source the stackrc file: Create the bar-vars.yaml file: touch /home/stack/bar-vars.yaml In the bar-vars.yaml file, configure the backup storage location: If you use an NFS server, add the following parameters and set the values of the IP address of your NFS server and backup storage folder: tripleo_backup_and_restore_server: <ip_address> tripleo_backup_and_restore_shared_storage_folder: <backup_dir> Replace <ip_address> and <backup_dir> with the values that apply to your environment. By default, the tripleo_backup_and_restore_server parameter value is 192.168.24.1 . If you use an SFTP server, add the tripleo_backup_and_restore_output_url parameter and set the values of the URL and credentials of the SFTP server: tripleo_backup_and_restore_output_url: sftp://<user>:<password>@<backup_node>/ tripleo_backup_and_restore_backup_url: iso:///backup/ Replace <user> , <password> , and <backup_node> with the backup node URL and credentials. 2.1.3. Optional: Configuring backup encryption You can encrypt backups as an additional security measure to protect sensitive data. Procedure In the bar-vars.yaml file, add the following parameters: tripleo_backup_and_restore_crypt_backup_enabled: true tripleo_backup_and_restore_crypt_backup_password: <password> Replace <password> with the password you want to use to encrypt the backup. 2.1.4. Installing and configuring an NFS server on the backup node You can install and configure a new NFS server to store the backup file. 
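Before setting up the NFS server, it can help to see the settings from the two previous sections together: a complete bar-vars.yaml for an NFS backup with optional encryption might look like the following sketch, which the setup steps below then consume through --extra-vars. The address, folder, and password shown are placeholder values, not values mandated by this procedure:
cat <<EOF > /home/stack/bar-vars.yaml
# backup target: the NFS server and the exported folder (defaults shown)
tripleo_backup_and_restore_server: 192.168.24.1
tripleo_backup_and_restore_shared_storage_folder: /ctl_plane_backups
# optional: encrypt the backup archive
tripleo_backup_and_restore_crypt_backup_enabled: true
tripleo_backup_and_restore_crypt_backup_password: <password>
EOF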
To install and configure an NFS server on the backup node, create an inventory file, create an SSH key, and run the openstack undercloud backup command with the NFS server options. Important If you previously installed and configured an NFS or SFTP server, you do not need to complete this procedure. You enter the server information when you set up ReaR on the node that you want to back up. By default, the Relax and Recover (ReaR) IP address parameter for the NFS server is 192.168.24.1 . You must add the parameter tripleo_backup_and_restore_server to set the IP address value that matches your environment. Procedure On the undercloud node, source the undercloud credentials: On the undercloud node, create an inventory file for the backup node: (undercloud) [stack@undercloud ~]USD cat <<'EOF'> ~/nfs-inventory.yaml [BackupNode] <backup_node> ansible_host=<ip_address> ansible_user=<user> EOF Replace <backup_node> , <ip_address> , and <user> with the values that apply to your environment. Copy the public SSH key from the undercloud node to the backup node. (undercloud) [stack@undercloud ~]USD ssh-copy-id -i ~/.ssh/id_rsa.pub <backup_node> Replace <backup_node> with the path and name of the backup node. Configure the NFS server on the backup node: (undercloud) [stack@undercloud ~]USD openstack undercloud backup --setup-nfs --extra-vars /home/stack/bar-vars.yaml --inventory /home/stack/nfs-inventory.yaml 2.1.5. Installing ReaR on the undercloud node Before you create a backup of the undercloud node, install and configure Relax and Recover (ReaR) on the undercloud. Prerequisites You have an NFS or SFTP server installed and configured on the backup node. For more information about creating a new NFS server, see Section 2.1.4, "Installing and configuring an NFS server on the backup node" . Procedure On the undercloud node, source the undercloud credentials: [stack@undercloud-0 ~]USD source stackrc If you have not done so before, extract the static ansible inventory file from the location in which it was saved during installation: (undercloud) [stack@undercloud ~]USD cp ~/overcloud-deploy/<stack>/tripleo-ansible-inventory.yaml ~/tripleo-inventory.yaml Replace <stack> with the name of your stack. By default, the name of the stack is overcloud . Install ReaR on the undercloud node: (undercloud) [stack@undercloud ~]USD openstack undercloud backup --setup-rear --extra-vars /home/stack/bar-vars.yaml --inventory /home/stack/tripleo-inventory.yaml If your system uses the UEFI boot loader, perform the following steps on the undercloud node: Install the following tools: USD sudo dnf install dosfstools efibootmgr Enable UEFI backup in the ReaR configuration file located in /etc/rear/local.conf by replacing the USING_UEFI_BOOTLOADER parameter value 0 with the value 1 and adding UEFI_BOOTLOADER=/boot/efi/EFI/redhat/shimx64.efi . 2.1.6. Optional: Creating a standalone database backup of the undercloud nodes You can include standalone undercloud database backups in your routine backup schedule to provide additional data security. A full backup of an undercloud node includes a database backup of the undercloud node. But if a full undercloud restoration fails, you might lose access to the database portion of the full undercloud backup. In this case, you can recover the database from a standalone undercloud database backup. You can create a standalone undercloud database backup in conjunction with the ReaR tool and the Snapshot and Revert tool. However, it is recommended that you back up the entire undercloud. 
For more information about creating a backup of the undercloud node, see Creating a backup of the undercloud node . Procedure Create a database backup of the undercloud nodes: openstack undercloud backup --db-only The db backup file is stored in /home/stack with the name openstack-backup-mysql-<timestamp>.sql . Additional resources Section 2.1.8, "Creating a backup of the undercloud node" Section 2.3.4, "Restoring the undercloud node database manually" 2.1.7. Configuring Open vSwitch (OVS) interfaces for backup If you use an Open vSwitch (OVS) bridge in your environment, you must manually configure the OVS interfaces before you create a backup of the undercloud or control plane nodes. The restoration process uses this information to restore the network interfaces. Procedure In the /etc/rear/local.conf file, add the NETWORKING_PREPARATION_COMMANDS parameter in the following format: Replace <command_1> and <command_2> with commands that configure the network interfaces. For example, you can add the ip link add br-ctlplane type bridge command to create the control plane bridge or add the ip link set eth0 up command to change the state of eth0 to up. You can add more commands to the parameter based on your network configuration. For example, if your undercloud has the following configuration: The NETWORKING_PREPARATION_COMMANDS parameter is formatted as follows: 2.1.8. Creating a backup of the undercloud node To create a backup of the undercloud node, use the openstack undercloud backup command. You can then use the backup to restore the undercloud node to its state in case the node becomes corrupted or inaccessible. The backup of the undercloud node includes the backup of the database that runs on the undercloud node. Note It is recommended that you create a backup of the undercloud node by using the following procedure. However, if you completed Creating a standalone database backup of the undercloud nodes , you can skip this procedure. Prerequisites You have an NFS or SFTP server installed and configured on the backup node. For more information about creating a new NFS server, see Section 2.1.4, "Installing and configuring an NFS server on the backup node" . You have installed ReaR on the undercloud node. For more information, see Section 2.1.5, "Installing ReaR on the undercloud node" . If you use an OVS bridge for your network interfaces, you have configured the OVS interfaces. For more information, see Section 2.1.7, "Configuring Open vSwitch (OVS) interfaces for backup" . Procedure Log in to the undercloud as the stack user. Retrieve the MySQL root password: [stack@undercloud ~]USD PASSWORD=USD(sudo /bin/hiera -c /etc/puppet/hiera.yaml mysql::server::root_password) Create a database backup of the undercloud node: [stack@undercloud ~]USD sudo podman exec mysql bash -c "mysqldump -uroot -pUSDPASSWORD --opt --all-databases" | sudo tee /root/undercloud-all-databases.sql On the undercloud node, source the undercloud credentials: [stack@undercloud-0 ~]USD source stackrc Create a backup of the undercloud node: (undercloud) [stack@undercloud ~]USD openstack undercloud backup --inventory /home/stack/tripleo-inventory.yaml 2.1.9. Scheduling undercloud node backups with cron You can schedule backups of the undercloud nodes with ReaR by using the Ansible backup-and-restore role. You can view the logs in the /var/log/rear-cron directory. Prerequisites You have an NFS or SFTP server installed and configured on the backup node. 
For more information about creating a new NFS server, see Section 2.1.4, "Installing and configuring an NFS server on the backup node" . You have installed ReaR on the undercloud and control plane nodes. For more information, see Section 2.2.3, "Installing ReaR on the control plane nodes" . You have sufficient available disk space at your backup location to store the backup. Procedure To schedule a backup of your control plane nodes, run the following command. The default schedule is Sundays at midnight: openstack undercloud backup --cron Optional: Customize the scheduled backup according to your deployment: To change the default backup schedule, pass a different cron schedule on the tripleo_backup_and_restore_cron parameter: openstack undercloud backup --cron --extra-vars '{"tripleo_backup_and_restore_cron": "0 0 * * 0"}' To define additional parameters that are added to the backup command when cron runs the scheduled backup, pass the tripleo_backup_and_restore_cron_extra parameter to the backup command, as shown in the following example: openstack undercloud backup --cron --extra-vars '{"tripleo_backup_and_restore_cron_extra":"--extra-vars bar-vars.yaml --inventory /home/stack/tripleo-inventory.yaml"}' To change the default user that executes the backup, pass the tripleo_backup_and_restore_cron_user parameter to the backup command, as shown in the following example: openstack undercloud backup --cron --extra-vars '{"tripleo_backup_and_restore_cron_user": "root"} 2.2. Backing up the control plane nodes by using the Relax-and-Recover tool To back up the control plane nodes, you configure the backup node, install the Relax-and-Recover tool on the control plane nodes, and create the backup image. You can create backups as a part of your regular environment maintenance. In addition, you must back up the control plane nodes before performing updates or upgrades. You can use the backups to restore the control plane nodes to their state if an error occurs during an update or upgrade. The backup process causes the failover of services that are managed by pacemaker. For this reason, you must run it during planned maintenance and expect some transactions to be lost. 2.2.1. Supported backup formats and protocols The undercloud and backup and restore process uses the open-source tool Relax-and-Recover (ReaR) to create and restore bootable backup images. ReaR is written in Bash and supports multiple image formats and multiple transport protocols. The following list shows the backup formats and protocols that Red Hat OpenStack Platform supports when you use ReaR to back up and restore the undercloud and control plane. Bootable media formats ISO File transport protocols SFTP NFS 2.2.2. Installing and configuring an NFS server on the backup node You can install and configure a new NFS server to store the backup file. To install and configure an NFS server on the backup node, create an inventory file, create an SSH key, and run the openstack undercloud backup command with the NFS server options. Important If you previously installed and configured an NFS or SFTP server, you do not need to complete this procedure. You enter the server information when you set up ReaR on the node that you want to back up. By default, the Relax and Recover (ReaR) IP address parameter for the NFS server is 192.168.24.1 . You must add the parameter tripleo_backup_and_restore_server to set the IP address value that matches your environment. 
Procedure On the undercloud node, source the undercloud credentials: On the undercloud node, create an inventory file for the backup node: (undercloud) [stack@undercloud ~]USD cat <<'EOF'> ~/nfs-inventory.yaml [BackupNode] <backup_node> ansible_host=<ip_address> ansible_user=<user> EOF Replace <backup_node> , <ip_address> , and <user> with the values that apply to your environment. Copy the public SSH key from the undercloud node to the backup node. (undercloud) [stack@undercloud ~]USD ssh-copy-id -i ~/.ssh/id_rsa.pub <backup_node> Replace <backup_node> with the path and name of the backup node. Configure the NFS server on the backup node: (undercloud) [stack@undercloud ~]USD openstack undercloud backup --setup-nfs --extra-vars /home/stack/bar-vars.yaml --inventory /home/stack/nfs-inventory.yaml 2.2.3. Installing ReaR on the control plane nodes Before you create a backup of the control plane nodes, install and configure Relax and Recover (ReaR) on each of the control plane nodes. Important Due to a known issue, the ReaR backup of overcloud nodes continues even if a Controller node is down. Ensure that all your Controller nodes are running before you run the ReaR backup. A fix is planned for a later Red Hat OpenStack Platform (RHOSP) release. For more information, see BZ#2077335 - Back up of the overcloud ctlplane keeps going even if one controller is unreachable . Prerequisites You have an NFS or SFTP server installed and configured on the backup node. For more information about creating a new NFS server, see Section 2.2.2, "Installing and configuring an NFS server on the backup node" . Procedure On the undercloud node, source the undercloud credentials: [stack@undercloud-0 ~]USD source stackrc If you have not done so before, extract the static ansible inventory file from the location in which it was saved during installation: (undercloud) [stack@undercloud ~]USD cp ~/overcloud-deploy/<stack>/tripleo-ansible-inventory.yaml ~/tripleo-inventory.yaml Replace <stack> with the name of your stack. By default, the name of the stack is overcloud . In the bar-vars.yaml file, configure the backup storage location: If you installed and configured your own NFS server, add the tripleo_backup_and_restore_server parameter and set the value to the IP address of your NFS server: tripleo_backup_and_restore_server: <ip_address> tripleo_backup_and_restore_shared_storage_folder: <backup_dir> Replace <ip_address> and <backup_dir> with the values that apply to your environment. By default, the tripleo_backup_and_restore_server parameter value is 192.168.24.1 .* If you use an SFTP server, add the tripleo_backup_and_restore_output_url parameter and set the values of the URL and credentials of the SFTP server: tripleo_backup_and_restore_output_url: sftp://<user>:<password>@<backup_node>/ tripleo_backup_and_restore_backup_url: iso:///backup/ Replace <user> , <password> , and <backup_node> with the backup node URL and credentials. Install ReaR on the control plane nodes: (undercloud) [stack@undercloud ~]USD openstack overcloud backup --setup-rear --extra-vars /home/stack/bar-vars.yaml --inventory /home/stack/tripleo-inventory.yaml If your system uses the UEFI boot loader, perform the following steps on the control plane nodes: Install the following tools: USD sudo dnf install dosfstools efibootmgr Enable UEFI backup in the ReaR configuration file located in /etc/rear/local.conf by replacing the USING_UEFI_BOOTLOADER parameter value 0 with the value 1 and adding UEFI_BOOTLOADER=/boot/efi/EFI/redhat/shimx64.efi . 
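A minimal sketch of making that UEFI edit from the shell, assuming the /etc/rear/local.conf generated by the setup step already contains the line USING_UEFI_BOOTLOADER=0 (if the line is absent, add it instead of relying on the substitution):
# switch ReaR to UEFI mode
sudo sed -i 's/^USING_UEFI_BOOTLOADER=0/USING_UEFI_BOOTLOADER=1/' /etc/rear/local.conf
# point ReaR at the Red Hat shim boot loader
echo 'UEFI_BOOTLOADER=/boot/efi/EFI/redhat/shimx64.efi' | sudo tee -a /etc/rear/local.conf
After the edit, /etc/rear/local.conf should contain USING_UEFI_BOOTLOADER=1 and the UEFI_BOOTLOADER line; the same edit applies to the undercloud node in Section 2.1.5.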
2.2.4. Configuring Open vSwitch (OVS) interfaces for backup If you use an Open vSwitch (OVS) bridge in your environment, you must manually configure the OVS interfaces before you create a backup of the undercloud or control plane nodes. The restoration process uses this information to restore the network interfaces. Procedure In the /etc/rear/local.conf file, add the NETWORKING_PREPARATION_COMMANDS parameter in the following format: Replace <command_1> and <command_2> with commands that configure the network interfaces. For example, you can add the ip link add br-ctlplane type bridge command to create the control plane bridge or add the ip link set eth0 up command to change the state of eth0 to up. You can add more commands to the parameter based on your network configuration. For example, if your undercloud has the following configuration: The NETWORKING_PREPARATION_COMMANDS parameter is formatted as follows: 2.2.5. Creating a backup of the control plane nodes To create a backup of the control plane nodes, use the openstack overcloud backup command. You can then use the backup to restore the control plane nodes to their state in case the nodes become corrupted or inaccessible. The backup of the control plane nodes includes the backup of the database that runs on the control plane nodes. Prerequisites You have an NFS or SFTP server installed and configured on the backup node. For more information about creating a new NFS server, see Section 2.2.2, "Installing and configuring an NFS server on the backup node" . You have installed ReaR on the control plane nodes. For more information, see Section 2.2.3, "Installing ReaR on the control plane nodes" . If you use an OVS bridge for your network interfaces, you have configured the OVS interfaces. For more information, see Section 2.2.4, "Configuring Open vSwitch (OVS) interfaces for backup" . Procedure Locate the config-drive partition on each control plane node: On each control plane node, back up the config-drive partition of each node as the root user: [root@controller-x ~]# dd if=<config_drive_partition> of=/mnt/config-drive Replace <config_drive_partition> with the name of the config-drive partition that you located in step 1. On the undercloud node, source the undercloud credentials: [stack@undercloud-0 ~]USD source stackrc Create a backup of the control plane nodes: (undercloud) [stack@undercloud ~]USD openstack overcloud backup --inventory /home/stack/tripleo-inventory.yaml The backup process runs sequentially on each control plane node without disrupting the service to your environment. 2.2.6. Scheduling control plane node backups with cron You can schedule backups of the control plane nodes with ReaR by using the Ansible backup-and-restore role. You can view the logs in the /var/log/rear-cron directory. Prerequisites You have an NFS or SFTP server installed and configured on the backup node. For more information about creating a new NFS server, see Section 2.1.4, "Installing and configuring an NFS server on the backup node" . You have installed ReaR on the undercloud and control plane nodes. For more information, see Section 2.2.3, "Installing ReaR on the control plane nodes" . You have sufficient available disk space at your backup location to store the backup. Procedure To schedule a backup of your control plane nodes, run the following command. 
The default schedule is Sundays at midnight: openstack overcloud backup --cron Optional: Customize the scheduled backup according to your deployment: To change the default backup schedule, pass a different cron schedule on the tripleo_backup_and_restore_cron parameter: openstack overcloud backup --cron --extra-vars '{"tripleo_backup_and_restore_cron": "0 0 * * 0"}' To define additional parameters that are added to the backup command when cron runs the scheduled backup, pass the tripleo_backup_and_restore_cron_extra parameter to the backup command, as shown in the following example: openstack overcloud backup --cron --extra-vars '{"tripleo_backup_and_restore_cron_extra":"--extra-vars bar-vars.yaml --inventory /home/stack/tripleo-inventory.yaml"}' To change the default user that executes the backup, pass the tripleo_backup_and_restore_cron_user parameter to the backup command, as shown in the following example: openstack overcloud backup --cron --extra-vars '{"tripleo_backup_and_restore_cron_user": "root"}' 2.3. Restoring the undercloud node and control plane nodes by using the Relax-and-Recover tool If your undercloud or control plane nodes become corrupted or if an error occurs during an update or upgrade, you can restore the undercloud or overcloud control plane nodes from a backup to their previous state. If the restore process fails to automatically restore the Galera cluster or nodes with colocated Ceph monitors, you can restore these components manually. 2.3.1. Restoring the undercloud node You can restore the undercloud node to its previous state using the backup ISO image that you created using ReaR. You can find the backup ISO images on the backup node. Burn the bootable ISO image to a DVD or download it to the undercloud node through Integrated Lights-Out (iLO) remote access. Prerequisites You have created a backup of the undercloud node. For more information, see Section 2.1.8, "Creating a backup of the undercloud node" . You have access to the backup node. If you use an OVS bridge for your network interfaces, you have access to the network configuration information that you set in the NETWORKING_PREPARATION_COMMANDS parameter. For more information, see Section 2.1.7, "Configuring Open vSwitch (OVS) interfaces for backup" . If you configured backup encryption, you must decrypt the backup before you begin the restoration process. Run the following decrypt step in the system where the backup file is located: USD dd if=backup.tar.gz | /usr/bin/openssl des3 -d -k "<encryption key>" | tar -C <backup_location> -xzvf - '*.conf' Replace <encryption key> with your encryption key. Replace <backup_location> with the folder in which you want to save the backup.tar.gz file, for example, /ctl_plane_backups/undercloud-0/ . Procedure Power off the undercloud node. Ensure that the undercloud node is powered off completely before you proceed. Boot the undercloud node with the backup ISO image. When the Relax-and-Recover boot menu displays, select Recover <undercloud_node> . Replace <undercloud_node> with the name of your undercloud node. Note If your system uses UEFI, select the Relax-and-Recover (no Secure Boot) option. Log in as the root user and restore the node: The following message displays: Welcome to Relax-and-Recover. Run "rear recover" to restore your system!
RESCUE <undercloud_node>:~ # rear recover When the undercloud node restoration process completes, the console displays the following message: Finished recovering your system Exiting rear recover Running exit tasks Power off the node: RESCUE <undercloud_node>:~ # poweroff On boot up, the node resumes its state. 2.3.2. Restoring the control plane nodes If an error occurs during an update or upgrade, you can restore the control plane nodes to their state using the backup ISO image that you have created using ReaR. To restore the control plane, you must restore all control plane nodes to ensure state consistency. You can find the backup ISO images on the backup node. Burn the bootable ISO image to a DVD or download it to the undercloud node through Integrated Lights-Out (iLO) remote access. Note Red Hat supports backups of Red Hat OpenStack Platform with native SDNs, such as Open vSwitch (OVS) and the default Open Virtual Network (OVN). For information about third-party SDNs, refer to the third-party SDN documentation. Prerequisites You have created a backup of the control plane nodes. For more information, see Section 2.2.5, "Creating a backup of the control plane nodes" . You have access to the backup node. If you use an OVS bridge for your network interfaces, you have access to the network configuration information that you set in the NETWORKING_PREPARATION_COMMANDS parameter. For more information, see see Section 2.2.4, "Configuring Open vSwitch (OVS) interfaces for backup" . Procedure Power off each control plane node. Ensure that the control plane nodes are powered off completely before you proceed. Boot each control plane node with the corresponding backup ISO image. When the Relax-and-Recover boot menu displays, on each control plane node, select Recover <control_plane_node> . Replace <control_plane_node> with the name of the corresponding control plane node. Note If your system uses UEFI, select the Relax-and-Recover (no Secure Boot) option. On each control plane node, log in as the root user and restore the node: The following message displays: When the control plane node restoration process completes, the console displays the following message: When the command line console is available, restore the config-drive partition of each control plane node: # once completed, restore the config-drive partition (which is ISO9660) RESCUE <control_plane_node>:~ USD dd if=/mnt/local/mnt/config-drive of=<config_drive_partition> Power off the node: Set the boot sequence to the normal boot device. On boot up, the node resumes its state. To ensure that the services are running correctly, check the status of pacemaker. Log in to a Controller node as the root user and enter the following command: To view the status of the overcloud, use the OpenStack Integration Test Suite (tempest). For more information, see Validating your OpenStack cloud with the Integration Test Suite (tempest) . Troubleshooting Clear resource alarms that are displayed by pcs status by running the following command: Clear STONITH fencing action errors that are displayed by pcs status by running the following commands: 2.3.3. Restoring the Galera cluster manually If the Galera cluster does not restore as part of the restoration procedure, you must restore Galera manually. Note In this procedure, you must perform some steps on one Controller node. Ensure that you perform these steps on the same Controller node as you go through the procedure. 
Procedure On Controller-0 , retrieve the Galera cluster virtual IP: USD sudo hiera -c /etc/puppet/hiera.yaml mysql_vip Disable the database connections through the virtual IP on all Controller nodes: USD sudo iptables -I INPUT -p tcp --destination-port 3306 -d <galera_cluster_vip> -j DROP Replace <galera_cluster_vip> with the IP address you retrieved in step 1. On Controller-0 , retrieve the MySQL root password: USD sudo hiera -c /etc/puppet/hiera.yaml mysql::server::root_password On Controller-0 , set the Galera resource to unmanaged mode: USD sudo pcs resource unmanage galera-bundle Stop the MySQL containers on all Controller nodes: USD sudo podman container stop USD(sudo podman container ls --all --format "{{.Names}}" --filter=name=galera-bundle) Move the current /var/lib/mysql directory aside on all Controller nodes: USD sudo mv /var/lib/mysql /var/lib/mysql-save Create the new directory /var/lib/mysql on all Controller nodes: USD sudo mkdir /var/lib/mysql USD sudo chown 42434:42434 /var/lib/mysql USD sudo chcon -t container_file_t /var/lib/mysql USD sudo chmod 0755 /var/lib/mysql USD sudo chcon -r object_r /var/lib/mysql USD sudo chcon -u system_u /var/lib/mysql Start the MySQL containers on all Controller nodes: USD sudo podman container start USD(sudo podman container ls --all --format "{{ .Names }}" --filter=name=galera-bundle) Create the MySQL database on all Controller nodes: USD sudo podman exec -i USD(sudo podman container ls --all --format "{{ .Names }}" \ --filter=name=galera-bundle) bash -c "mysql_install_db --datadir=/var/lib/mysql --user=mysql --log_error=/var/log/mysql/mysql_init.log" Start the database on all Controller nodes: USD sudo podman exec USD(sudo podman container ls --all --format "{{ .Names }}" \ --filter=name=galera-bundle) bash -c "mysqld_safe --skip-networking --wsrep-on=OFF --log-error=/var/log/mysql/mysql_safe.log" & Move the .my.cnf Galera configuration file on all Controller nodes: USD sudo podman exec USD(sudo podman container ls --all --format "{{ .Names }}" \ --filter=name=galera-bundle) bash -c "mv /root/.my.cnf /root/.my.cnf.bck" Reset the Galera root password on all Controller nodes: USD sudo podman exec USD(sudo podman container ls --all --format "{{ .Names }}" \ --filter=name=galera-bundle) bash -c "mysql -uroot -e'use mysql;set password for root@localhost = password(\"USDROOTPASSWORD\");flush privileges;'" Restore the .my.cnf Galera configuration file inside the Galera container on all Controller nodes: USD sudo podman exec USD(sudo podman container ls --all --format "{{ .Names }}" \ --filter=name=galera-bundle) bash -c "mv /root/.my.cnf.bck /root/.my.cnf" On Controller-0 , copy the backup database files to /var/lib/mysql : USD sudo cp openstack-backup-mysql.sql /var/lib/mysql USD sudo cp openstack-backup-mysql-grants.sql /var/lib/mysql Note The path to these files is /home/tripleo-admin/.
On Controller-0 , restore the MySQL database: USD sudo podman exec USD(sudo podman container ls --all --format "{{ .Names }}" \ --filter=name=galera-bundle) bash -c "mysql -u root -pUSDROOT_PASSWORD < \"/var/lib/mysql/USDBACKUP_FILE\" " USD sudo podman exec USD(sudo podman container ls --all --format "{{ .Names }}" \ --filter=name=galera-bundle) bash -c "mysql -u root -pUSDROOT_PASSWORD < \"/var/lib/mysql/USDBACKUP_GRANT_FILE\" " Shut down the databases on all Controller nodes: USD sudo podman exec USD(sudo podman container ls --all --format "{{ .Names }}" \ --filter=name=galera-bundle) bash -c "mysqladmin shutdown" On Controller-0 , start the bootstrap node: USD sudo podman exec USD(sudo podman container ls --all --format "{{ .Names }}" --filter=name=galera-bundle) \ /usr/bin/mysqld_safe --pid-file=/var/run/mysql/mysqld.pid --socket=/var/lib/mysql/mysql.sock --datadir=/var/lib/mysql \ --log-error=/var/log/mysql/mysql_cluster.log --user=mysql --open-files-limit=16384 \ --wsrep-cluster-address=gcomm:// & Verification: On Controller-0, check the status of the cluster: USD sudo podman exec USD(sudo podman container ls --all --format "{{ .Names }}" \ --filter=name=galera-bundle) bash -c "clustercheck" Ensure that the following message is displayed: "Galera cluster node is synced", otherwise you must recreate the node. On Controller-0 , retrieve the cluster address from the configuration: USD sudo podman exec USD(sudo podman container ls --all --format "{{ .Names }}" \ --filter=name=galera-bundle) bash -c "grep wsrep_cluster_address /etc/my.cnf.d/galera.cnf" | awk '{print USD3}' On each of the remaining Controller nodes, start the database and validate the cluster: Start the database: USD sudo podman exec USD(sudo podman container ls --all --format "{{ .Names }}" \ --filter=name=galera-bundle) /usr/bin/mysqld_safe --pid-file=/var/run/mysql/mysqld.pid --socket=/var/lib/mysql/mysql.sock \ --datadir=/var/lib/mysql --log-error=/var/log/mysql/mysql_cluster.log --user=mysql --open-files-limit=16384 \ --wsrep-cluster-address=USDCLUSTER_ADDRESS & Check the status of the MYSQL cluster: USD sudo podman exec USD(sudo podman container ls --all --format "{{ .Names }}" \ --filter=name=galera-bundle) bash -c "clustercheck" Ensure that the following message is displayed: "Galera cluster node is synced", otherwise you must recreate the node. Stop the MySQL container on all Controller nodes: USD sudo podman exec USD(sudo podman container ls --all --format "{{ .Names }}" --filter=name=galera-bundle) \ /usr/bin/mysqladmin -u root shutdown On all Controller nodes, remove the following firewall rule to allow database connections through the virtual IP address: USD sudo iptables -D INPUT -p tcp --destination-port 3306 -d <galera_cluster_vip> -j DROP Replace <galera_cluster_vip> with the IP address you retrieved in step 1. Restart the MySQL container on all Controller nodes: USD sudo podman container restart USD(sudo podman container ls --all --format "{{ .Names }}" --filter=name=galera-bundle) Restart the clustercheck container on all Controller nodes: USD sudo podman container restart USD(sudo podman container ls --all --format "{{ .Names }}" --filter=name=clustercheck) On Controller-0 , set the Galera resource to managed mode: USD sudo pcs resource manage galera-bundle Verification To ensure that services are running correctly, check the status of pacemaker: USD sudo pcs status To view the status of the overcloud, use the OpenStack Integration Test Suite (tempest). 
For more information, see Validating your OpenStack cloud with the Integration Test Suite (tempest) . If you suspect an issue with a particular node, check the state of the cluster with clustercheck : USD sudo podman exec clustercheck /usr/bin/clustercheck 2.3.4. Restoring the undercloud node database manually If the undercloud database does not restore as part of the undercloud restore process, you can restore the database manually. You can only restore the database if you previously created a standalone database backup. Prerequisites You have created a standalone backup of the undercloud database. For more information, see Section 2.1.6, "Optional: Creating a standalone database backup of the undercloud nodes" . Procedure Log in to the director undercloud node as the root user. Stop all tripleo services: [root@director ~]# systemctl stop tripleo_* Ensure that no containers are running on the server by entering the following command: [root@director ~]# podman ps If any containers are running, enter the following command to stop the containers: Create a backup of the current /var/lib/mysql directory and then delete the directory: [root@director ~]# cp -a /var/lib/mysql /var/lib/mysql_bck [root@director ~]# rm -rf /var/lib/mysql Recreate the database directory and set the SELinux attributes for the new directory: [root@director ~]# mkdir /var/lib/mysql [root@director ~]# chown 42434:42434 /var/lib/mysql [root@director ~]# chmod 0755 /var/lib/mysql [root@director ~]# chcon -t container_file_t /var/lib/mysql [root@director ~]# chcon -r object_r /var/lib/mysql [root@director ~]# chcon -u system_u /var/lib/mysql Create a local tag for the mariadb image. Replace <image_id> and <undercloud.ctlplane.example.com> with the values applicable in your environment: [root@director ~]# podman images | grep mariadb <undercloud.ctlplane.example.com>:8787/rh-osbs/rhosp16-openstack-mariadb 16.2_20210322.1 <image_id> 3 weeks ago 718 MB [root@director ~]# podman tag <image_id> mariadb [root@director ~]# podman images | grep maria localhost/mariadb latest <image_id> 3 weeks ago 718 MB <undercloud.ctlplane.example.com>:8787/rh-osbs/rhosp16-openstack-mariadb 16.2_20210322.1 <image_id> 3 weeks ago 718 MB Initialize the /var/lib/mysql directory with the container: [root@director ~]# podman run --net=host -v /var/lib/mysql:/var/lib/mysql localhost/mariadb mysql_install_db --datadir=/var/lib/mysql --user=mysql Note You can ignore the following error messages, which are generated because the directories do not exist in the container image used in RHOSP. 
Copy the database backup file that you want to import to the database: [root@director ~]# cp /root/undercloud-all-databases.sql /var/lib/mysql Start the database service to import the data: [root@director ~]# podman run --net=host -dt -v /var/lib/mysql:/var/lib/mysql localhost/mariadb /usr/libexec/mysqld Import the data and configure the max_allowed_packet parameter: Log in to the container and configure it: [root@director ~]# podman exec -it <container_id> /bin/bash ()[mysql@5a4e429c6f40 /]USD mysql -u root -e "set global max_allowed_packet = 1073741824;" ()[mysql@5a4e429c6f40 /]USD mysql -u root < /var/lib/mysql/undercloud-all-databases.sql ()[mysql@5a4e429c6f40 /]USD mysql -u root -e 'flush privileges' ()[mysql@5a4e429c6f40 /]USD exit exit Stop the container: [root@director ~]# podman stop <container_id> Check that no containers are running: [root@director ~]# podman ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES [root@director ~]# Restart all tripleo services: [root@director ~]# systemctl start multi-user.target | [
"source ~/stackrc",
"touch /home/stack/bar-vars.yaml",
"tripleo_backup_and_restore_server: <ip_address> tripleo_backup_and_restore_shared_storage_folder: <backup_dir>",
"tripleo_backup_and_restore_output_url: sftp://<user>:<password>@<backup_node>/ tripleo_backup_and_restore_backup_url: iso:///backup/",
"tripleo_backup_and_restore_crypt_backup_enabled: true tripleo_backup_and_restore_crypt_backup_password: <password>",
"[stack@undercloud-0 ~]USD source stackrc (undercloud) [stack@undercloud ~]USD",
"(undercloud) [stack@undercloud ~]USD cat <<'EOF'> ~/nfs-inventory.yaml [BackupNode] <backup_node> ansible_host=<ip_address> ansible_user=<user> EOF",
"(undercloud) [stack@undercloud ~]USD ssh-copy-id -i ~/.ssh/id_rsa.pub <backup_node>",
"(undercloud) [stack@undercloud ~]USD openstack undercloud backup --setup-nfs --extra-vars /home/stack/bar-vars.yaml --inventory /home/stack/nfs-inventory.yaml",
"[stack@undercloud-0 ~]USD source stackrc",
"(undercloud) [stack@undercloud ~]USD cp ~/overcloud-deploy/<stack>/tripleo-ansible-inventory.yaml ~/tripleo-inventory.yaml",
"(undercloud) [stack@undercloud ~]USD openstack undercloud backup --setup-rear --extra-vars /home/stack/bar-vars.yaml --inventory /home/stack/tripleo-inventory.yaml",
"sudo dnf install dosfstools efibootmgr",
"openstack undercloud backup --db-only",
"NETWORKING_PREPARATION_COMMANDS=('<command_1>' '<command_2>' ...')",
"ip -4 addr ls br-ctlplane 8: br-ctlplane: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000 inet 172.16.9.1/24 brd 172.16.9.255 scope global br-ctlplane valid_lft forever preferred_lft forever sudo ovs-vsctl show Bridge br-ctlplane Controller \"tcp:127.0.0.1:6633\" is_connected: true fail_mode: secure datapath_type: system Port eth0 Interface eth0 Port br-ctlplane Interface br-ctlplane type: internal Port phy-br-ctlplane Interface phy-br-ctlplane type: patch options: {peer=int-br-ctlplane}",
"NETWORKING_PREPARATION_COMMANDS=('ip link add br-ctlplane type bridge' 'ip link set br-ctlplane up' 'ip link set eth0 up' 'ip link set eth0 master br-ctlplane' 'ip addr add 172.16.9.1/24 dev br-ctlplane')",
"[stack@undercloud ~]USD PASSWORD=USD(sudo /bin/hiera -c /etc/puppet/hiera.yaml mysql::server::root_password)",
"[stack@undercloud ~]USD sudo podman exec mysql bash -c \"mysqldump -uroot -pUSDPASSWORD --opt --all-databases\" | sudo tee /root/undercloud-all-databases.sql",
"[stack@undercloud-0 ~]USD source stackrc",
"(undercloud) [stack@undercloud ~]USD openstack undercloud backup --inventory /home/stack/tripleo-inventory.yaml",
"openstack undercloud backup --cron",
"openstack undercloud backup --cron --extra-vars '{\"tripleo_backup_and_restore_cron\": \"0 0 * * 0\"}'",
"openstack undercloud backup --cron --extra-vars '{\"tripleo_backup_and_restore_cron_extra\":\"--extra-vars bar-vars.yaml --inventory /home/stack/tripleo-inventory.yaml\"}'",
"openstack undercloud backup --cron --extra-vars '{\"tripleo_backup_and_restore_cron_user\": \"root\"}",
"[stack@undercloud-0 ~]USD source stackrc (undercloud) [stack@undercloud ~]USD",
"(undercloud) [stack@undercloud ~]USD cat <<'EOF'> ~/nfs-inventory.yaml [BackupNode] <backup_node> ansible_host=<ip_address> ansible_user=<user> EOF",
"(undercloud) [stack@undercloud ~]USD ssh-copy-id -i ~/.ssh/id_rsa.pub <backup_node>",
"(undercloud) [stack@undercloud ~]USD openstack undercloud backup --setup-nfs --extra-vars /home/stack/bar-vars.yaml --inventory /home/stack/nfs-inventory.yaml",
"[stack@undercloud-0 ~]USD source stackrc",
"(undercloud) [stack@undercloud ~]USD cp ~/overcloud-deploy/<stack>/tripleo-ansible-inventory.yaml ~/tripleo-inventory.yaml",
"tripleo_backup_and_restore_server: <ip_address> tripleo_backup_and_restore_shared_storage_folder: <backup_dir>",
"tripleo_backup_and_restore_output_url: sftp://<user>:<password>@<backup_node>/ tripleo_backup_and_restore_backup_url: iso:///backup/",
"(undercloud) [stack@undercloud ~]USD openstack overcloud backup --setup-rear --extra-vars /home/stack/bar-vars.yaml --inventory /home/stack/tripleo-inventory.yaml",
"sudo dnf install dosfstools efibootmgr",
"NETWORKING_PREPARATION_COMMANDS=('<command_1>' '<command_2>' ...')",
"ip -4 addr ls br-ctlplane 8: br-ctlplane: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000 inet 172.16.9.1/24 brd 172.16.9.255 scope global br-ctlplane valid_lft forever preferred_lft forever sudo ovs-vsctl show Bridge br-ctlplane Controller \"tcp:127.0.0.1:6633\" is_connected: true fail_mode: secure datapath_type: system Port eth0 Interface eth0 Port br-ctlplane Interface br-ctlplane type: internal Port phy-br-ctlplane Interface phy-br-ctlplane type: patch options: {peer=int-br-ctlplane}",
"NETWORKING_PREPARATION_COMMANDS=('ip link add br-ctlplane type bridge' 'ip link set br-ctlplane up' 'ip link set eth0 up' 'ip link set eth0 master br-ctlplane' 'ip addr add 172.16.9.1/24 dev br-ctlplane')",
"[stack@undercloud-0 ~]USD blkid -t LABEL=\"config-2\" -odevice",
"dd if=<config_drive_partition> of=/mnt/config-drive",
"[stack@undercloud-0 ~]USD source stackrc",
"(undercloud) [stack@undercloud ~]USD openstack overcloud backup --inventory /home/stack/tripleo-inventory.yaml",
"openstack overcloud backup --cron",
"openstack overcloud backup --cron --extra-vars '{\"tripleo_backup_and_restore_cron\": \"0 0 * * 0\"}'",
"openstack overcloud backup --cron --extra-vars '{\"tripleo_backup_and_restore_cron_extra\":\"--extra-vars bar-vars.yaml --inventory /home/stack/tripleo-inventory.yaml\"}'",
"openstack overcloud backup --cron --extra-vars '{\"tripleo_backup_and_restore_cron_user\": \"root\"}",
"dd if=backup.tar.gz | /usr/bin/openssl des3 -d -k \"<encryption key>\" | tar -C <backup_location> -xzvf - '*.conf'",
"Welcome to Relax-and-Recover. Run \"rear recover\" to restore your system! RESCUE <undercloud_node>:~ # rear recover",
"Finished recovering your system Exiting rear recover Running exit tasks",
"RESCUE <undercloud_node>:~ # poweroff",
"Welcome to Relax-and-Recover. Run \"rear recover\" to restore your system! RESCUE <control_plane_node>:~ # rear recover",
"Finished recovering your system Exiting rear recover Running exit tasks",
"once completed, restore the config-drive partition (which is ISO9660) RESCUE <control_plane_node>:~ USD dd if=/mnt/local/mnt/config-drive of=<config_drive_partition>",
"RESCUE <control_plane_node>:~ # poweroff",
"pcs status",
"pcs resource clean",
"pcs resource clean pcs stonith history cleanup",
"sudo hiera -c /etc/puppet/hiera.yaml mysql_vip",
"sudo iptables -I INPUT -p tcp --destination-port 3306 -d USDMYSQL_VIP=<galera_cluster_vip> -j DROP",
"sudo hiera -c /etc/puppet/hiera.yaml mysql::server::root_password",
"sudo pcs resource unmanage galera-bundle",
"sudo podman container stop USD(sudo podman container ls --all --format \"{{.Names}}\" --filter=name=galera-bundle)",
"sudo mv /var/lib/mysql /var/lib/mysql-save",
"sudo mkdir /var/lib/mysql sudo chown 42434:42434 /var/lib/mysql sudo chcon -t container_file_t /var/lib/mysql sudo chmod 0755 /var/lib/mysql sudo chcon -r object_r /var/lib/mysql sudo chcon -u system_u /var/lib/mysql",
"sudo podman container start USD(sudo podman container ls --all --format \"{{ .Names }}\" --filter=name=galera-bundle)",
"sudo podman exec -i USD(sudo podman container ls --all --format \"{{ .Names }}\" --filter=name=galera-bundle) bash -c \"mysql_install_db --datadir=/var/lib/mysql --user=mysql --log_error=/var/log/mysql/mysql_init.log\"",
"sudo podman exec USD(sudo podman container ls --all --format \"{{ .Names }}\" --filter=name=galera-bundle) bash -c \"mysqld_safe --skip-networking --wsrep-on=OFF --log-error=/var/log/mysql/mysql_safe.log\" &",
"sudo podman exec USD(sudo podman container ls --all --format \"{{ .Names }}\" --filter=name=galera-bundle) bash -c \"mv /root/.my.cnf /root/.my.cnf.bck\"",
"sudo podman exec USD(sudo podman container ls --all --format \"{{ .Names }}\" --filter=name=galera-bundle) bash -c \"mysql -uroot -e'use mysql;set password for root@localhost = password(\\\"USDROOTPASSWORD\\\");flush privileges;'\"",
"sudo podman exec USD(sudo podman container ls --all --format \"{{ .Names }}\" --filter=name=galera-bundle) bash -c \"mv /root/.my.cnf.bck /root/.my.cnf\"",
"sudo cp openstack-backup-mysql.sql /var/lib/mysql sudo cp openstack-backup-mysql-grants.sql /var/lib/mysql",
"sudo podman exec USD(sudo podman container ls --all --format \"{{ .Names }}\" --filter=name=galera-bundle) bash -c \"mysql -u root -pUSDROOT_PASSWORD < \\\"/var/lib/mysql/USDBACKUP_FILE\\\" \" sudo podman exec USD(sudo podman container ls --all --format \"{{ .Names }}\" --filter=name=galera-bundle) bash -c \"mysql -u root -pUSDROOT_PASSWORD < \\\"/var/lib/mysql/USDBACKUP_GRANT_FILE\\\" \"",
"sudo podman exec USD(sudo podman container ls --all --format \"{{ .Names }}\" --filter=name=galera-bundle) bash -c \"mysqladmin shutdown\"",
"sudo podman exec USD(sudo podman container ls --all --format \"{{ .Names }}\" --filter=name=galera-bundle) /usr/bin/mysqld_safe --pid-file=/var/run/mysql/mysqld.pid --socket=/var/lib/mysql/mysql.sock --datadir=/var/lib/mysql --log-error=/var/log/mysql/mysql_cluster.log --user=mysql --open-files-limit=16384 --wsrep-cluster-address=gcomm:// &",
"sudo podman exec USD(sudo podman container ls --all --format \"{{ .Names }}\" --filter=name=galera-bundle) bash -c \"clustercheck\"",
"sudo podman exec USD(sudo podman container ls --all --format \"{{ .Names }}\" --filter=name=galera-bundle) bash -c \"grep wsrep_cluster_address /etc/my.cnf.d/galera.cnf\" | awk '{print USD3}'",
"sudo podman exec USD(sudo podman container ls --all --format \"{{ .Names }}\" --filter=name=galera-bundle) /usr/bin/mysqld_safe --pid-file=/var/run/mysql/mysqld.pid --socket=/var/lib/mysql/mysql.sock --datadir=/var/lib/mysql --log-error=/var/log/mysql/mysql_cluster.log --user=mysql --open-files-limit=16384 --wsrep-cluster-address=USDCLUSTER_ADDRESS &",
"sudo podman exec USD(sudo podman container ls --all --format \"{{ .Names }}\" --filter=name=galera-bundle) bash -c \"clustercheck\"",
"sudo podman exec USD(sudo podman container ls --all --format \"{{ .Names }}\" --filter=name=galera-bundle) /usr/bin/mysqladmin -u root shutdown",
"sudo iptables -D INPUT -p tcp --destination-port 3306 -d <galera_cluster_vip> -j DROP",
"sudo podman container restart USD(sudo podman container ls --all --format \"{{ .Names }}\" --filter=name=galera-bundle)",
"sudo podman container restart USD(sudo podman container ls --all --format \"{{ .Names }}\" --filter=name=clustercheck)",
"sudo pcs resource manage galera-bundle",
"sudo pcs status",
"sudo podman exec clustercheck /usr/bin/clustercheck",
"systemctl stop tripleo_*",
"podman ps",
"podman stop <container_name>",
"cp -a /var/lib/mysql /var/lib/mysql_bck rm -rf /var/lib/mysql",
"mkdir /var/lib/mysql chown 42434:42434 /var/lib/mysql chmod 0755 /var/lib/mysql chcon -t container_file_t /var/lib/mysql chcon -r object_r /var/lib/mysql chcon -u system_u /var/lib/mysql",
"podman images | grep mariadb <undercloud.ctlplane.example.com>:8787/rh-osbs/rhosp16-openstack-mariadb 16.2_20210322.1 <image_id> 3 weeks ago 718 MB",
"podman tag <image_id> mariadb",
"podman images | grep maria localhost/mariadb latest <image_id> 3 weeks ago 718 MB <undercloud.ctlplane.example.com>:8787/rh-osbs/rhosp16-openstack-mariadb 16.2_20210322.1 <image_id> 3 weeks ago 718 MB",
"podman run --net=host -v /var/lib/mysql:/var/lib/mysql localhost/mariadb mysql_install_db --datadir=/var/lib/mysql --user=mysql",
"chown: cannot access '/usr/lib64/mariadb/plugin/auth_pam_tool_dir/auth_pam_tool': No such file or directory Couldn't set an owner to '/usr/lib64/mariadb/plugin/auth_pam_tool_dir/auth_pam_tool'. It must be root, the PAM authentication plugin doesn't work otherwise.",
"cp /root/undercloud-all-databases.sql /var/lib/mysql",
"podman run --net=host -dt -v /var/lib/mysql:/var/lib/mysql localhost/mariadb /usr/libexec/mysqld",
"podman exec -it <container_id> /bin/bash ()[mysql@5a4e429c6f40 /]USD mysql -u root -e \"set global max_allowed_packet = 1073741824;\" ()[mysql@5a4e429c6f40 /]USD mysql -u root < /var/lib/mysql/undercloud-all-databases.sql ()[mysql@5a4e429c6f40 /]USD mysql -u root -e 'flush privileges' ()[mysql@5a4e429c6f40 /]USD exit exit",
"podman stop <container_id>",
"podman ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES",
"systemctl start multi-user.target"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/backing_up_and_restoring_the_undercloud_and_control_plane_nodes/assembly_backing-up-the-undercloud-and-the-control-plane-nodes-using-the-relax-and-recover-tool_br-undercloud-ctlplane |
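As a follow-up to the manual undercloud database restore described above, a short smoke test can confirm that the services and the restored database came back as expected. The following is a minimal sketch only: it reuses the service, container, and hiera names from the procedure above and assumes that it runs as the root user on the director node.
# Minimal sketch: smoke test after the manual undercloud database restore.
# Assumptions: run as root on the director node; the service, container, and
# hiera names are the ones used in the procedure above.
systemctl list-units 'tripleo_*' --state=running --no-pager
podman ps --format '{{.Names}} {{.Status}}'
PASSWORD=$(/bin/hiera -c /etc/puppet/hiera.yaml mysql::server::root_password)
podman exec mysql mysql -uroot -p"$PASSWORD" -e 'SHOW DATABASES;'
If the expected databases are listed and the tripleo services are running, the restore completed cleanly.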
Chapter 5. The sepolicy Suite | Chapter 5. The sepolicy Suite The sepolicy utility provides a suite of features to query the installed SELinux policy. These features are either new or were previously provided by separate utilities, such as sepolgen or setrans . The suite allows you to generate transition reports, man pages, or even new policy modules, thus giving users easier access and better understanding of the SELinux policy. The policycoreutils-devel package provides sepolicy . Enter the following command as the root user to install sepolicy : The sepolicy suite provides the following features that are invoked as command-line parameters: Table 5.1. The sepolicy Features Feature Description booleans Query the SELinux Policy to see description of Booleans communicate Query the SELinux policy to see if domains can communicate with each other generate Generate an SELinux policy module template gui Graphical User Interface for SELinux Policy interface List SELinux Policy interfaces manpage Generate SELinux man pages network Query SELinux policy network information transition Query SELinux policy and generate a process transition report 5.1. The sepolicy Python Bindings In versions of Red Hat Enterprise Linux, the setools package included the sesearch and seinfo utilities. The sesearch utility is used for searching rules in a SELinux policy while the seinfo utility allows you to query various other components in the policy. In Red Hat Enterprise Linux 7, Python bindings for sesearch and seinfo have been added so that you can use the functionality of these utilities through the sepolicy suite. See the example below: | [
"~]# yum install policycoreutils-devel",
"> python >>> import sepolicy >>> sepolicy.info(sepolicy.ATTRIBUTE) Returns a dictionary of all information about SELinux Attributes >>>sepolicy.search([sepolicy.ALLOW]) Returns a dictionary of all allow rules in the policy."
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/chap-security-enhanced_linux-the_sepolicy_suite |
Chapter 7. Installing a cluster on OpenStack in a restricted network | Chapter 7. Installing a cluster on OpenStack in a restricted network In OpenShift Container Platform 4.14, you can install a cluster on Red Hat OpenStack Platform (RHOSP) in a restricted network by creating an internal mirror of the installation release content. 7.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You verified that OpenShift Container Platform 4.14 is compatible with your RHOSP version by using the Supported platforms for OpenShift clusters section. You can also compare platform support across different versions by viewing the OpenShift Container Platform on RHOSP support matrix . You created a registry on your mirror host and obtained the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. You understand performance and scalability practices for cluster scaling, control plane sizing, and etcd. For more information, see Recommended practices for scaling the cluster . You have the metadata service enabled in RHOSP. 7.2. About installations in restricted networks In OpenShift Container Platform 4.14, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. 7.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 7.3. Resource guidelines for installing OpenShift Container Platform on RHOSP To support an OpenShift Container Platform installation, your Red Hat OpenStack Platform (RHOSP) quota must meet the following requirements: Table 7.1. Recommended resources for a default OpenShift Container Platform cluster on RHOSP Resource Value Floating IP addresses 3 Ports 15 Routers 1 Subnets 1 RAM 88 GB vCPUs 22 Volume storage 275 GB Instances 7 Security groups 3 Security group rules 60 Server groups 2 - plus 1 for each additional availability zone in each machine pool A cluster might function with fewer than recommended resources, but its performance is not guaranteed. 
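Before you start an installation, it can help to compare the figures in Table 7.1 with the quota that is actually available to your RHOSP project. The following shell sketch is illustrative only: <project> is a placeholder, and the exact field names in the command output vary between RHOSP releases.
# Illustrative sketch: review the quota and current usage for the target RHOSP project.
# <project> is a placeholder; output field names vary between RHOSP releases.
openstack quota show <project>
openstack limits show --absolute
openstack floating ip list -f value | wc -l
If any limit is below the values in Table 7.1, ask an administrator to raise it, for example with the openstack quota set command shown in the note that follows.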
Important If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry. Note By default, your security group and security group rule quotas might be low. If you encounter problems, run openstack quota set --secgroups 3 --secgroup-rules 60 <project> as an administrator to increase them. An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine. 7.3.1. Control plane machines By default, the OpenShift Container Platform installation process creates three control plane machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 7.3.2. Compute machines By default, the OpenShift Container Platform installation process creates three compute machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 8 GB memory and 2 vCPUs At least 100 GB storage space from the RHOSP quota Tip Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can. 7.3.3. Bootstrap machine During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After the production control plane is ready, the bootstrap machine is deprovisioned. The bootstrap machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 7.4. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.14, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. 7.5. Enabling Swift on RHOSP Swift is operated by a user account with the swiftoperator role. Add the role to an account before you run the installation program. Important If the Red Hat OpenStack Platform (RHOSP) object storage service , commonly known as Swift, is available, OpenShift Container Platform uses it as the image registry storage. If it is unavailable, the installation program relies on the RHOSP block storage service, commonly known as Cinder. If Swift is present and you want to use it, you must enable access to it. If it is not present, or if you do not want to use it, skip this section. Important RHOSP 17 sets the rgw_max_attr_size parameter of Ceph RGW to 256 characters. This setting causes issues with uploading container images to the OpenShift Container Platform registry. You must set the value of rgw_max_attr_size to at least 1024 characters. Before installation, check if your RHOSP deployment is affected by this problem. If it is, reconfigure Ceph RGW. Prerequisites You have a RHOSP administrator account on the target environment. The Swift service is installed. 
On Ceph RGW , the account in url option is enabled. Procedure To enable Swift on RHOSP: As an administrator in the RHOSP CLI, add the swiftoperator role to the account that will access Swift: USD openstack role add --user <user> --project <project> swiftoperator Your RHOSP deployment can now use Swift for the image registry. 7.6. Defining parameters for the installation program The OpenShift Container Platform installation program relies on a file that is called clouds.yaml . The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, log in information, and authorization service URLs. Procedure Create the clouds.yaml file: If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it. Important Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml . If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml , see Config files in the RHOSP documentation. clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0' If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication: Copy the certificate authority file to your machine. Add the cacerts key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate: clouds: shiftstack: ... cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem" Tip After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config Place the clouds.yaml file in one of the following locations: The value of the OS_CLIENT_CONFIG_FILE environment variable The current directory A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml The installation program searches for clouds.yaml in that order. 7.7. Setting OpenStack Cloud Controller Manager options Optionally, you can edit the OpenStack Cloud Controller Manager (CCM) configuration for your cluster. This configuration controls how OpenShift Container Platform interacts with Red Hat OpenStack Platform (RHOSP). For a complete list of configuration parameters, see the "OpenStack Cloud Controller Manager reference guide" page in the "Installing on OpenStack" documentation. Procedure If you have not already generated manifest files for your cluster, generate them by running the following command: USD openshift-install --dir <destination_directory> create manifests In a text editor, open the cloud-provider configuration manifest file. For example: USD vi openshift/manifests/cloud-provider-config.yaml Modify the options according to the CCM reference guide. Configuring Octavia for load balancing is a common case for clusters that do not use Kuryr. For example: #... 
[LoadBalancer] lb-provider = "amphora" 1 floating-network-id="d3deb660-4190-40a3-91f1-37326fe6ec4a" 2 create-monitor = True 3 monitor-delay = 10s 4 monitor-timeout = 10s 5 monitor-max-retries = 1 6 #... 1 This property sets the Octavia provider that your load balancer uses. It accepts "ovn" or "amphora" as values. If you choose to use OVN, you must also set lb-method to SOURCE_IP_PORT . 2 This property is required if you want to use multiple external networks with your cluster. The cloud provider creates floating IP addresses on the network that is specified here. 3 This property controls whether the cloud provider creates health monitors for Octavia load balancers. Set the value to True to create health monitors. As of RHOSP 16.2, this feature is only available for the Amphora provider. 4 This property sets the frequency with which endpoints are monitored. The value must be in the time.ParseDuration() format. This property is required if the value of the create-monitor property is True . 5 This property sets the time that monitoring requests are open before timing out. The value must be in the time.ParseDuration() format. This property is required if the value of the create-monitor property is True . 6 This property defines how many successful monitoring requests are required before a load balancer is marked as online. The value must be an integer. This property is required if the value of the create-monitor property is True . Important Prior to saving your changes, verify that the file is structured correctly. Clusters might fail if properties are not placed in the appropriate section. Important You must set the value of the create-monitor property to True if you use services that have the value of the .spec.externalTrafficPolicy property set to Local . The OVN Octavia provider in RHOSP 16.2 does not support health monitors. Therefore, services that have ETP parameter values set to Local might not respond when the lb-provider value is set to "ovn" . Important For installations that use Kuryr, Kuryr handles relevant services. There is no need to configure Octavia load balancing in the cloud provider. Save the changes to the file and proceed with installation. Tip You can update your cloud provider configuration after you run the installer. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config After you save your changes, your cluster will take some time to reconfigure itself. The process is complete if none of your nodes have a SchedulingDisabled status. 7.8. Creating the RHCOS image for restricted network installations Download the Red Hat Enterprise Linux CoreOS (RHCOS) image to install OpenShift Container Platform on a restricted network Red Hat OpenStack Platform (RHOSP) environment. Prerequisites Obtain the OpenShift Container Platform installation program. For a restricted network installation, the program is on your mirror registry host. Procedure Log in to the Red Hat Customer Portal's Product Downloads page . Under Version , select the most recent release of OpenShift Container Platform 4.14 for RHEL 8. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Download the Red Hat Enterprise Linux CoreOS (RHCOS) - OpenStack Image (QCOW) image. Decompress the image. 
Note You must decompress the image before the cluster can use it. The name of the downloaded file might not contain a compression extension, like .gz or .tgz . To find out if or how the file is compressed, in a command line, enter: Upload the image that you decompressed to a location that is accessible from the bastion server, like Glance. For example: Important Depending on your RHOSP environment, you might be able to upload the image in either .raw or .qcow2 formats . If you use Ceph, you must use the .raw format. Warning If the installation program finds multiple images with the same name, it chooses one of them at random. To avoid this behavior, create unique names for resources in RHOSP. The image is now available for a restricted installation. Note the image name or location for use in OpenShift Container Platform deployment. 7.9. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP). Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. You have the imageContentSources values that were generated during mirror registry creation. You have obtained the contents of the certificate for your mirror registry. You have retrieved a Red Hat Enterprise Linux CoreOS (RHCOS) image and uploaded it to an accessible location. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Note Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command: USD rm -rf ~/.powervs At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select openstack as the platform to target. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for installing the cluster. Specify the floating IP address to use for external access to the OpenShift API. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of this base and will also include the cluster name. Enter a name for your cluster. The name must be 14 or fewer characters long. 
In the install-config.yaml file, set the value of platform.openstack.clusterOSImage to the image location or name. For example: platform: openstack: clusterOSImage: http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d Edit the install-config.yaml file to give the additional information that is required for an installation in a restricted network. Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}' For <mirror_host_name> , specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value. additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry. Add the image content resources, which resemble the following YAML excerpt: imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release For these values, use the imageContentSources that you recorded during mirror registry creation. Optional: Set the publishing strategy to Internal : publish: Internal By setting this option, you create an internal Ingress Controller and a private load balancer. Make any other modifications to the install-config.yaml file that you require. You can find more information about the available parameters in the Installation configuration parameters section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for OpenStack 7.9.1. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Note Kuryr installations default to HTTP proxies. Prerequisites For Kuryr installations on restricted networks that use the Proxy object, the proxy must be able to reply to the router that the cluster uses. To add a static route for the proxy configuration, from a command line as the root user, enter: USD ip route add <cluster_network_cidr> via <installer_subnet_gateway> The restricted subnet must have a gateway that is defined and available to be linked to the Router resource that Kuryr creates. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. 
You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 7.9.2. Sample customized install-config.yaml file for restricted OpenStack installations This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform (RHOSP) customization options. Important This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program. 
apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OVNKubernetes platform: openstack: region: region1 cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: - mirrors: - <mirror_registry>/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_registry>/<repo_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 7.10. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. 
If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 7.11. Enabling access to the environment At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments. You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally. 7.11.1. Enabling access with floating IP addresses Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API and cluster applications. Procedure Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP: USD openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network> Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP: USD openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network> Add records that follow these patterns to your DNS server for the API and Ingress FIPs: api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP> Note If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file: <api_floating_ip> api.<cluster_name>.<base_domain> <application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain> <application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain> application_floating_ip integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain> The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. You can also use the kubectl or oc . You can access the user applications by using the additional entries pointing to the <application_floating_ip>. This action makes the API and applications accessible to only you, which is not suitable for production deployment, but does allow installation for development and testing. Add the FIPs to the install-config.yaml file as the values of the following parameters: platform.openstack.ingressFloatingIP platform.openstack.apiFloatingIP If you use these values, you must also enter an external network as the value of the platform.openstack.externalNetwork parameter in the install-config.yaml file. Tip You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration. 7.11.2. 
Completing installation without floating IP addresses You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses. In the install-config.yaml file, do not define the following parameters: platform.openstack.ingressFloatingIP platform.openstack.apiFloatingIP If you cannot provide an external network, you can also leave platform.openstack.externalNetwork blank. If you do not provide a value for platform.openstack.externalNetwork , a router is not created for you, and, without additional action, the installer will fail to retrieve an image from Glance. You must configure external connectivity on your own. If you run the installer from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines. Note You can enable name resolution by creating DNS records for the API and Ingress ports. For example: api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP> If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing. 7.12. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. 
If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 7.13. Verifying cluster status You can verify your OpenShift Container Platform cluster's status during or after installation. Procedure In the cluster environment, export the administrator's kubeconfig file: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. View the control plane and compute machines created after a deployment: USD oc get nodes View your cluster's version: USD oc get clusterversion View your Operators' status: USD oc get clusteroperator View all running pods in the cluster: USD oc get pods -A 7.14. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 7.15. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 7.16. 
Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 7.17. steps Customize your cluster . If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores . If necessary, you can opt out of remote health reporting . If necessary, see Registering your disconnected cluster Configure image streams for the Cluster Samples Operator and the must-gather tool. Learn how to use Operator Lifecycle Manager (OLM) on restricted networks . If you did not configure RHOSP to accept application traffic over floating IP addresses, configure RHOSP access with floating IP addresses . | [
"openstack role add --user <user> --project <project> swiftoperator",
"clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'",
"clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"",
"oc edit configmap -n openshift-config cloud-provider-config",
"openshift-install --dir <destination_directory> create manifests",
"vi openshift/manifests/cloud-provider-config.yaml",
"# [LoadBalancer] lb-provider = \"amphora\" 1 floating-network-id=\"d3deb660-4190-40a3-91f1-37326fe6ec4a\" 2 create-monitor = True 3 monitor-delay = 10s 4 monitor-timeout = 10s 5 monitor-max-retries = 1 6 #",
"oc edit configmap -n openshift-config cloud-provider-config",
"file <name_of_downloaded_file>",
"openstack image create --file rhcos-44.81.202003110027-0-openstack.x86_64.qcow2 --disk-format qcow2 rhcos-USD{RHCOS_VERSION}",
"./openshift-install create install-config --dir <installation_directory> 1",
"rm -rf ~/.powervs",
"platform: openstack: clusterOSImage: http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d",
"pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'",
"additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----",
"imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release",
"publish: Internal",
"ip route add <cluster_network_cidr> via <installer_subnet_gateway>",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OVNKubernetes platform: openstack: region: region1 cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: - mirrors: - <mirror_registry>/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_registry>/<repo_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>",
"api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>",
"api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc get nodes",
"oc get clusterversion",
"oc get clusteroperator",
"oc get pods -A",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'"
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_openstack/installing-openstack-installer-restricted |
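A note for the installation procedure above: if the cluster is restarted after the Ignition-generated certificates expire, recovering the kubelet certificates requires manually approving the pending node-bootstrapper certificate signing requests. The commands below are a minimal sketch of that check, assuming the kubeconfig exported during installation is in use; the CSR name is a placeholder reported by the first command.

export KUBECONFIG=<installation_directory>/auth/kubeconfig
# List certificate signing requests and look for entries in the Pending state
oc get csr
# Approve a pending node-bootstrapper CSR by name
oc adm certificate approve <csr_name>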
10.2. About Host Entry Configuration Properties | 10.2. About Host Entry Configuration Properties A host entry can contain information about the host that is outside its system configuration, such as its physical location, its MAC address, and keys and certificates. This information can be set when the host entry is created if it is created manually; otherwise, most of that information needs to be added to the host entry after the host is enrolled in the domain. Table 10.1. Host Configuration Properties UI Field Command-Line Option Description Description --desc= description A description of the host. Locality --locality= locality The geographic location of the host. Location --location= location The physical location of the host, such as its data center rack. Platform --platform= string The host hardware or architecture. Operating system --os= string The operating system and version for the host. MAC address --macaddress= address The MAC address for the host. This is a multi-valued attribute. The MAC address is used by the NIS plug-in to create a NIS ethers map for the host. SSH public keys --sshpubkey= string The full SSH public key for the host. This is a multi-valued attribute, so multiple keys can be set. Principal name (not editable) --principalname= principal The Kerberos principal name for the host. This defaults to the hostname during the client installation, unless a different principal is explicitly set in the -p . This can be changed using the command-line tools, but cannot be changed in the UI. Set One-Time Password --password= string Sets a password for the host which can be used in bulk enrollment. - --random Generates a random password to be used in bulk enrollment. - --certificate= string A certificate blob for the host. - --updatedns An attribute switch which sets whether the host can dynamically update its DNS entries if its IP address changes. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/host-attr |
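As an illustration of how the options listed above combine in practice, the following ipa host-mod invocation sets several optional attributes on an already-enrolled host. This is a sketch: the host name and attribute values are placeholders, and only options shown in the table are used.

# Update descriptive and hardware attributes on an existing host entry
ipa host-mod client.example.com --desc="Web front end" --location="DC1, rack 3" --os="RHEL 6" --macaddress=00:1A:2B:3C:4D:5E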
Chapter 4. Planning your DNS services and host names | Chapter 4. Planning your DNS services and host names Identity Management (IdM) provides different types of DNS configurations in the IdM server. The following sections describe them and provide advice on how to determine which is the best for your use case. 4.1. DNS services available in an IdM server You can install an Identity Management (IdM) server with or without integrated DNS. Table 4.1. Comparing IdM with integrated DNS and without integrated DNS With integrated DNS Without integrated DNS Overview: IdM runs its own DNS service for the IdM domain. IdM uses DNS services provided by an external DNS server. Limitations: The integrated DNS server provided by IdM only supports features related to IdM deployment and maintenance. It does not support some of the advanced features of a general-purpose DNS server. Specific limitations are as follows: IdM DNS nameserver must be authoritative for its zones. The supported record types are A, AAAA, A6, AFSDB, CERT, CNAME, DLV, DNAME, DS, KX, LOC, MX, NAPTR, NS, PTR, SRV, SSHFP, TLSA, TXT, and URI. Split DNS, also known as split-view, split-horizon, split-brain DNS, is not supported. There are known issues if the DNS nameserver restarts in a multi-core environment. For example, if log rotation causes a nameserver to restart, the nameserver might crash. If you must use a multi-core setup, allow systemd to restart the nameserver after a failure occurs. DNS is not integrated with native IdM tools. For example, IdM does not update the DNS records automatically after a change in the topology. Works best for: Basic usage within the IdM deployment. When the IdM server manages DNS, DNS is tightly integrated with native IdM tools, which enables automating some of the DNS record management tasks. Environments where advanced DNS features beyond the scope of the IdM DNS are needed. Environments with a well-established DNS infrastructure where you want to keep using an external DNS server. Even if an Identity Management server is used as a primary DNS server, other external DNS servers can still be used as secondary servers. For example, if your environment is already using another DNS server, such as a DNS server integrated with Active Directory (AD), you can delegate only the IdM primary domain to the DNS integrated with IdM. It is not necessary to migrate DNS zones to the IdM DNS. Note If you need to issue certificates for IdM clients with an IP address in the Subject Alternative Name (SAN) extension, you must use the IdM integrated DNS service. 4.2. Guidelines for planning the DNS domain name and Kerberos realm name When installing the first Identity Management (IdM) server, the installation prompts for a primary DNS name of the IdM domain and Kerberos realm name. These guidelines can help you set the names correctly. Warning You will not be able to change the IdM primary domain name and Kerberos realm name after the server is already installed. Do not expect to be able to move from a testing environment to a production environment by changing the names, for example from lab.example.com to production.example.com . A separate DNS domain for service records Ensure that the primary DNS domain used for IdM is not shared with any other system. This helps avoid conflicts on the DNS level. Proper DNS domain name delegation Ensure you have valid delegation in the public DNS tree for the DNS domain. Do not use a domain name that is not delegated to you, not even on a private network. 
Multi-label DNS domain Do not use single-label domain names, for example .company . The IdM domain must be composed of one or more subdomains and a top level domain, for example example.com or company.example.com . A unique Kerberos realm name Ensure the realm name is not in conflict with any other existing Kerberos realm name, such as a name used by Active Directory (AD). Kerberos realm name as an upper-case version of the primary DNS name Consider setting the realm name to an upper-case ( EXAMPLE.COM ) version of the primary DNS domain name ( example.com ). Warning If you do not set the Kerberos realm name to be the upper-case version of the primary DNS name, you will not be able to use AD trusts. Additional notes on planning the DNS domain name and Kerberos realm name One IdM deployment always represents one Kerberos realm. You can join IdM clients from multiple distinct DNS domains ( example.com , example.net , example.org ) to a single Kerberos realm ( EXAMPLE.COM ). IdM clients do not need to be in the primary DNS domain. For example, if the IdM domain is idm.example.com , the clients can be in the clients.example.com domain, but clear mapping must be configured between the DNS domain and the Kerberos realm. Note The standard method to create the mapping is using the _kerberos TXT DNS records. The IdM integrated DNS adds these records automatically. Planning DNS forwarding If you want to use only one forwarder for your entire IdM deployment, configure a global forwarder . If your company is spread over multiple sites in geographically distant regions, global forwarders might be impractical. Configure per-server forwarders . If your company has an internal DNS network that is not resolvable from the public internet, configure a forward zone and zone forwarders so that the hosts in the IdM domain can resolve hosts from this other internal DNS network. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/planning_identity_management/planning-your-dns-services-and-host-names-planning-identity-management |
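Two quick dig queries can help validate the naming decisions above; the domain names are placeholders. The first confirms that the chosen IdM primary domain is delegated to name servers you control, and the second shows the _kerberos TXT record that maps a DNS domain to the Kerberos realm once the IdM integrated DNS is deployed.

# Verify delegation of the IdM primary DNS domain
dig +short NS idm.example.com
# Verify the DNS-domain-to-realm mapping record; expect the realm name, for example IDM.EXAMPLE.COM
dig +short TXT _kerberos.idm.example.com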
Providing feedback on JBoss EAP documentation | Providing feedback on JBoss EAP documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Please include the Document URL , the section number and describe the issue . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/developing_jakarta_enterprise_beans_applications/proc_providing-feedback-on-red-hat-documentation_jakarta-enterprise-beans |
5.6. Online Data Relocation | 5.6. Online Data Relocation You can move data while the system is in use with the pvmove command. The pvmove command breaks up the data to be moved into sections and creates a temporary mirror to move each section. For more information on the operation of the pvmove command, see the pvmove (8) man page. Note In order to perform a pvmove operation in a cluster, you should ensure that the cmirror package is installed and the cmirrord service is running. The following command moves all allocated space off the physical volume /dev/sdc1 to other free physical volumes in the volume group: The following command moves just the extents of the logical volume MyLV . Since the pvmove command can take a long time to execute, you may want to run the command in the background to avoid display of progress updates in the foreground. The following command moves all extents allocated to the physical volume /dev/sdc1 over to /dev/sdf1 in the background. The following command reports the progress of the move as a percentage at five second intervals. | [
"pvmove /dev/sdc1",
"pvmove -n MyLV /dev/sdc1",
"pvmove -b /dev/sdc1 /dev/sdf1",
"pvmove -i5 /dev/sdd1"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/online_relocation |
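For the clustered pvmove case mentioned in the note above, a minimal pre-check on each cluster node might look like the following. This sketch assumes a Red Hat Enterprise Linux 6 node with the Resilient Storage channel enabled so that the cmirror package is available.

# Install the cluster mirror infrastructure and start the cluster mirror log daemon
yum install cmirror
service cmirrord start
# Confirm the daemon is running before invoking pvmove
service cmirrord status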
Chapter 3. Eclipse Temurin features | Chapter 3. Eclipse Temurin features Eclipse Temurin does not contain structural changes from the upstream distribution of OpenJDK. For the list of changes and security fixes that the latest OpenJDK 11 release of Eclipse Temurin includes, see OpenJDK 11.0.25 Released . Important Full support for Eclipse Temurin 11 ends on 31 October 2024. For more information, see Eclipse Temurin 11 - End of full support . New features and enhancements Eclipse Temurin 11.0.25 includes the following new features and enhancements. TLS_ECDH_* cipher suites are disabled by default The TLS Elliptic-curve Diffie-Hellman (TLS_ECDH) cipher suites do not preserve forward secrecy and they are rarely used. OpenJDK 11.0.25 disables the TLS_ECDH cipher suites by adding the ECDH option to the jdk.tls.disabledAlgorithms security property in the java.security configuration file. If you attempt to use the TLS_ECDH cipher suites, OpenJDK now throws an SSLHandshakeException error. If you want to continue using the TLS_ECDH cipher suites, you can remove ECDH from the jdk.tls.disabledAlgorithms security property either by modifying the java.security configuration file or by using the java.security.properties system property. Note Continued use of the TLS_ECDH cipher suites is at your own risk. ECDH cipher suites that use RC4 were disabled in an earlier release. This change does not affect the TLS_ECDHE cipher suites, which remain enabled by default. See JDK-8279164 (JDK Bug System) . Distrust of TLS server certificates issued after 11 November 2024 and anchored by Entrust root CAs In accordance with similar plans that Google and Mozilla recently announced, OpenJDK 11.0.25 distrusts TLS certificates that are issued after 11 November 2024 and anchored by Entrust root certificate authorities (CAs). This change in behavior includes any certificates that are branded as AffirmTrust, which are managed by Entrust. OpenJDK will continue to trust certificates that are issued on or before 11 November 2024 until these certificates expire. If a server's certificate chain is anchored by an affected certificate, any attempts to negotiate a TLS session now fail with an exception to indicate that the trust anchor is not trusted. For example: You can check whether this change affects a certificate in a JDK keystore by using the following keytool command: keytool -v -list -alias <your_server_alias> -keystore <your_keystore_filename> If this change affects any certificate in the chain, update this certificate or contact the organization that is responsible for managing the certificate. If you want to continue using TLS server certificates that are anchored by Entrust root certificates, you can remove ENTRUST_TLS from the jdk.security.caDistrustPolicies security property either by modifying the java.security configuration file or by using the java.security.properties system property. Note Continued use of the distrusted TLS server certificates is at your own risk. These restrictions apply to the following Entrust root certificates that OpenJDK includes: Certificate 1 Alias name: entrustevca [jdk] Distinguished name: CN=Entrust Root Certification Authority, OU=(c) 2006 Entrust, Inc., OU=www.entrust.net/CPS is incorporated by reference, O=Entrust, Inc., C=US SHA256: 73:C1:76:43:4F:1B:C6:D5:AD:F4:5B:0E:76:E7:27:28:7C:8D:E5:76:16:C1:E6:E6:14:1A:2B:2C:BC:7D:8E:4C Certificate 2 Alias name: entrustrootcaec1 [jdk] Distinguished name: CN=Entrust Root Certification Authority - EC1, OU=(c) 2012 Entrust, Inc. 
- for authorized use only, OU=See www.entrust.net/legal-terms, O=Entrust, Inc., C=US SHA256: 02:ED:0E:B2:8C:14:DA:45:16:5C:56:67:91:70:0D:64:51:D7:FB:56:F0:B2:AB:1D:3B:8E:B0:70:E5:6E:DF:F5 Certificate 3 Alias name: entrustrootcag2 [jdk] Distinguished name: CN=Entrust Root Certification Authority - G2, OU=(c) 2009 Entrust, Inc. - for authorized use only, OU=See www.entrust.net/legal-terms, O=Entrust, Inc., C=US SHA256: 43:DF:57:74:B0:3E:7F:EF:5F:E4:0D:93:1A:7B:ED:F1:BB:2E:6B:42:73:8C:4E:6D:38:41:10:3D:3A:A7:F3:39 Certificate 4 Alias name: entrustrootcag4 [jdk] Distinguished name: CN=Entrust Root Certification Authority - G4, OU=(c) 2015 Entrust, Inc. - for authorized use only, OU=See www.entrust.net/legal-termsO=Entrust, Inc., C=US SHA256: DB:35:17:D1:F6:73:2A:2D:5A:B9:7C:53:3E:C7:07:79:EE:32:70:A6:2F:B4:AC:42:38:37:24:60:E6:F0:1E:88 Certificate 5 Alias name: entrust2048ca [jdk] Distinguished name: CN=Entrust.net Certification Authority (2048), OU=(c) 1999 Entrust.net Limited, OU=www.entrust.net/CPS_2048 incorp. by ref. (limits liab.), O=Entrust.net SHA256: 6D:C4:71:72:E0:1C:BC:B0:BF:62:58:0D:89:5F:E2:B8:AC:9A:D4:F8:73:80:1E:0C:10:B9:C8:37:D2:1E:B1:77 Certificate 6 Alias name: affirmtrustcommercialca [jdk] Distinguished name: CN=AffirmTrust Commercial, O=AffirmTrust, C=US SHA256: 03:76:AB:1D:54:C5:F9:80:3C:E4:B2:E2:01:A0:EE:7E:EF:7B:57:B6:36:E8:A9:3C:9B:8D:48:60:C9:6F:5F:A7 Certificate 7 Alias name: affirmtrustnetworkingca [jdk] Distinguished name: CN=AffirmTrust Networking, O=AffirmTrust, C=US SHA256: 0A:81:EC:5A:92:97:77:F1:45:90:4A:F3:8D:5D:50:9F:66:B5:E2:C5:8F:CD:B5:31:05:8B:0E:17:F3:F0B4:1B Certificate 8 Alias name: affirmtrustpremiumca [jdk] Distinguished name: CN=AffirmTrust Premium, O=AffirmTrust, C=US SHA256: 70:A7:3F:7F:37:6B:60:07:42:48:90:45:34:B1:14:82:D5:BF:0E:69:8E:CC:49:8D:F5:25:77:EB:F2:E9:3B:9A Certificate 9 Alias name: affirmtrustpremiumeccca [jdk] Distinguished name: CN=AffirmTrust Premium ECCO=AffirmTrust, C=US SHA256: BD:71:FD:F6:DA:97:E4:CF:62:D1:64:7A:DD:25:81:B0:7D:79:AD:F8:39:7E:B4:EC:BA:9C:5E:84:88:82:14:23 See JDK-8337664 (JDK Bug System) and JDK-8341059 (JDK Bug System) . Reduced verbose locale output in -XshowSettings launcher option In earlier releases, the -XshowSettings launcher option printed a long list of available locales, which obscured other settings. In OpenJDK 11.0.25, the -XshowSettings launcher option no longer prints the list of available locales by default. If you want to view all settings that relate to the available locales, you can use the -XshowSettings:locale option. See JDK-8310201 (JDK Bug System) . SSL.com root certificates added In OpenJDK 11.0.25, the cacerts truststore includes two SSL.com TLS root certificates: Certificate 1 Name: SSL.com Alias name: ssltlsrootecc2022 Distinguished name: CN=SSL.com TLS ECC Root CA 2022, O=SSL Corporation, C=US Certificate 2 Name: SSL.com Alias name: ssltlsrootrsa2022 Distinguished name: CN=SSL.com TLS RSA Root CA 2022, O=SSL Corporation, C=US See JDK-8341057 (JDK Bug System) . Relaxation of Java Abstract Window Toolkit (AWT) Robot specification OpenJDK 11.0.25 is based on the latest maintenance release of the Java 11 specification. This release relaxes the specification of the following three methods in the java.awt.Robot class: mouseMove(int,int) getPixelColor(int,int) createScreenCapture(Rectangle) This relaxation of the specification allows these methods to fail when the desktop environment does not permit moving the mouse pointer or capturing screen content. See JDK-8307779 (JDK Bug System) . 
Changes to com.sun.jndi.ldap.object.trustSerialData system property The JDK implementation of the LDAP provider no longer supports the deserialization of Java objects by default. In OpenJDK 11.0.25, the com.sun.jndi.ldap.object.trustSerialData system property is set to false by default. This release also increases the scope of the com.sun.jndi.ldap.object.trustSerialData property to cover the reconstruction of RMI remote objects from the javaRemoteLocation LDAP attribute. These changes mean that transparent deserialization of Java objects now requires an explicit opt-in. From OpenJDK 11.0.25 onward, if you want to allow applications to reconstruct Java objects and RMI stubs from LDAP attributes, you must explicitly set the com.sun.jndi.ldap.object.trustSerialData property to true . See JDK-8290367 (JDK Bug System) and JDK bug system reference ID: JDK-8332643. HTTP client enhancements OpenJDK 11.0.25 limits the maximum header field size that the HTTP client accepts within the JDK for all supported versions of the HTTP protocol. The header field size is computed as the sum of the size of the uncompressed header name, the size of the uncompressed header value, and an overhead of 32 bytes for each field section line. If a peer sends a field section that exceeds this limit, a java.net.ProtocolException is raised. OpenJDK 11.0.25 introduces a jdk.http.maxHeaderSize system property that you can use to change the maximum header field size (in bytes). Alternatively, you can disable the maximum header field size by setting the jdk.http.maxHeaderSize property to zero or a negative value. The jdk.http.maxHeaderSize property is set to 393,216 bytes (that is, 384KB) by default. JDK bug system reference ID: JDK-8328286 ClassLoadingMXBean and MemoryMXBean APIs have isVerbose() methods consistent with their setVerbose() methods The setVerbose(boolean enabled) method for the ClassLoadingMXBean API displays the following behavior: If enabled is true , the setVerbose method sets class+load* logging on standard output (stdout) at the Info level. If enabled is false , the setVerbose method disables class+load* logging on stdout. In earlier releases, the isVerbose() method for the ClassLoadingMXBean API checked if class+load logging was enabled at the Info level on any type of log output, not just stdout. In this situation, if you enabled logging to a file by using the java -Xlog option, the isVerbose() method returned true even if setVerbose(false) was called, which resulted in counterintuitive behavior. The isVerbose() method for the MemoryMXBean API also displayed similar counterintuitive behavior. From OpenJDK 11.0.25 onward, the ClassLoadingMXBean.isVerbose() and MemoryMXBean.isVerbose() methods display the following behavior: ClassLoadingMXBean::isVerbose() returns true only if class+load* logging is enabled at the Info level (or higher) specifically on standard output (stdout). MemoryMXBean::isVerbose() returns true only if garbage collector logging is enabled at the Info level (or higher) on stdout. See JDK-8338139 (JDK Bug System) . Revised on 2024-10-30 10:44:14 UTC | [
"TLS server certificate issued after 2024-11-11 and anchored by a distrusted legacy Entrust root CA: CN=Entrust.net CertificationAuthority (2048), OU=(c) 1999 Entrust.net Limited,OU=www.entrust.net/CPS_2048 incorp. by ref. (limits liab.),O=Entrust.net"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_eclipse_temurin_11.0.25/openjdk-temurin-features-11-0-23_openjdk |
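The opt-in switches described in the release notes above are ordinary system properties, so they can be supplied on the java command line. The following invocation is only a sketch, not a recommendation: the application jar, the header-size value, and the override file path are placeholders, and relaxing these defaults should be done only after assessing the security impact.

# Re-enable LDAP object deserialization, raise the HTTP client header limit,
# and append a properties file that removes ECDH from jdk.tls.disabledAlgorithms
java -Dcom.sun.jndi.ldap.object.trustSerialData=true \
     -Djdk.http.maxHeaderSize=786432 \
     -Djava.security.properties=/path/to/override.java.security \
     -jar application.jar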
Chapter 5. Upgrading a geo-replication deployment of Red Hat Quay on OpenShift Container Platform | Chapter 5. Upgrading a geo-replication deployment of Red Hat Quay on OpenShift Container Platform Use the following procedure to upgrade your geo-replicated Red Hat Quay on OpenShift Container Platform deployment. Important When upgrading geo-replicated Red Hat Quay on OpenShift Container Platform deployment to the y-stream release (for example, Red Hat Quay 3.7 Red Hat Quay 3.8), you must stop operations before upgrading. There is intermittent downtime down upgrading from one y-stream release to the . It is highly recommended to back up your Red Hat Quay on OpenShift Container Platform deployment before upgrading. Procedure This procedure assumes that you are running the Red Hat Quay registry on three or more systems. For this procedure, we will assume three systems named System A, System B, and System C . System A will serve as the primary system in which the Red Hat Quay Operator is deployed. On System B and System C, scale down your Red Hat Quay registry. This is done by disabling auto scaling and overriding the replica county for Red Hat Quay, mirror workers, and Clair if it is managed. Use the following quayregistry.yaml file as a reference: apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: registry namespace: ns spec: components: ... - kind: horizontalpodautoscaler managed: false 1 - kind: quay managed: true overrides: 2 replicas: 0 - kind: clair managed: true overrides: replicas: 0 - kind: mirror managed: true overrides: replicas: 0 ... 1 Disable auto scaling of Quay , Clair and Mirroring workers 2 Set the replica count to 0 for components accessing the database and objectstorage Note You must keep the Red Hat Quay registry running on System A. Do not update the quayregistry.yaml file on System A. Wait for the registry-quay-app , registry-quay-mirror , and registry-clair-app pods to disappear. Enter the following command to check their status: oc get pods -n <quay-namespace> Example output quay-operator.v3.7.1-6f9d859bd-p5ftc 1/1 Running 0 12m quayregistry-clair-postgres-7487f5bd86-xnxpr 1/1 Running 1 (12m ago) 12m quayregistry-quay-app-upgrade-xq2v6 0/1 Completed 0 12m quayregistry-quay-redis-84f888776f-hhgms 1/1 Running 0 12m On System A, initiate a Red Hat Quay upgrade to the latest y-stream version. This is a manual process. For more information about upgrading installed Operators, see Upgrading installed Operators . For more information about Red Hat Quay upgrade paths, see Upgrading the Red Hat Quay Operator . After the new Red Hat Quay registry is installed, the necessary upgrades on the cluster are automatically completed. Afterwards, new Red Hat Quay pods are started with the latest y-stream version. Additionally, new Quay pods are scheduled and started. Confirm that the update has properly worked by navigating to the Red Hat Quay UI: In the OpenShift console, navigate to Operators Installed Operators , and click the Registry Endpoint link. Important Do not execute the following step until the Red Hat Quay UI is available. Do not upgrade the Red Hat Quay registry on System B and on System C until the UI is available on System A. Confirm that the update has properly worked on System A, initiate the Red Hat Quay upgrade on System B and on System C. The Operator upgrade results in an upgraded Red Hat Quay installation, and the pods are restarted. 
Note Because the database schema is correct for the new y-stream installation, the new pods on System B and on System C should quickly start. After updating, revert the changes made in step 1 of this procedure by removing overrides for the components. For example: apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: registry namespace: ns spec: components: ... - kind: horizontalpodautoscaler managed: true 1 - kind: quay managed: true - kind: clair managed: true - kind: mirror managed: true ... 1 If the horizontalpodautoscaler resource was set to true before the upgrade procedure, or if you want Red Hat Quay to scale in case of a resource shortage, set it to true . | [
"apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: registry namespace: ns spec: components: ... - kind: horizontalpodautoscaler managed: false 1 - kind: quay managed: true overrides: 2 replicas: 0 - kind: clair managed: true overrides: replicas: 0 - kind: mirror managed: true overrides: replicas: 0 ...",
"get pods -n <quay-namespace>",
"quay-operator.v3.7.1-6f9d859bd-p5ftc 1/1 Running 0 12m quayregistry-clair-postgres-7487f5bd86-xnxpr 1/1 Running 1 (12m ago) 12m quayregistry-quay-app-upgrade-xq2v6 0/1 Completed 0 12m quayregistry-quay-redis-84f888776f-hhgms 1/1 Running 0 12m",
"apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: registry namespace: ns spec: components: ... - kind: horizontalpodautoscaler managed: true 1 - kind: quay managed: true - kind: clair managed: true - kind: mirror managed: true ..."
] | https://docs.redhat.com/en/documentation/red_hat_quay/3/html/upgrade_red_hat_quay/upgrading-geo-repl-quay-operator |
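The scale-down and later revert steps in the procedure above are applied by editing the QuayRegistry custom resource on each system. One way to do that from the command line is sketched below; the resource name registry and namespace ns match the example YAML and are placeholders for your own deployment.

# Adjust the component overrides in the QuayRegistry resource
oc -n ns edit quayregistry registry
# Confirm the saved overrides and watch the quay-app, mirror, and clair pods scale
oc -n ns get quayregistry registry -o yaml
oc -n ns get pods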
2.8. Active Directory Authentication Using Kerberos (GSSAPI) | 2.8. Active Directory Authentication Using Kerberos (GSSAPI) When using Red Hat JBoss Data Grid with Microsoft Active Directory, data security can be enabled via Kerberos authentication. To configure Kerberos authentication for Microsoft Active Directory, use the following procedure. Procedure 2.6. Configure Kerberos Authentication for Active Directory (Library Mode) Configure the JBoss EAP server to authenticate itself to Kerberos. This can be done by configuring a dedicated security domain, for example: The security domain for authentication must be configured correctly for JBoss EAP; an application must have a valid Kerberos ticket. To initiate the Kerberos ticket, you must reference another security domain using the usernamePasswordDomain module option. This points to the standard Kerberos login module described in Step 3. The security domain authentication configuration described in the previous step points to the following standard Kerberos login module: | [
"<security-domain name=\"ldap-service\" cache-type=\"default\"> <authentication> <login-module code=\"Kerberos\" flag=\"required\"> <module-option name=\"storeKey\" value=\"true\"/> <module-option name=\"useKeyTab\" value=\"true\"/> <module-option name=\"refreshKrb5Config\" value=\"true\"/> <module-option name=\"principal\" value=\"ldap/[email protected]\"/> <module-option name=\"keyTab\" value=\"USD{basedir}/keytab/ldap.keytab\"/> <module-option name=\"doNotPrompt\" value=\"true\"/> </login-module> </authentication> </security-domain>",
"<module-option name=\"usernamePasswordDomain\" value=\"krb-admin\"/>",
"<security-domain name=\"ispn-admin\" cache-type=\"default\"> <authentication> <login-module code=\"SPNEGO\" flag=\"requisite\"> <module-option name=\"password-stacking\" value=\"useFirstPass\"/> <module-option name=\"serverSecurityDomain\" value=\"ldap-service\"/> <module-option name=\"usernamePasswordDomain\" value=\"krb-admin\"/> </login-module> <login-module code=\"AdvancedAdLdap\" flag=\"required\"> <module-option name=\"password-stacking\" value=\"useFirstPass\"/> <module-option name=\"bindAuthentication\" value=\"GSSAPI\"/> <module-option name=\"jaasSecurityDomain\" value=\"ldap-service\"/> <module-option name=\"java.naming.provider.url\" value=\"ldap://localhost:389\"/> <module-option name=\"baseCtxDN\" value=\"ou=People,dc=infinispan,dc=org\"/> <module-option name=\"baseFilter\" value=\"(krb5PrincipalName={0})\"/> <module-option name=\"rolesCtxDN\" value=\"ou=Roles,dc=infinispan,dc=org\"/> <module-option name=\"roleFilter\" value=\"(member={1})\"/> <module-option name=\"roleAttributeID\" value=\"cn\"/> </login-module> </authentication> </security-domain>",
"<security-domain name=\"krb-admin\" cache-type=\"default\"> <authentication> <login-module code=\"Kerberos\" flag=\"required\"> <module-option name=\"useKeyTab\" value=\"true\"/> <module-option name=\"principal\" value=\"[email protected]\"/> <module-option name=\"keyTab\" value=\"USD{basedir}/keytab/admin.keytab\"/> </login-module> </authentication> </security-domain>"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/security_guide/active_directory_authentication_using_kerberos_gssapi |
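Before wiring the security domains above into JBoss EAP, it can save time to verify the keytabs with the standard MIT Kerberos tools. The commands below are a sketch; the keytab paths and principals mirror the example module options and should be replaced with your own values.

# List the key entries; expect the ldap/[email protected] principal
klist -k keytab/ldap.keytab
# Confirm the admin keytab can obtain a ticket, then display the ticket cache
kinit -kt keytab/admin.keytab [email protected]
klist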
C.9. DistributionManager | C.9. DistributionManager org.infinispan.distribution.DistributionManagerImpl The DistributionManager component handles the distribution of content across a cluster. Note The DistributionManager component is only available in clustered mode. Table C.16. Operations Name Description Signature isAffectedByRehash Determines whether a given key is affected by an ongoing rehash. boolean isAffectedByRehash(Object p0) isLocatedLocally Indicates whether a given key is local to this instance of the cache. Only works with String keys. boolean isLocatedLocally(String p0) locateKey Locates an object in a cluster. Only works with String keys. List locateKey(String p0) | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/distributionmanager
21.14. virt-sparsify: Reclaiming Empty Disk Space | 21.14. virt-sparsify: Reclaiming Empty Disk Space The virt-sparsify command-line tool can be used to make a virtual machine disk (or any disk image) sparse. This is also known as thin-provisioning. Free disk space on the disk image is converted to free space on the host. The virt-sparsify command can work with most filesystems, such as ext2, ext3, ext4, btrfs, NTFS. It also works with LVM physical volumes. virt-sparsify can operate on any disk image, not just virtual machine disk images. Warning Using virt-sparsify on live virtual machines, or concurrently with other disk editing tools can cause disk corruption. The virtual machine must be shut down before using this command. In addition, disk images should not be edited concurrently. The command can also be used to convert between some disk formats. For example, virt-sparsify can convert a raw disk image to a thin-provisioned qcow2 image. Note If a virtual machine has multiple disks and uses volume management, virt-sparsify will work, but it will not be very effective. If the input is raw , then the default output is raw sparse . The size of the output image must be checked using a tool that understands sparseness. Note that the ls command shows the image size to be 100M. However, the du command correctly shows the image size to be 3.6M. Important limitations The following is a list of important limitations: The virtual machine must be shutdown before using virt-sparsify . In a worst case scenario, virt-sparsify may require up to twice the virtual size of the source disk image. One for the temporary copy and one for the destination image. If you use the --in-place option, large amounts of temporary space are not needed. virt-sparsify cannot be used to resize disk images. To resize disk images, use virt-resize . For information about virt-resize , see Section 21.8, "virt-resize: Resizing Guest Virtual Machines Offline" . virt-sparsify does not work with encrypted disks, because encrypted disks cannot be sparsified. virt-sparsify cannot sparsify the space between partitions. This space is often used for critical items like bootloaders, so it is not really unused space. In copy mode, qcow2 internal snapshots are not copied to the destination image. Examples To install virt-sparsify , run one of the following commands: # yum install /usr/bin/virt-sparsify or # yum install libguestfs-tools-c To sparsify a disk: # virt-sparsify /dev/sda1 /dev/device Copies the contents of /dev/sda1 to /dev/device , making the output sparse. If /dev/device already exists, it is overwritten. The format of /dev/sda1 is detected and used as the format for /dev/device . To convert between formats: # virt-sparsify disk .raw --convert qcow2 disk .qcow2 Tries to zero and sparsify free space on every filesystem it can find within the source disk image. To prevent free space from being overwritten with zeros on certain filesystems: # virt-sparsify --ignore /dev/device /dev/sda1 /dev/device Creates sparsified disk images from all filesystems in the disk image, without overwriting free space on the filesystems with zeros. To make a disk image sparse without creating a temporary copy: # virt-sparsify --in-place disk .img Makes the specified disk image sparse, overwriting the image file. virt-sparsify options The following command options are available to use with virt-sparsify : Table 21.4. 
virt-sparsify options Command Description Example --help Displays a brief help entry about a particular command or about the virt-sparsify utility. For additional help, see the virt-sparsify man page. virt-sparsify --help --check-tmpdir ignore | continue | warn | fail Estimates if tmpdir has enough space to complete the operation. The specified option determines the behavior if there is not enough space to complete the operation. ignore : The issue is ignored and the operation continues. continue : Reports an error and the operation continues. warn : Reports an error and waits for the user to press Enter. fail : Reports an error and aborts the operation. This option cannot be used with the ‐‐in-place option. virt-sparsify --check-tmpdir ignore /dev/sda1 /dev/device virt-sparsify --check-tmpdir continue /dev/sda1 /dev/device virt-sparsify --check-tmpdir warn /dev/sda1 /dev/device virt-sparsify --check-tmpdir fail /dev/sda1 /dev/device --compress Compresses the output file. This only works if the output format is qcow2. This option cannot be used with the ‐‐in-place option. virt-sparsify --compress /dev/sda1 /dev/device --convert Creates the sparse image using a specified format. If no format is specified, the input format is used. The following output formats are supported and known to work: raw, qcow, vdi You can use any format supported by the QEMU emulator. It is recommended that you use the --convert option. This way, virt-sparsify does not need to guess the input format. This option cannot be used with the ‐‐in-place option. virt-sparsify --convert raw /dev/sda1 /dev/device virt-sparsify --convert qcow2 /dev/sda1 /dev/device virt-sparsify --convert other_format indisk outdisk --format Specifies the format of the input disk image. If not specified, the format is detected from the image. When working with untrusted raw-format guest disk images, ensure to specify the format. virt-sparsify --format raw /dev/sda1 /dev/device virt-sparsify --format qcow2 /dev/sda1 /dev/device --ignore Ignores the specified file system or volume group. When a filesystem is specified and the --in-place option is not specified, free space on the filesystem is not zeroed. However, existing blocks of zeroes are sparsified. When the ‐‐in-place option is specified, the filesystem is completely ignored. When a volume group is specified, the volume group is ignored. The volume group name should be used without the /dev/ prefix. For example, ‐‐ignore vg_foo The --ignore option can be included in the command multiple times. virt-sparsify --ignore filesystem1 /dev/sda1 /dev/device virt-sparsify --ignore volume_group /dev/sda1 /dev/device --in-place Makes an image sparse in-place, instead of making a temporary copy. Although in-place sparsification is more efficient than copying sparsification, it cannot recover quite as much disk space as copying sparsification. In-place sparsification works using discard (also known as trim or unmap) support. To use in-place sparsification, specify a disk image that will be sparsified in-place. When specifying in-place sparsification, the following options cannot be used: --convert and --compress , because they require wholesale disk format changes. --check-tmpdir , because large amounts of temporary space are not required. virt-sparsify --in-place disk.img -x Enables tracing of libguestfs API calls. virt-sparsify -x filesystem1 /dev/sda1 /dev/device For more information, including additional options, see libguestfs.org . | [
"ls -lh test1.img -rw-rw-r--. 1 rjones rjones 100M Aug 8 08:08 test1.img du -sh test1.img 3.6M test1.img"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/virt-sparsify |
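Combining the options documented above, a typical offline workflow converts a raw guest disk to a compressed, sparse qcow2 image and then confirms the space savings. The file names below are placeholders, and the guest that owns the disk must be shut down first.

# Convert and compress in one pass (compression requires qcow2 output)
virt-sparsify --convert qcow2 --compress guest-disk.raw guest-disk.qcow2
# Compare apparent size with actual allocated size
ls -lh guest-disk.qcow2
du -sh guest-disk.qcow2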
Chapter 3. Diagnosing performance issues | Chapter 3. Diagnosing performance issues 3.1. Enabling garbage collection logging Examining garbage collection logs can be useful when attempting to troubleshoot Java performance issues, especially those related to memory usage. Other than some additional disk I/O activity for writing the log files, enabling garbage collection logging does not significantly affect server performance. Garbage collection logging is already enabled by default for a standalone JBoss EAP server running on OpenJDK or Oracle JDK. For a JBoss EAP managed domain, garbage collection logging can be enabled for the host controller, process controller, or individual JBoss EAP servers. Get the correct JVM options for enabling garbage collection logging for your JDK. Replace the path in the options below to where you want the log to be created. Note The Red Hat Customer Portal has a JVM Options Configuration Tool that can help you generate optimal JVM settings. For OpenJDK 11 or Oracle JDK 11: For versions 9 or later of OpenJDK, Oracle JDK, or any JDK that supports JEP 271: Additional resources For more information about JEP 271, see JEP 271: Unified GC Logging on the OpenJDK web page. 3.2. Java heap dumps A Java heap dump is a snapshot of a JVM heap created at a certain point in time. Creating and analyzing heap dumps can be useful for diagnosing and troubleshooting issues with Java applications. Depending on which JDK you are using, there are different ways of creating and analyzing a Java heap dump for a JBoss EAP process. This section covers common methods for Oracle JDK and OpenJDK. 3.2.1. Creating a heap dump using OpenJDK and Oracle JDK 3.2.1.1. Create an on-demand heap dump You can use the jcmd command to create an on-demand heap dump for JBoss EAP running on OpenJDK or Oracle JDK. Procedure Determine the process ID of the JVM that you want to create a heap dump from. Create the heap dump with the following command: This creates a heap dump file in the HPROF format, usually located in EAP_HOME or EAP_HOME /bin . Alternatively, you can specify a file path to another directory. 3.2.1.2. Create a heap dump automatically on OutOfMemoryError You can use the -XX:+HeapDumpOnOutOfMemoryError JVM option to automatically create a heap dump when an OutOfMemoryError exception is thrown. This creates a heap dump file in the HPROF format, usually located in EAP_HOME or EAP_HOME /bin . Alternatively, you can set a custom path for the heap dump using -XX:HeapDumpPath= /path/ . If you specify a file name using -XX:HeapDumpPath , for example, -XX:HeapDumpPath= /path/filename.hprof , the heap dumps will overwrite each other. 3.2.2. Analyzing a heap dump 3.2.2.1. Heap dump analysis tools There are many tools that can analyze heap dump files and help identify issues. Red Hat Support recommends using the Eclipse Memory Analyzer tool (MAT) , which can analyze heap dumps formatted in either HPROF or PHD formats. Additional resources For information on using Eclipse MAT, see the Eclipse MAT documentation . 3.2.2.2. Heap dump analysis tips Sometimes the cause of the heap performance issues are obvious, but other times you may need an understanding of your application's code and the specific circumstances that cause issues like an OutOfMemoryError . This can help to identify whether an issue is a memory leak, or if the heap is just not large enough. 
Some suggestions for identifying memory usage issues include: If a single object is not found to be consuming too much memory, try grouping by class to see if many small objects are consuming a lot of memory. Check if the biggest usage of memory is a thread. A good indicator of this is if the OutOfMemoryError -triggered heap dump is much smaller than the specified Xmx maximum heap size. A technique to make memory leaks more detectable is to temporarily double the normal maximum heap size. When an OutOfMemoryError occurs, the size of the objects related to the memory leak will be about half the size of the heap. When the source of a memory issue is identified, you can view the paths from garbage collection roots to see what is keeping the objects alive. 3.3. Java Flight Recorder 3.3.1. About Java Flight Recorder The Oracle JDK Mission Control user guide describes Java Flight Recorder (JFR) as a "profiling and event collection framework". Developers can use JFR with JDK Mission Control (JMC) to collect data about Java Virtual Machines (JVMs) and other Java applications. Developers can use this data to identify and fix performance issues. JFR has been carefully designed so that it requires a low level of overhead (consumption of resources). This means that JFR profiling can run continuously in certain production environments with minimal impact. Developers can use JFR and JMC to quickly analyze runtime information after an incident. Note JFR is available from Java OpenJDK 8u262 or later as part of the Java Diagnostic Command Tool. Additional resources For more information about JFR, see section Flight Recorder in the Oracle JDK Mission Control user guide . 3.3.2. Java Flight Recorder profiling configurations Developers can modify the profiling configurations to customize their instance of Java Flight Recorder (JFR). Two different profiling configurations are available with JFR: default : provides a sparse sampling of information; low profiling detail profile : provides a more comprehensive sampling of information; medium profiling detail Developers can modify either configuration files to enable additional event metrics sampling. 3.3.3. Enable Java Flight Recorder profile capture Developers can use Java Flight Recorder (JFR) to profile a JBoss EAP installation on bare metal or using Red Hat OpenShift Container Platform. To learn about using JFR with OpenShift, see Introduction to Cryostat: JDK Flight Recorder for containers . 3.3.3.1. Enable Java Flight Recorder profiling on bare metal Developers can start a Java Flight Recorder (JFR) profile using the command line or the Java Management Extensions (JMX) with the Java Mission Control (JMC) desktop application. 3.3.3.2. Configure Java Flight Recorder profiling for JBoss EAP using Java Virtual Machine configuration flags You can use configuration flags to configure Java Flight Recorder (JFR) profiling with JBoss EAP on Java Virtual Machines (JVMs). Example JVM configuration The StartFlightRecording=delay configuration flag allows you to set the amount of time JFR waits after the JVM boots before starting a profiling session. In the preceding example, StartFlightRecording=delay is set to 15 seconds, which means profiling will start after a 15 second delay. The duration configuration flag allows you to set the length of time for each profiling session. In the preceding example, duration is set to 60 seconds. The name configuration flag allows you to set the in memory profile name. 
In this example, the in memory profile name is set to jboss-eap-profile . The filename configuration flag allows you to set the filename and path where you would like the file to be saved. In this example, filename is set to C:\TEMP\jboss-eap-profile.jfr . The settings configuration flag allows you to select a profiling configuration. In this example, settings is set to default . Note that the file extension for the profiling configuration is excluded. After the profiling session is complete, a file will be created at the file path defined by the filename option. 3.3.3.3. Profiling using Java command tool for a running JBoss EAP Java Virtual Machine You can use the Java Flight Recorder (JFR) JFR.start command to configure a running JBoss EAP Java Virtual Machine (JVM) for profiling using the Java command tool, jcmd . Procedure Use one of the following commands: For a Linux operating system: For example: JFR.start command for Linux example Once a JFR profiling session starts, you will receive the following confirmation message: For a Windows operating system: For example: JFR.start command for Windows example Once a JFR profiling session starts, you will receive the following confirmation message: The duration option allows you to set the length of time for each profiling session. In the preceding example commands, duration is set to 60 seconds. The filename option allows you to set the filename and path where you would like the file to be saved. In the preceding example commands, filename is set to /tmp/jboss-eap-profile.jfr in the Linux example, and C:\TEMP\jboss-eap-profile.jfr in the Windows example. 3.3.3.4. Connect a local Java Virtual Machine using Java Mission Control You can use Java Mission Control (JMC) to connect a local Java Virtual Machine (JVM) running on the same server as your instance of JMC. Prerequisites Java Mission Control with JBoss EAP libraries is configured. See How to connect Java Mission Control with EAP remotely? for instructions. JBoss EAP is configured for remote monitoring connections, and a user is created in the ApplicationRealm for monitoring. Procedure Open Java Mission Control. In the JVM Browser pane, select the JVM to profile. Expand the dropdown menu for the JVM to reveal the Flight Recorder item. Right click Flight Recorder to open the sub-menu, select Start Flight Recording... . Figure 3.1. JVM Browser in JMC In the Start Flight Recording window configure options for profiling. Figure 3.2. JVM profiling settings Click for detailed low level settings. Figure 3.3. JVM profiling advanced settings Click Finish to start profiling. 3.3.3.5. Connect a remote Java Virtual Machine using Java Mission Control You can use Java Mission Control (JMC) to connect to a remote Java Virtual Machine (JVM) profile. Prerequisites Configure Java Mission Control with JBoss EAP libraries. See How to connect Java Mission Control with EAP remotely? for instructions. Configure JBoss EAP for remote monitoring connections, with a user created in the ApplicationRealm for monitoring. Procedure Open Java Mission Control. In the File menu, select Connect . In the Connect window, select Create a new connection , then click . Figure 3.4. Connect window in JMC In the JVM Connection window, complete the details for your remote JBoss EAP JVM to be profiled. Figure 3.5. JVM connection details in JMC In the Host field, add your hostname or an IP address. In the Port field, add your port number. In the User field, add the user that you created in the ApplicationRealm . 
In the Password field, add your password created in the ApplicationRealm . Optional To store credentials in a settings file, click the checkbox to Store credentials in settings file . Click Custom JMX Service URL to override the default setting. Figure 3.6. JMX Service URL for JVM connection Change the JMX service URL to define the JBoss remoting protocol. Click Test connection to verify your settings. Click Finish to save your settings. A JMXRMI Preferences not set warning message will appear. Figure 3.7. JMXRMI preferences warning message Click OK to accept the connection attempt. In the JVM Browser pane, select the JVM to profile. Expand the dropdown menu for the JVM to reveal the Flight Recorder item. Right click Flight Recorder to open the sub-menu, then select Start Flight Recording... . Figure 3.8. Connect a remote JVM using JMC profile menu In the Start Flight Recording window configure options for profiling. Figure 3.9. JVM profiling settings Click for detailed low level settings. Figure 3.10. JVM profiling advanced settings Click Finish to start profiling. 3.4. Identifying high CPU utilization by Java threads Note For customers using JBoss EAP on Red Hat Enterprise Linux or Solaris, the JVMPeg lab tool on the Red Hat Customer Portal helps collect and analyze Java thread information to identify high CPU utilization. Follow the instructions for using the JVMPeg lab tool instead of using the following procedure. For OpenJDK and Oracle JDK environments, Java thread diagnostic information is available using the jstack utility. Identify the process ID of the Java process that is utilizing a high percentage of the CPU. It can also be useful to obtain per-thread CPU data on high-usage processes. This can be done using the top -H command on Red Hat Enterprise Linux systems. Using the jstack utility, create a stack dump of the Java process. For example, on Linux and Solaris: You might need to create multiple dumps at intervals to see any changes or trends over a period of time. Analyze the stack dumps. You can use a tool such as the Thread Dump Analyzer (TDA) . 3.5. Runtime statistics for managed executor services and managed scheduled executor services You can monitor the performance of managed executor services and managed scheduled executor services by viewing the runtime statistics generated with the management CLI attributes. You can view the runtime statistics for a standalone server or for an individual server mapped to a host. Important The domain.xml configuration does not include a resource for the runtime statistic management CLI attributes, so you cannot use the management CLI attributes to view the runtime statistics for a managed domain. Table 3.1. Displays management CLI attributes for monitoring the performance of managed executor services and of managed scheduled executor services. Attribute Description active-thread-count The approximate number of threads that are actively executing tasks. completed-task-count The approximate total number of tasks that have completed execution. hung-thread-count The number of executor threads that are hung. max-thread-count The largest number of executor threads. current-queue-size The current size of the executor's task queue. task-count The approximate total number of tasks that have been submitted for execution. thread-count The current number of executor threads. Example of viewing the runtime statistics for a managed executor service running on a standalone server. 
Example of the runtime statistics for a managed scheduled executor service running on a standalone server. Example of viewing the runtime statistics for a managed executor service running on a server mapped to a host. Example of the runtime statistics for a managed scheduled executor service running on a server mapped to a host. | [
"-verbose:gc -Xloggc: <path_to_directory> /gc.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=3M -XX:-TraceClassUnloading",
"-Xlog:gc*:file= <path_to_directory> /gc.log:time,uptimemillis:filecount=5,filesize=3M",
"jcmd JAVA_PID GC.heap_dump -all=true FILENAME .hprof",
"-XX:StartFlightRecording=delay=15s,duration=60s,name=jboss-eap-profile, filename=C:\\TEMP\\jboss-eap-profile.jfr,settings=default",
"jcmd <PID> JFR.start duration= TIME filename= path/to/YOUR_PROFILE_NAME .jfr",
"jcmd <PID> JFR.start duration=60s filename=/tmp/jboss-eap-profile.jfr",
"jcmd <PID> JFR.start duration=60s filename=/tmp/jboss-eap-profile.jfr <PID>: Started recording 1. The result will be written to: /tmp/jboss-eap-profile.jfr",
"> jcmd.exe <PID> JFR.start duration= TIME filename= path/to/YOUR_PROFILE_NAME .jfr",
"> jcmd.exe <PID> JFR.start duration=60s filename=C:\\TEMP\\jboss-eap-profile.jfr",
"> jcmd.exe <PID> JFR.start duration=60s filename=C:\\TEMP\\jboss-eap-profile.jfr <PID>: Started recording 1. The result will be written to: C:\\TEMP\\jboss-eap-profile.jfr",
"service:jmx:remote+http://<host>:9990",
"jstack -l JAVA_PROCESS_ID > high-cpu-tdump.out",
"[standalone@localhost:9990 /] /subsystem=ee/managed-executor-service=default:read-resource(include-runtime=true,recursive=true)",
"[standalone@localhost:9990 /] /subsystem=ee/managed-scheduled-executor-service=default:read-resource(include-runtime=true,recursive=true)",
"[domain@localhost:9990 /] /host= <host_name> /server= <server_name> /subsystem=ee/managed-executor-service=default:read-resource(include-runtime=true,recursive=true)",
"[domain@localhost:9990 /] /host= <host_name> /server= <server_name> /subsystem=ee/managed-scheduled-executor-service=default:read-resource(include-runtime=true,recursive=true)"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/performance_tuning_for_red_hat_jboss_enterprise_application_platform/assembly-diagnosing-performance-issues_performance-tuning-guide |
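A useful complement to the JFR.start examples above is inspecting and dumping a recording while it is still running, rather than waiting for its duration to elapse. The following sketch uses the standard jcmd JFR.check, JFR.dump, and JFR.stop commands; the process ID (1234) and the recording name (jboss-eap-profile) are placeholders, not values taken from this guide:
# List the recordings that are active in the JVM (placeholder PID 1234)
jcmd 1234 JFR.check
# Dump the named recording to a file without stopping it
jcmd 1234 JFR.dump name=jboss-eap-profile filename=/tmp/jboss-eap-profile-snapshot.jfr
# Stop the recording once enough data has been captured
jcmd 1234 JFR.stop name=jboss-eap-profile
The resulting .jfr file can then be opened in JDK Mission Control in the same way as a recording produced with the duration option.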
Chapter 5. Configuring Satellite Server with external services | Chapter 5. Configuring Satellite Server with external services If you do not want to configure the DNS, DHCP, and TFTP services on Satellite Server, use this section to configure your Satellite Server to work with external DNS, DHCP, and TFTP services. 5.1. Configuring Satellite Server with external DNS You can configure Satellite Server with external DNS. Satellite Server uses the nsupdate utility to update DNS records on the remote server. To make any changes persistent, you must enter the satellite-installer command with the options appropriate for your environment. Prerequisites You must have a configured external DNS server. This guide assumes you have an existing installation. Procedure Copy the /etc/rndc.key file from the external DNS server to Satellite Server: Configure the ownership, permissions, and SELinux context: To test the nsupdate utility, add a host remotely: Enter the satellite-installer command to make the following persistent changes to the /etc/foreman-proxy/settings.d/dns.yml file: In the Satellite web UI, navigate to Infrastructure > Capsules . Locate the Satellite Server and select Refresh from the list in the Actions column. Associate the DNS service with the appropriate subnets and domain. 5.2. Configuring Satellite Server with external DHCP To configure Satellite Server with external DHCP, you must complete the following procedures: Section 5.2.1, "Configuring an external DHCP server to use with Satellite Server" Section 5.2.2, "Configuring Satellite Server with an external DHCP server" 5.2.1. Configuring an external DHCP server to use with Satellite Server To configure an external DHCP server running Red Hat Enterprise Linux to use with Satellite Server, you must install the ISC DHCP Service and Berkeley Internet Name Domain (BIND) utilities packages. You must also share the DHCP configuration and lease files with Satellite Server. The example in this procedure uses the distributed Network File System (NFS) protocol to share the DHCP configuration and lease files. Note If you use dnsmasq as an external DHCP server, enable the dhcp-no-override setting. This is required because Satellite creates configuration files on the TFTP server under the grub2/ subdirectory. If the dhcp-no-override setting is disabled, hosts fetch the bootloader and its configuration from the root directory, which might cause an error. Procedure On your Red Hat Enterprise Linux host, install the ISC DHCP Service and Berkeley Internet Name Domain (BIND) utilities packages: Generate a security token: Edit the dhcpd configuration file for all subnets and add the key generated by tsig-keygen . The following is an example: Note that the option routers value is the IP address of your Satellite Server or Capsule Server that you want to use with an external DHCP service. On Satellite Server, define each subnet. Do not set DHCP Capsule for the defined Subnet yet. To prevent conflicts, set up the lease and reservation ranges separately. For example, if the lease range is 192.168.38.10 to 192.168.38.100, in the Satellite web UI define the reservation range as 192.168.38.101 to 192.168.38.250. 
Configure the firewall for external access to the DHCP server: Make the changes persistent: On Satellite Server, determine the UID and GID of the foreman user: On the DHCP server, create the foreman user and group with the same IDs as determined in a step: To ensure that the configuration files are accessible, restore the read and execute flags: Enable and start the DHCP service: Export the DHCP configuration and lease files using NFS: Create directories for the DHCP configuration and lease files that you want to export using NFS: To create mount points for the created directories, add the following line to the /etc/fstab file: Mount the file systems in /etc/fstab : Ensure the following lines are present in /etc/exports : Note that the IP address that you enter is the Satellite or Capsule IP address that you want to use with an external DHCP service. Reload the NFS server: Configure the firewall for DHCP omapi port 7911: Optional: Configure the firewall for external access to NFS. Clients are configured using NFSv3. Make the changes persistent: 5.2.2. Configuring Satellite Server with an external DHCP server You can configure Satellite Server with an external DHCP server. Prerequisites Ensure that you have configured an external DHCP server and that you have shared the DHCP configuration and lease files with Satellite Server. For more information, see Section 5.2.1, "Configuring an external DHCP server to use with Satellite Server" . Procedure Install the nfs-utils package: Create the DHCP directories for NFS: Change the file owner: Verify communication with the NFS server and the Remote Procedure Call (RPC) communication paths: Add the following lines to the /etc/fstab file: Mount the file systems on /etc/fstab : To verify that the foreman-proxy user can access the files that are shared over the network, display the DHCP configuration and lease files: Enter the satellite-installer command to make the following persistent changes to the /etc/foreman-proxy/settings.d/dhcp.yml file: Associate the DHCP service with the appropriate subnets and domain. 5.3. Using Infoblox as DHCP and DNS providers You can use Satellite Server to connect to your Infoblox application to create and manage DHCP and DNS records, and to reserve IP addresses. The supported Infoblox version is NIOS 8.0 or higher. 5.3.1. Infoblox limitations All DHCP and DNS records can be managed only in a single Network or DNS view. After you install the Infoblox modules on Satellite Server and set up the view using the satellite-installer command, you cannot edit the view. Satellite Server communicates with a single Infoblox node by using the standard HTTPS web API. If you want to configure clustering and High Availability, make the configurations in Infoblox. Hosting PXE-related files by using the TFTP functionality of Infoblox is not supported. You must use Satellite Server as a TFTP server for PXE provisioning. For more information, see Configuring networking in Provisioning hosts . Satellite IPAM feature cannot be integrated with Infoblox. 5.3.2. Infoblox prerequisites You must have Infoblox account credentials to manage DHCP and DNS entries in Satellite. Ensure that you have Infoblox administration roles with the names: DHCP Admin and DNS Admin . The administration roles must have permissions or belong to an admin group that permits the accounts to perform tasks through the Infoblox API. 5.3.3. Installing the Infoblox CA certificate You must install Infoblox HTTPS CA certificate on the base system of Satellite Server. 
Procedure Download the certificate from the Infoblox web UI or you use the following OpenSSL commands to download the certificate: The infoblox.example.com entry must match the host name for the Infoblox application in the X509 certificate. Verification Test the CA certificate by using a curl query: Example positive response: 5.3.4. Installing the DHCP Infoblox module Install the DHCP Infoblox module on Satellite Server. Note that you cannot manage records in separate views. You can also install DHCP and DNS Infoblox modules simultaneously by combining this procedure and Section 5.3.5, "Installing the DNS Infoblox module" . DHCP Infoblox record type considerations If you want to use the DHCP and DNS Infoblox modules together, configure the DHCP Infoblox module with the fixedaddress record type only. The host record type causes DNS conflicts and is not supported. If you configure the DHCP Infoblox module with the host record type, you have to unset both DNS Capsule and Reverse DNS Capsule options on your Infoblox-managed subnets, because Infoblox does DNS management by itself. Using the host record type leads to creating conflicts and being unable to rename hosts in Satellite. Procedure On Satellite Server, enter the following command: Optional: In the Satellite web UI, navigate to Infrastructure > Capsules , select the Capsule with the DHCP Infoblox module, and ensure that the dhcp feature is listed. In the Satellite web UI, navigate to Infrastructure > Subnets . For all subnets managed through Infoblox, ensure that the IP address management ( IPAM ) method of the subnet is set to DHCP . 5.3.5. Installing the DNS Infoblox module Install the DNS Infoblox module on Satellite Server. You can also install DHCP and DNS Infoblox modules simultaneously by combining this procedure and Section 5.3.4, "Installing the DHCP Infoblox module" . Procedure On Satellite Server, enter the following command to configure the Infoblox module: Optionally, you can change the value of the --foreman-proxy-plugin-dns-infoblox-dns-view option to specify an Infoblox DNS view other than the default view. Optional: In the Satellite web UI, navigate to Infrastructure > Capsules , select the Capsule with the Infoblox DNS module, and ensure that the dns feature is listed. In the Satellite web UI, navigate to Infrastructure > Domains . For all domains managed through Infoblox, ensure that the DNS Proxy is set for those domains. In the Satellite web UI, navigate to Infrastructure > Subnets . For all subnets managed through Infoblox, ensure that the DNS Capsule and Reverse DNS Capsule are set for those subnets. 5.4. Configuring Satellite Server with external TFTP You can configure Satellite Server with external TFTP services. Procedure Create the TFTP directory for NFS: In the /etc/fstab file, add the following line: Mount the file systems in /etc/fstab : Enter the satellite-installer command to make the following persistent changes to the /etc/foreman-proxy/settings.d/tftp.yml file: If the TFTP service is running on a different server than the DHCP service, update the tftp_servername setting with the FQDN or IP address of the server that the TFTP service is running on: In the Satellite web UI, navigate to Infrastructure > Capsules . Locate the Satellite Server and select Refresh from the list in the Actions column. Associate the TFTP service with the appropriate subnets and domain. 5.5. 
Configuring Satellite Server with external IdM DNS When Satellite Server adds a DNS record for a host, it first determines which Capsule is providing DNS for that domain. It then communicates with the Capsule that is configured to provide DNS service for your deployment and adds the record. The hosts are not involved in this process. Therefore, you must install and configure the IdM client on the Satellite or Capsule that is currently configured to provide a DNS service for the domain you want to manage by using the IdM server. Satellite Server can be configured to use a Red Hat Identity Management (IdM) server to provide DNS service. For more information about Red Hat Identity Management, see the Red Hat Enterprise Linux 7 Linux Domain Identity, Authentication, and Policy Guide . To configure Satellite Server to use a Red Hat Identity Management (IdM) server to provide DNS service, use one of the following procedures: Section 5.5.1, "Configuring dynamic DNS update with GSS-TSIG authentication" Section 5.5.2, "Configuring dynamic DNS update with TSIG authentication" To revert to internal DNS service, use the following procedure: Section 5.5.3, "Reverting to internal DNS service" Note You are not required to use Satellite Server to manage DNS. When you are using the realm enrollment feature of Satellite, where provisioned hosts are enrolled automatically to IdM, the ipa-client-install script creates DNS records for the client. Configuring Satellite Server with external IdM DNS and realm enrollment are mutually exclusive. For more information about configuring realm enrollment, see Section 5.6, "Configuring Satellite to manage the lifecycle of a host registered to a Identity Management realm" . 5.5.1. Configuring dynamic DNS update with GSS-TSIG authentication You can configure the IdM server to use the generic security service algorithm for secret key transaction (GSS-TSIG) technology defined in RFC3645 . To configure the IdM server to use the GSS-TSIG technology, you must install the IdM client on the Satellite Server base operating system. Prerequisites You must ensure the IdM server is deployed and the host-based firewall is configured correctly. For more information, see Port requirements for IdM in Red Hat Enterprise Linux 9 Installing Identity Management or Port requirements for IdM in Red Hat Enterprise Linux 8 Installing Identity Management . You must contact the IdM server administrator to ensure that you obtain an account on the IdM server with permissions to create zones on the IdM server. You should create a backup of the answer file. You can use the backup to restore the answer file to its original state if it becomes corrupted. For more information, see Configuring Satellite Server . 
Procedure To configure dynamic DNS update with GSS-TSIG authentication, complete the following steps: Creating a Kerberos principal on the IdM server Obtain a Kerberos ticket for the account obtained from the IdM administrator: Create a new Kerberos principal for Satellite Server to use to authenticate on the IdM server: Installing and configuring the idM client On the base operating system of either the Satellite or Capsule that is managing the DNS service for your deployment, install the ipa-client package: Configure the IdM client by running the installation script and following the on-screen prompts: Obtain a Kerberos ticket: Remove any preexisting keytab : Obtain the keytab for this system: Note When adding a keytab to a standby system with the same host name as the original system in service, add the r option to prevent generating new credentials and rendering the credentials on the original system invalid. For the dns.keytab file, set the group and owner to foreman-proxy : Optional: To verify that the keytab file is valid, enter the following command: Configuring DNS zones in the IdM web UI Create and configure the zone that you want to manage: Navigate to Network Services > DNS > DNS Zones . Select Add and enter the zone name. For example, example.com . Click Add and Edit . Click the Settings tab and in the BIND update policy box, add the following to the semi-colon separated list: Set Dynamic update to True . Enable Allow PTR sync . Click Save to save the changes. Create and configure the reverse zone: Navigate to Network Services > DNS > DNS Zones . Click Add . Select Reverse zone IP network and add the network address in CIDR format to enable reverse lookups. Click Add and Edit . Click the Settings tab and in the BIND update policy box, add the following to the semi-colon separated list: Set Dynamic update to True . Click Save to save the changes. Configuring the Satellite or Capsule Server that manages the DNS service for the domain Configure your Satellite Server or Capsule Server to connect to your DNS service: For each affected Capsule, update the configuration of that Capsule in the Satellite web UI: In the Satellite web UI, navigate to Infrastructure > Capsules , locate the Satellite Server, and from the list in the Actions column, select Refresh . Configure the domain: In the Satellite web UI, navigate to Infrastructure > Domains and select the domain name. In the Domain tab, ensure DNS Capsule is set to the Capsule where the subnet is connected. Configure the subnet: In the Satellite web UI, navigate to Infrastructure > Subnets and select the subnet name. In the Subnet tab, set IPAM to None . In the Domains tab, select the domain that you want to manage using the IdM server. In the Capsules tab, ensure Reverse DNS Capsule is set to the Capsule where the subnet is connected. Click Submit to save the changes. 5.5.2. Configuring dynamic DNS update with TSIG authentication You can configure an IdM server to use the secret key transaction authentication for DNS (TSIG) technology that uses the rndc.key key file for authentication. The TSIG protocol is defined in RFC2845 . Prerequisites You must ensure the IdM server is deployed and the host-based firewall is configured correctly. For more information, see Port Requirements in the Red Hat Enterprise Linux 7 Linux Domain Identity, Authentication, and Policy Guide . You must obtain root user access on the IdM server. 
You must confirm whether Satellite Server or Capsule Server is configured to provide DNS service for your deployment. You must configure DNS, DHCP and TFTP services on the base operating system of either the Satellite or Capsule that is managing the DNS service for your deployment. You must create a backup of the answer file. You can use the backup to restore the answer file to its original state if it becomes corrupted. For more information, see Configuring Satellite Server . Procedure To configure dynamic DNS update with TSIG authentication, complete the following steps: Enabling external updates to the DNS zone in the IdM server On the IdM Server, add the following to the top of the /etc/named.conf file: ######################################################################## include "/etc/rndc.key"; controls { inet _IdM_Server_IP_Address_ port 953 allow { _Satellite_IP_Address_; } keys { "rndc-key"; }; }; ######################################################################## Reload the named service to make the changes take effect: In the IdM web UI, navigate to Network Services > DNS > DNS Zones and click the name of the zone. In the Settings tab, apply the following changes: Add the following in the BIND update policy box: grant "rndc-key" zonesub ANY; Set Dynamic update to True . Click Update to save the changes. Copy the /etc/rndc.key file from the IdM server to the base operating system of your Satellite Server. Enter the following command: To set the correct ownership, permissions, and SELinux context for the rndc.key file, enter the following command: Assign the foreman-proxy user to the named group manually. Normally, satellite-installer ensures that the foreman-proxy user belongs to the named UNIX group, however, in this scenario Satellite does not manage users and groups, therefore you need to assign the foreman-proxy user to the named group manually. On Satellite Server, enter the following satellite-installer command to configure Satellite to use the external DNS server: Testing external updates to the DNS zone in the IdM server Ensure that the key in the /etc/rndc.key file on Satellite Server is the same key file that is used on the IdM server: key "rndc-key" { algorithm hmac-md5; secret " secret-key =="; }; On Satellite Server, create a test DNS entry for a host. For example, host test.example.com with an A record of 192.168.25.20 on the IdM server at 192.168.25.1 . On Satellite Server, test the DNS entry: Example output: Server: 192.168.25.1 Address: 192.168.25.1#53 Name: test.example.com Address: 192.168.25.20 To view the entry in the IdM web UI, navigate to Network Services > DNS > DNS Zones . Click the name of the zone and search for the host by name. If resolved successfully, remove the test DNS entry: Confirm that the DNS entry was removed: The above nslookup command fails and returns the SERVFAIL error message if the record was successfully deleted. 5.5.3. Reverting to internal DNS service You can revert to using Satellite Server and Capsule Server as your DNS providers. You can use a backup of the answer file that was created before configuring external DNS, or you can create a backup of the answer file. For more information about answer files, see Configuring Satellite Server . 
Procedure On the Satellite or Capsule Server that you want to configure to manage DNS service for the domain, complete the following steps: Configuring Satellite or Capsule as a DNS server If you have created a backup of the answer file before configuring external DNS, restore the answer file and then enter the satellite-installer command: If you do not have a suitable backup of the answer file, create a backup of the answer file now. To configure Satellite or Capsule as DNS server without using an answer file, enter the following satellite-installer command on Satellite or Capsule: For more information, see Configuring DNS, DHCP, and TFTP on Capsule Server . After you run the satellite-installer command to make any changes to your Capsule configuration, you must update the configuration of each affected Capsule in the Satellite web UI. Updating the configuration in the Satellite web UI In the Satellite web UI, navigate to Infrastructure > Capsules . For each Capsule that you want to update, from the Actions list, select Refresh . Configure the domain: In the Satellite web UI, navigate to Infrastructure > Domains and click the domain name that you want to configure. In the Domain tab, set DNS Capsule to the Capsule where the subnet is connected. Configure the subnet: In the Satellite web UI, navigate to Infrastructure > Subnets and select the subnet name. In the Subnet tab, set IPAM to DHCP or Internal DB . In the Domains tab, select the domain that you want to manage using Satellite or Capsule. In the Capsules tab, set Reverse DNS Capsule to the Capsule where the subnet is connected. Click Submit to save the changes. 5.6. Configuring Satellite to manage the lifecycle of a host registered to a Identity Management realm As well as providing access to Satellite Server, hosts provisioned with Satellite can also be integrated with Identity Management realms. Red Hat Satellite has a realm feature that automatically manages the lifecycle of any system registered to a realm or domain provider. Use this section to configure Satellite Server or Capsule Server for Identity Management realm support, then add hosts to the Identity Management realm group. Prerequisites Satellite Server that is registered to the Content Delivery Network or an external Capsule Server that is registered to Satellite Server. A deployed realm or domain provider such as Identity Management. To install and configure Identity Management packages on Satellite Server or Capsule Server: To use Identity Management for provisioned hosts, complete the following steps to install and configure Identity Management packages on Satellite Server or Capsule Server: Install the ipa-client package on Satellite Server or Capsule Server: Configure the server as a Identity Management client: Create a realm proxy user, realm-capsule , and the relevant roles in Identity Management: Note the principal name that returns and your Identity Management server configuration details because you require them for the following procedure. To configure Satellite Server or Capsule Server for Identity Management realm support: Complete the following procedure on Satellite and every Capsule that you want to use: Copy the /root/freeipa.keytab file to any Capsule Server that you want to include in the same principal and realm: Move the /root/freeipa.keytab file to the /etc/foreman-proxy directory and set the ownership settings to the foreman-proxy user: Enter the following command on all Capsules that you want to include in the realm. 
If you use the integrated Capsule on Satellite, enter this command on Satellite Server: You can also use these options when you first configure the Satellite Server. Ensure that the latest version of the ca-certificates package is installed and trust the Identity Management Certificate Authority: Optional: If you configure Identity Management on an existing Satellite Server or Capsule Server, complete the following steps to ensure that the configuration changes take effect: Restart the foreman-proxy service: In the Satellite web UI, navigate to Infrastructure > Capsules . Locate the Capsule you have configured for Identity Management and from the list in the Actions column, select Refresh . To create a realm for the Identity Management-enabled Capsule After you configure your integrated or external Capsule with Identity Management, you must create a realm and add the Identity Management-configured Capsule to the realm. Procedure In the Satellite web UI, navigate to Infrastructure > Realms and click Create Realm . In the Name field, enter a name for the realm. From the Realm Type list, select the type of realm. From the Realm Capsule list, select the Capsule Server where you have configured Identity Management. Click the Locations tab and from the Locations list, select the location where you want to add the new realm. Click the Organizations tab and from the Organizations list, select the organization where you want to add the new realm. Click Submit . Updating host groups with realm information You must update any host groups that you want to use with the new realm information. In the Satellite web UI, navigate to Configure > Host Groups , select the host group that you want to update, and click the Network tab. From the Realm list, select the realm you created as part of this procedure, and then click Submit . Adding hosts to an Identity Management host group Identity Management supports the ability to set up automatic membership rules based on a system's attributes. Red Hat Satellite's realm feature provides administrators with the ability to map the Red Hat Satellite host groups to the Identity Management parameter userclass , which allows administrators to configure automembership. When nested host groups are used, they are sent to the Identity Management server as they are displayed in the Red Hat Satellite User Interface. For example, "Parent/Child/Child". Satellite Server or Capsule Server sends updates to the Identity Management server; however, automembership rules are applied only at the initial registration. To add hosts to an Identity Management host group: On the Identity Management server, create a host group: Create an automembership rule: Where you can use the following options: automember-add flags the group as an automember group. --type=hostgroup identifies that the target group is a host group, not a user group. automember_rule adds the name you want to identify the automember rule by. Define an automembership condition based on the userclass attribute: Where you can use the following options: automember-add-condition adds regular expression conditions to identify group members. --key=userclass specifies the key attribute as userclass . --type=hostgroup identifies that the target group is a host group, not a user group. --inclusive-regex= ^webserver identifies matching values with a regular expression pattern. hostgroup_name - identifies the target host group's name.
When a system is added to Satellite Server's hostgroup_name host group, it is added automatically to the Identity Management server's " hostgroup_name " host group. Identity Management host groups allow for Host-Based Access Controls (HBAC), sudo policies and other Identity Management functions. | [
"scp root@ dns.example.com :/etc/rndc.key /etc/foreman-proxy/rndc.key",
"restorecon -v /etc/foreman-proxy/rndc.key chown -v root:foreman-proxy /etc/foreman-proxy/rndc.key chmod -v 640 /etc/foreman-proxy/rndc.key",
"echo -e \"server DNS_IP_Address \\n update add aaa.example.com 3600 IN A Host_IP_Address \\n send\\n\" | nsupdate -k /etc/foreman-proxy/rndc.key nslookup aaa.example.com DNS_IP_Address echo -e \"server DNS_IP_Address \\n update delete aaa.example.com 3600 IN A Host_IP_Address \\n send\\n\" | nsupdate -k /etc/foreman-proxy/rndc.key",
"satellite-installer --foreman-proxy-dns=true --foreman-proxy-dns-managed=false --foreman-proxy-dns-provider=nsupdate --foreman-proxy-dns-server=\" DNS_IP_Address \" --foreman-proxy-keyfile=/etc/foreman-proxy/rndc.key",
"dnf install dhcp-server bind-utils",
"tsig-keygen -a hmac-md5 omapi_key",
"cat /etc/dhcp/dhcpd.conf default-lease-time 604800; max-lease-time 2592000; log-facility local7; subnet 192.168.38.0 netmask 255.255.255.0 { range 192.168.38.10 192.168.38.100 ; option routers 192.168.38.1 ; option subnet-mask 255.255.255.0 ; option domain-search \" virtual.lan \"; option domain-name \" virtual.lan \"; option domain-name-servers 8.8.8.8 ; } omapi-port 7911; key omapi_key { algorithm hmac-md5; secret \" My_Secret \"; }; omapi-key omapi_key;",
"firewall-cmd --add-service dhcp",
"firewall-cmd --runtime-to-permanent",
"id -u foreman 993 id -g foreman 990",
"groupadd -g 990 foreman useradd -u 993 -g 990 -s /sbin/nologin foreman",
"chmod o+rx /etc/dhcp/ chmod o+r /etc/dhcp/dhcpd.conf chattr +i /etc/dhcp/ /etc/dhcp/dhcpd.conf",
"systemctl enable --now dhcpd",
"dnf install nfs-utils systemctl enable --now nfs-server",
"mkdir -p /exports/var/lib/dhcpd /exports/etc/dhcp",
"/var/lib/dhcpd /exports/var/lib/dhcpd none bind,auto 0 0 /etc/dhcp /exports/etc/dhcp none bind,auto 0 0",
"mount -a",
"/exports 192.168.38.1 (rw,async,no_root_squash,fsid=0,no_subtree_check) /exports/etc/dhcp 192.168.38.1 (ro,async,no_root_squash,no_subtree_check,nohide) /exports/var/lib/dhcpd 192.168.38.1 (ro,async,no_root_squash,no_subtree_check,nohide)",
"exportfs -rva",
"firewall-cmd --add-port=7911/tcp",
"firewall-cmd --add-service mountd --add-service nfs --add-service rpc-bind --zone public",
"firewall-cmd --runtime-to-permanent",
"satellite-maintain packages install nfs-utils",
"mkdir -p /mnt/nfs/etc/dhcp /mnt/nfs/var/lib/dhcpd",
"chown -R foreman-proxy /mnt/nfs",
"showmount -e DHCP_Server_FQDN rpcinfo -p DHCP_Server_FQDN",
"DHCP_Server_FQDN :/exports/etc/dhcp /mnt/nfs/etc/dhcp nfs ro,vers=3,auto,nosharecache,context=\"system_u:object_r:dhcp_etc_t:s0\" 0 0 DHCP_Server_FQDN :/exports/var/lib/dhcpd /mnt/nfs/var/lib/dhcpd nfs ro,vers=3,auto,nosharecache,context=\"system_u:object_r:dhcpd_state_t:s0\" 0 0",
"mount -a",
"su foreman-proxy -s /bin/bash cat /mnt/nfs/etc/dhcp/dhcpd.conf cat /mnt/nfs/var/lib/dhcpd/dhcpd.leases exit",
"satellite-installer --enable-foreman-proxy-plugin-dhcp-remote-isc --foreman-proxy-dhcp-provider=remote_isc --foreman-proxy-dhcp-server= My_DHCP_Server_FQDN --foreman-proxy-dhcp=true --foreman-proxy-plugin-dhcp-remote-isc-dhcp-config /mnt/nfs/etc/dhcp/dhcpd.conf --foreman-proxy-plugin-dhcp-remote-isc-dhcp-leases /mnt/nfs/var/lib/dhcpd/dhcpd.leases --foreman-proxy-plugin-dhcp-remote-isc-key-name=omapi_key --foreman-proxy-plugin-dhcp-remote-isc-key-secret= My_Secret --foreman-proxy-plugin-dhcp-remote-isc-omapi-port=7911",
"update-ca-trust enable openssl s_client -showcerts -connect infoblox.example.com :443 </dev/null | openssl x509 -text >/etc/pki/ca-trust/source/anchors/infoblox.crt update-ca-trust extract",
"curl -u admin:password https:// infoblox.example.com /wapi/v2.0/network",
"[ { \"_ref\": \"network/ZG5zLm5ldHdvcmskMTkyLjE2OC4yMDIuMC8yNC8w: infoblox.example.com /24/default\", \"network\": \"192.168.202.0/24\", \"network_view\": \"default\" } ]",
"satellite-installer --enable-foreman-proxy-plugin-dhcp-infoblox --foreman-proxy-dhcp true --foreman-proxy-dhcp-provider infoblox --foreman-proxy-dhcp-server infoblox.example.com --foreman-proxy-plugin-dhcp-infoblox-username admin --foreman-proxy-plugin-dhcp-infoblox-password infoblox --foreman-proxy-plugin-dhcp-infoblox-record-type fixedaddress --foreman-proxy-plugin-dhcp-infoblox-dns-view default --foreman-proxy-plugin-dhcp-infoblox-network-view default",
"satellite-installer --enable-foreman-proxy-plugin-dns-infoblox --foreman-proxy-dns true --foreman-proxy-dns-provider infoblox --foreman-proxy-plugin-dns-infoblox-dns-server infoblox.example.com --foreman-proxy-plugin-dns-infoblox-username admin --foreman-proxy-plugin-dns-infoblox-password infoblox --foreman-proxy-plugin-dns-infoblox-dns-view default",
"mkdir -p /mnt/nfs/var/lib/tftpboot",
"TFTP_Server_IP_Address :/exports/var/lib/tftpboot /mnt/nfs/var/lib/tftpboot nfs rw,vers=3,auto,nosharecache,context=\"system_u:object_r:tftpdir_rw_t:s0\" 0 0",
"mount -a",
"satellite-installer --foreman-proxy-tftp-root /mnt/nfs/var/lib/tftpboot --foreman-proxy-tftp=true",
"satellite-installer --foreman-proxy-tftp-servername= TFTP_Server_FQDN",
"kinit idm_user",
"ipa service-add capsule/satellite.example.com",
"satellite-maintain packages install ipa-client",
"ipa-client-install",
"kinit admin",
"rm /etc/foreman-proxy/dns.keytab",
"ipa-getkeytab -p capsule/ [email protected] -s idm1.example.com -k /etc/foreman-proxy/dns.keytab",
"chown foreman-proxy:foreman-proxy /etc/foreman-proxy/dns.keytab",
"kinit -kt /etc/foreman-proxy/dns.keytab capsule/ [email protected]",
"grant capsule\\047 [email protected] wildcard * ANY;",
"grant capsule\\047 [email protected] wildcard * ANY;",
"satellite-installer --foreman-proxy-dns-managed=false --foreman-proxy-dns-provider=nsupdate_gss --foreman-proxy-dns-server=\" idm1.example.com \" --foreman-proxy-dns-tsig-keytab=/etc/foreman-proxy/dns.keytab --foreman-proxy-dns-tsig-principal=\"capsule/ [email protected] \" --foreman-proxy-dns=true",
"######################################################################## include \"/etc/rndc.key\"; controls { inet _IdM_Server_IP_Address_ port 953 allow { _Satellite_IP_Address_; } keys { \"rndc-key\"; }; }; ########################################################################",
"systemctl reload named",
"grant \"rndc-key\" zonesub ANY;",
"scp /etc/rndc.key root@ satellite.example.com :/etc/rndc.key",
"restorecon -v /etc/rndc.key chown -v root:named /etc/rndc.key chmod -v 640 /etc/rndc.key",
"usermod -a -G named foreman-proxy",
"satellite-installer --foreman-proxy-dns-managed=false --foreman-proxy-dns-provider=nsupdate --foreman-proxy-dns-server=\" IdM_Server_IP_Address \" --foreman-proxy-dns-ttl=86400 --foreman-proxy-dns=true --foreman-proxy-keyfile=/etc/rndc.key",
"key \"rndc-key\" { algorithm hmac-md5; secret \" secret-key ==\"; };",
"echo -e \"server 192.168.25.1\\n update add test.example.com 3600 IN A 192.168.25.20\\n send\\n\" | nsupdate -k /etc/rndc.key",
"nslookup test.example.com 192.168.25.1",
"Server: 192.168.25.1 Address: 192.168.25.1#53 Name: test.example.com Address: 192.168.25.20",
"echo -e \"server 192.168.25.1\\n update delete test.example.com 3600 IN A 192.168.25.20\\n send\\n\" | nsupdate -k /etc/rndc.key",
"nslookup test.example.com 192.168.25.1",
"satellite-installer",
"satellite-installer --foreman-proxy-dns-managed=true --foreman-proxy-dns-provider=nsupdate --foreman-proxy-dns-server=\"127.0.0.1\" --foreman-proxy-dns=true",
"satellite-maintain packages install ipa-client",
"ipa-client-install",
"foreman-prepare-realm admin realm-capsule",
"scp /root/freeipa.keytab root@ capsule.example.com :/etc/foreman-proxy/freeipa.keytab",
"mv /root/freeipa.keytab /etc/foreman-proxy chown foreman-proxy:foreman-proxy /etc/foreman-proxy/freeipa.keytab",
"satellite-installer --foreman-proxy-realm true --foreman-proxy-realm-keytab /etc/foreman-proxy/freeipa.keytab --foreman-proxy-realm-principal [email protected] --foreman-proxy-realm-provider freeipa",
"cp /etc/ipa/ca.crt /etc/pki/ca-trust/source/anchors/ipa.crt update-ca-trust enable update-ca-trust",
"systemctl restart foreman-proxy",
"ipa hostgroup-add hostgroup_name --desc= hostgroup_description",
"ipa automember-add --type=hostgroup hostgroup_name automember_rule",
"ipa automember-add-condition --key=userclass --type=hostgroup --inclusive-regex= ^webserver hostgroup_name ---------------------------------- Added condition(s) to \" hostgroup_name \" ---------------------------------- Automember Rule: automember_rule Inclusive Regex: userclass= ^webserver ---------------------------- Number of conditions added 1 ----------------------------"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/installing_satellite_server_in_a_connected_network_environment/configuring-external-services |
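Once satellite-installer has been run with the external DNS, DHCP, or TFTP options described above, it can be worth confirming from the command line that the Capsule actually advertises those features before you associate subnets and domains in the web UI. The following is a rough sketch using the hammer CLI; the Capsule ID (1) is a placeholder and the exact set of reported features depends on which services you enabled:
# List Capsules and note the ID of the one you reconfigured
hammer capsule list
# Show the features (for example DNS, DHCP, TFTP, Realm) the Capsule currently reports
hammer capsule info --id 1
# Command-line equivalent of selecting Refresh on the Capsules page
hammer capsule refresh-features --id 1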
Chapter 2. Optimizing systemd to shorten the boot time | Chapter 2. Optimizing systemd to shorten the boot time As a system administrator, you can optimize the performance of your system and shorten the boot time. You can review the services that systemd starts during boot and evaluate their necessity. Disabling certain services from starting at boot can improve the boot time of your system. 2.1. Examining system boot performance To examine system boot performance, you can use the systemd-analyze command. By using certain options, you can tune systemd to shorten the boot time. Prerequisites Optional: Before you examine systemd to tune the boot time, list all enabled services: Procedure Choose the information you want to analyze: Analyze the information about the time that the last successful boot took: Analyze the unit initialization time of each systemd unit: The output lists the units in descending order according to the time they took to initialize during the last successful boot. Identify critical units that took the longest time to initialize at the last successful boot: The output highlights in red the units that critically slow down the boot. Figure 2.1. The output of the systemd-analyze critical-chain command Additional resources systemd-analyze (1) man page 2.2. A guide to selecting services that can be safely disabled You can shorten the boot time of your system by disabling certain services that are enabled on boot by default. List enabled services: Disable a service: Certain services must stay enabled so that your operating system is safe and functions in the way you need. Refer to the following table as a guide to selecting the services that you can safely disable. The table lists all services enabled by default on a minimal installation of Red Hat Enterprise Linux. Table 2.1. Services enabled by default on a minimal installation of RHEL Service name Can it be disabled? More information auditd.service yes Disable auditd.service only if you do not need audit messages from the kernel. Be aware that if you disable auditd.service , the /var/log/audit/audit.log file is not produced. Consequently, you are not able to retroactively review some commonly-reviewed actions or events, such as user logins, service starts or password changes. Also note that auditd has two parts: a kernel part, and a service itself. By using the systemctl disable auditd command, you only disable the service, but not the kernel part. To disable system auditing in its entirety, set audit=0 on the kernel command line. autovt@.service no This service runs only when it is really needed, so it does not need to be disabled. crond.service yes Be aware that no items from crontab will run if you disable crond.service. dbus-org.fedoraproject.FirewallD1.service yes A symlink to firewalld.service dbus-org.freedesktop.NetworkManager.service yes A symlink to NetworkManager.service dbus-org.freedesktop.nm-dispatcher.service yes A symlink to NetworkManager-dispatcher.service firewalld.service yes Disable firewalld.service only if you do not need a firewall. getty@.service no This service runs only when it is really needed, so it does not need to be disabled. import-state.service yes Disable import-state.service only if you do not need to boot from network storage. irqbalance.service yes Disable irqbalance.service only if you have just one CPU. Do not disable irqbalance.service on systems with multiple CPUs. kdump.service yes Disable kdump.service only if you do not need reports from kernel crashes.
loadmodules.service yes This service is not started unless the /etc/rc.modules or /etc/sysconfig/modules directory exists, which means that it is not started on a minimal RHEL installation. lvm2-monitor.service yes Disable lvm2-monitor.service only if you do not use Logical Volume Manager (LVM). microcode.service no Do not disable this service because it provides updates of the microcode software for the CPU. NetworkManager-dispatcher.service yes Disable NetworkManager-dispatcher.service only if you do not need notifications on network configuration changes (for example in static networks). NetworkManager-wait-online.service yes Disable NetworkManager-wait-online.service only if you do not need a working network connection available right after the boot. If the service is enabled, the system does not finish the boot before the network connection is working. This may prolong the boot time significantly. NetworkManager.service yes Disable NetworkManager.service only if you do not need a connection to a network. nis-domainname.service yes Disable nis-domainname.service only if you do not use Network Information Service (NIS). rhsmcertd.service no rngd.service yes Disable rngd.service only if you do not need much entropy on your system, or you do not have any sort of hardware generator. Note that the service is necessary in environments that require a lot of good entropy, such as systems used for generation of X.509 certificates (for example the FreeIPA server). rsyslog.service yes Disable rsyslog.service only if you do not need persistent logs, or you set systemd-journald to persistent mode. selinux-autorelabel-mark.service yes Disable selinux-autorelabel-mark.service only if you do not use SELinux. sshd.service yes Disable sshd.service only if you do not need remote logins by the OpenSSH server. sssd.service yes Disable sssd.service only if there are no users who log in to the system over the network (for example by using LDAP or Kerberos). Red Hat recommends disabling all sssd-* units if you disable sssd.service . syslog.service yes An alias for rsyslog.service tuned.service yes Disable tuned.service only if you do not need to use performance tuning. lvm2-lvmpolld.socket yes Disable lvm2-lvmpolld.socket only if you do not use Logical Volume Manager (LVM). dnf-makecache.timer yes Disable dnf-makecache.timer only if you do not need your package metadata to be updated automatically. unbound-anchor.timer yes Disable unbound-anchor.timer only if you do not need a daily update of the root trust anchor for DNS Security Extensions (DNSSEC). This root trust anchor is used by the Unbound resolver and resolver library for DNSSEC validation. To find more information about a service, use one of the following commands: The systemctl cat command provides the content of the respective /usr/lib/systemd/system/ <service> service file, as well as all applicable overrides. The applicable overrides include unit file overrides from the /etc/systemd/system/ <service> file or drop-in files from a corresponding unit.type.d directory. Additional resources The systemd.unit(5) man page The systemd help command that shows the man page of a particular service 2.3. Additional resources systemctl (1) man page systemd (1) man page systemd-delta (1) man page systemd.directives (7) man page systemd.unit (5) man page systemd.service (5) man page systemd.target (5) man page systemd.kill (5) man page systemd Home Page | [
"systemctl list-unit-files --state=enabled",
"systemd-analyze",
"systemd-analyze blame",
"systemd-analyze critical-chain",
"systemctl list-unit-files --state=enabled",
"systemctl disable <service_name>",
"systemctl cat <service_name>",
"systemctl help <service_name>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/using_systemd_unit_files_to_customize_and_optimize_your_system/optimizing-systemd-to-shorten-the-boot-time_working-with-systemd |
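Before disabling anything from the table above, it helps to see the whole boot timeline and to check what still depends on the unit you are about to turn off, so that you do not break a unit you need. A minimal sketch, using NetworkManager-wait-online.service purely as an illustration:
# Render the boot sequence to an SVG timeline you can open in a browser
systemd-analyze plot > /tmp/boot-analysis.svg
# Show which units depend on the service you are considering disabling
systemctl list-dependencies --reverse NetworkManager-wait-online.service
# Disable the service and confirm its new state
systemctl disable NetworkManager-wait-online.service
systemctl status NetworkManager-wait-online.service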
7.26. clustermon | 7.26. clustermon 7.26.1. RHBA-2015:1413 - clustermon bug fix update Updated clustermon packages that fix one bug are now available for Red Hat Enterprise Linux 6. The clustermon packages are used for remote cluster management. The modclusterd service provides an abstraction of cluster status used by the Conga architecture and by the Simple Network Management (SNMP) and Common Information Model (CIM) modules of clustermon. Bug Fix BZ# 1111249 , BZ# 1114622 The internal ricci API has been extended with an ability to temporarily stop a clustered resource, which was used to resolve the BZ#1111249 enhancement request in the luci packages, documented in the RHBA-2015:20054 erratum. Users of clustermon are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-clustermon |
Chapter 1. Storage overview | Chapter 1. Storage overview MicroShift supports multiple types of storage, both for on-premise and cloud providers. You can manage container storage for persistent and non-persistent data in a Red Hat build of MicroShift cluster. 1.1. Storage types MicroShift storage is broadly classified into two categories, namely ephemeral storage and persistent storage. 1.1.1. Ephemeral storage Pods and containers are ephemeral or transient in nature and designed for stateless applications. Ephemeral storage allows administrators and developers to better manage the local storage for some of their operations. To read details about ephemeral storage, click Understanding ephemeral storage . 1.1.2. Persistent storage Stateful applications deployed in containers require persistent storage. MicroShift uses a pre-provisioned storage framework called persistent volumes (PV) to allow cluster administrators to provision persistent storage. The data inside these volumes can exist beyond the lifecycle of an individual pod. Developers can use persistent volume claims (PVCs) to request storage requirements. For persistent storage details, read Understanding persistent storage . 1.1.3. Dynamic storage provisioning Using dynamic provisioning allows you to create storage volumes on-demand, eliminating the need for pre-provisioned storage. For more information about how dynamic provisioning works in Red Hat build of MicroShift, read Dynamic provisioning . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/storage/storage-overview-microshift |
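To make the persistent volume claim workflow concrete, the following is a minimal sketch of requesting dynamically provisioned storage with the oc client; the claim name, the 1Gi size, and the assumption that the cluster has a default storage class are illustrative rather than taken from the MicroShift documentation:
# Check which storage classes are available (one is usually marked as default)
oc get storageclass
# Create a claim; the provisioner creates the backing persistent volume on demand
oc apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
# Verify that the claim is bound to a dynamically provisioned volume
oc get pvc example-claim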
3.3. NFS Share Setup | 3.3. NFS Share Setup The following procedure configures the NFS share for the NFS daemon failover. You need to perform this procedure on only one node in the cluster. Create the /nfsshare directory. Mount the ext4 file system that you created in Section 3.2, "Configuring an LVM Volume with an ext4 File System" on the /nfsshare directory. Create an exports directory tree on the /nfsshare directory. Place files in the exports directory for the NFS clients to access. For this example, we are creating test files named clientdatafile1 and clientdatafile2 . Unmount the ext4 file system and deactivate the LVM volume group. | [
"mkdir /nfsshare",
"mount /dev/my_vg/my_lv /nfsshare",
"mkdir -p /nfsshare/exports mkdir -p /nfsshare/exports/export1 mkdir -p /nfsshare/exports/export2",
"touch /nfsshare/exports/export1/clientdatafile1 touch /nfsshare/exports/export2/clientdatafile2",
"umount /dev/my_vg/my_lv vgchange -an my_vg"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_administration/s1-nfssharesetup-haaa |
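If you want to double-check the export tree before the cluster takes over the volume, you can remount it briefly and list the client data files; this optional verification sketch reuses the volume and directory names from the procedure above:
# Temporarily remount the volume and review the export tree
mount /dev/my_vg/my_lv /nfsshare
find /nfsshare/exports -type f
# Unmount and deactivate the volume group again so the cluster can manage it
umount /dev/my_vg/my_lv
vgchange -an my_vg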
Chapter 227. Mock Component | Chapter 227. Mock Component Available as of Camel version 1.0 Testing of distributed and asynchronous processing is notoriously difficult. The Mock , Test and DataSet endpoints work great with the Camel Testing Framework to simplify your unit and integration testing using Enterprise Integration Patterns and Camel's large range of Components together with the powerful Bean Integration. The Mock component provides a powerful declarative testing mechanism, which is similar to jMock in that it allows declarative expectations to be created on any Mock endpoint before a test begins. Then the test is run, which typically fires messages to one or more endpoints, and finally the expectations can be asserted in a test case to ensure the system worked as expected. This allows you to test various things like: The correct number of messages are received on each endpoint, The correct payloads are received, in the right order, Messages arrive on an endpoint in order, using some Expression to create an order testing function, Messages arrive match some kind of Predicate such as that specific headers have certain values, or that parts of the messages match some predicate, such as by evaluating an XPath or XQuery Expression. Note There is also the Test endpoint which is a Mock endpoint, but which uses a second endpoint to provide the list of expected message bodies and automatically sets up the Mock endpoint assertions. In other words, it's a Mock endpoint that automatically sets up its assertions from some sample messages in a File or database , for example. Caution Mock endpoints keep received Exchanges in memory indefinitely. Remember that Mock is designed for testing. When you add Mock endpoints to a route, each Exchange sent to the endpoint will be stored (to allow for later validation) in memory until explicitly reset or the JVM is restarted. If you are sending high volume and/or large messages, this may cause excessive memory use. If your goal is to test deployable routes inline, consider using NotifyBuilder or AdviceWith in your tests instead of adding Mock endpoints to routes directly. From Camel 2.10 onwards there are two new options retainFirst , and retainLast that can be used to limit the number of messages the Mock endpoints keep in memory. 227.1. URI format Where someName can be any string that uniquely identifies the endpoint. You can append query options to the URI in the following format, ?option=value&option=value&... 227.2. Options The Mock component has no options. The Mock endpoint is configured using URI syntax: with the following path and query parameters: 227.2.1. Path Parameters (1 parameters): Name Description Default Type name Required Name of mock endpoint String 227.2.2. Query Parameters (10 parameters): Name Description Default Type assertPeriod (producer) Sets a grace period after which the mock endpoint will re-assert to ensure the preliminary assertion is still valid. This is used for example to assert that exactly a number of messages arrives. For example if expectedMessageCount(int) was set to 5, then the assertion is satisfied when 5 or more message arrives. To ensure that exactly 5 messages arrives, then you would need to wait a little period to ensure no further message arrives. This is what you can use this setAssertPeriod(long) method for. By default this period is disabled. 0 long expectedCount (producer) Specifies the expected number of message exchanges that should be received by this endpoint. 
Beware: If you want to expect that 0 messages, then take extra care, as 0 matches when the tests starts, so you need to set a assert period time to let the test run for a while to make sure there are still no messages arrived; for that use setAssertPeriod(long). An alternative is to use NotifyBuilder, and use the notifier to know when Camel is done routing some messages, before you call the assertIsSatisfied() method on the mocks. This allows you to not use a fixed assert period, to speedup testing times. If you want to assert that exactly n'th message arrives to this mock endpoint, then see also the setAssertPeriod(long) method for further details. -1 int reportGroup (producer) A number that is used to turn on throughput logging based on groups of the size. int resultMinimumWaitTime (producer) Sets the minimum expected amount of time (in millis) the assertIsSatisfied() will wait on a latch until it is satisfied 0 long resultWaitTime (producer) Sets the maximum amount of time (in millis) the assertIsSatisfied() will wait on a latch until it is satisfied 0 long retainFirst (producer) Specifies to only retain the first n'th number of received Exchanges. This is used when testing with big data, to reduce memory consumption by not storing copies of every Exchange this mock endpoint receives. Important: When using this limitation, then the getReceivedCounter() will still return the actual number of received Exchanges. For example if we have received 5000 Exchanges, and have configured to only retain the first 10 Exchanges, then the getReceivedCounter() will still return 5000 but there is only the first 10 Exchanges in the getExchanges() and getReceivedExchanges() methods. When using this method, then some of the other expectation methods is not supported, for example the expectedBodiesReceived(Object... ) sets a expectation on the first number of bodies received. You can configure both setRetainFirst(int) and setRetainLast(int) methods, to limit both the first and last received. -1 int retainLast (producer) Specifies to only retain the last n'th number of received Exchanges. This is used when testing with big data, to reduce memory consumption by not storing copies of every Exchange this mock endpoint receives. Important: When using this limitation, then the getReceivedCounter() will still return the actual number of received Exchanges. For example if we have received 5000 Exchanges, and have configured to only retain the last 20 Exchanges, then the getReceivedCounter() will still return 5000 but there is only the last 20 Exchanges in the getExchanges() and getReceivedExchanges() methods. When using this method, then some of the other expectation methods is not supported, for example the expectedBodiesReceived(Object... ) sets a expectation on the first number of bodies received. You can configure both setRetainFirst(int) and setRetainLast(int) methods, to limit both the first and last received. -1 int sleepForEmptyTest (producer) Allows a sleep to be specified to wait to check that this endpoint really is empty when expectedMessageCount(int) is called with zero 0 long copyOnExchange (producer) Sets whether to make a deep copy of the incoming Exchange when received at this mock endpoint. Is by default true. true boolean synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 227.3. Simple Example Here's a simple example of Mock endpoint in use. 
First, the endpoint is resolved on the context. Then we set an expectation, and then, after the test has run, we assert that our expectations have been met: MockEndpoint resultEndpoint = context.resolveEndpoint("mock:foo", MockEndpoint.class); // set expectations resultEndpoint.expectedMessageCount(2); // send some messages // now lets assert that the mock:foo endpoint received 2 messages resultEndpoint.assertIsSatisfied(); You typically always call the assertIsSatisfied() method to test that the expectations were met after running a test. Camel will by default wait 10 seconds when the assertIsSatisfied() is invoked. This can be configured by setting the setResultWaitTime(millis) method. 227.4. Using assertPeriod Available as of Camel 2.7 When the assertion is satisfied then Camel will stop waiting and continue from the assertIsSatisfied method. That means if a new message arrives on the mock endpoint, just a bit later, that arrival will not affect the outcome of the assertion. Suppose you do want to test that no new messages arrives after a period thereafter, then you can do that by setting the setAssertPeriod method, for example: MockEndpoint resultEndpoint = context.resolveEndpoint("mock:foo", MockEndpoint.class); resultEndpoint.setAssertPeriod(5000); resultEndpoint.expectedMessageCount(2); // send some messages // now lets assert that the mock:foo endpoint received 2 messages resultEndpoint.assertIsSatisfied(); 227.5. Setting expectations You can see from the Javadoc of MockEndpoint the various helper methods you can use to set expectations. The main methods are as follows: Method Description expectedMessageCount(int) To define the expected message count on the endpoint. expectedMinimumMessageCount(int) To define the minimum number of expected messages on the endpoint. expectedBodiesReceived(... ) To define the expected bodies that should be received (in order). expectedHeaderReceived(... ) To define the expected header that should be received expectsAscending(Expression) To add an expectation that messages are received in order, using the given Expression to compare messages. expectsDescending(Expression) To add an expectation that messages are received in order, using the given Expression to compare messages. expectsNoDuplicates(Expression) To add an expectation that no duplicate messages are received; using an Expression to calculate a unique identifier for each message. This could be something like the JMSMessageID if using JMS, or some unique reference number within the message. Here's another example: resultEndpoint.expectedBodiesReceived("firstMessageBody", "secondMessageBody", "thirdMessageBody"); 227.6. Adding expectations to specific messages In addition, you can use the message(int messageIndex) method to add assertions about a specific message that is received. For example, to add expectations of the headers or body of the first message (using zero-based indexing like java.util.List ), you can use the following code: resultEndpoint.message(0).header("foo").isEqualTo("bar"); There are some examples of the Mock endpoint in use in the camel-core processor tests . 227.7. Mocking existing endpoints Available as of Camel 2.7 Camel now allows you to automatically mock existing endpoints in your Camel routes. Note How it works The endpoints are still in action. What happens differently is that a Mock endpoint is injected and receives the message first and then delegates the message to the target endpoint. You can view this as a kind of intercept and delegate or endpoint listener. 
Suppose you have the given route below: Route @Override protected RouteBuilder createRouteBuilder() throws Exception { return new RouteBuilder() { @Override public void configure() throws Exception { from("direct:start").to("direct:foo").to("log:foo").to("mock:result"); from("direct:foo").transform(constant("Bye World")); } }; } You can then use the adviceWith feature in Camel to mock all the endpoints in a given route from your unit test, as shown below: adviceWith mocking all endpoints public void testAdvisedMockEndpoints() throws Exception { // advice the first route using the inlined AdviceWith route builder // which has extended capabilities than the regular route builder context.getRouteDefinitions().get(0).adviceWith(context, new AdviceWithRouteBuilder() { @Override public void configure() throws Exception { // mock all endpoints mockEndpoints(); } }); getMockEndpoint("mock:direct:start").expectedBodiesReceived("Hello World"); getMockEndpoint("mock:direct:foo").expectedBodiesReceived("Hello World"); getMockEndpoint("mock:log:foo").expectedBodiesReceived("Bye World"); getMockEndpoint("mock:result").expectedBodiesReceived("Bye World"); template.sendBody("direct:start", "Hello World"); assertMockEndpointsSatisfied(); // additional test to ensure correct endpoints in registry assertNotNull(context.hasEndpoint("direct:start")); assertNotNull(context.hasEndpoint("direct:foo")); assertNotNull(context.hasEndpoint("log:foo")); assertNotNull(context.hasEndpoint("mock:result")); // all the endpoints was mocked assertNotNull(context.hasEndpoint("mock:direct:start")); assertNotNull(context.hasEndpoint("mock:direct:foo")); assertNotNull(context.hasEndpoint("mock:log:foo")); } Notice that the mock endpoints is given the URI mock:<endpoint> , for example mock:direct:foo . Camel logs at INFO level the endpoints being mocked: Note Mocked endpoints are without parameters Endpoints which are mocked will have their parameters stripped off. For example the endpoint log:foo?showAll=true will be mocked to the following endpoint mock:log:foo . Notice the parameters have been removed. Its also possible to only mock certain endpoints using a pattern. For example to mock all log endpoints you do as shown: adviceWith mocking only log endpoints using a pattern public void testAdvisedMockEndpointsWithPattern() throws Exception { // advice the first route using the inlined AdviceWith route builder // which has extended capabilities than the regular route builder context.getRouteDefinitions().get(0).adviceWith(context, new AdviceWithRouteBuilder() { @Override public void configure() throws Exception { // mock only log endpoints mockEndpoints("log*"); } }); // now we can refer to log:foo as a mock and set our expectations getMockEndpoint("mock:log:foo").expectedBodiesReceived("Bye World"); getMockEndpoint("mock:result").expectedBodiesReceived("Bye World"); template.sendBody("direct:start", "Hello World"); assertMockEndpointsSatisfied(); // additional test to ensure correct endpoints in registry assertNotNull(context.hasEndpoint("direct:start")); assertNotNull(context.hasEndpoint("direct:foo")); assertNotNull(context.hasEndpoint("log:foo")); assertNotNull(context.hasEndpoint("mock:result")); // only the log:foo endpoint was mocked assertNotNull(context.hasEndpoint("mock:log:foo")); assertNull(context.hasEndpoint("mock:direct:start")); assertNull(context.hasEndpoint("mock:direct:foo")); } The pattern supported can be a wildcard or a regular expression. 
See more details about this at Intercept as its the same matching function used by Camel. Note Mind that mocking endpoints causes the messages to be copied when they arrive on the mock. That means Camel will use more memory. This may not be suitable when you send in a lot of messages. 227.8. Mocking existing endpoints using the camel-test component Instead of using the adviceWith to instruct Camel to mock endpoints, you can easily enable this behavior when using the camel-test Test Kit. The same route can be tested as follows. Notice that we return "*" from the isMockEndpoints method, which tells Camel to mock all endpoints. If you only want to mock all log endpoints you can return "log*" instead. isMockEndpoints using camel-test kit public class IsMockEndpointsJUnit4Test extends CamelTestSupport { @Override public String isMockEndpoints() { // override this method and return the pattern for which endpoints to mock. // use * to indicate all return "*"; } @Test public void testMockAllEndpoints() throws Exception { // notice we have automatic mocked all endpoints and the name of the endpoints is "mock:uri" getMockEndpoint("mock:direct:start").expectedBodiesReceived("Hello World"); getMockEndpoint("mock:direct:foo").expectedBodiesReceived("Hello World"); getMockEndpoint("mock:log:foo").expectedBodiesReceived("Bye World"); getMockEndpoint("mock:result").expectedBodiesReceived("Bye World"); template.sendBody("direct:start", "Hello World"); assertMockEndpointsSatisfied(); // additional test to ensure correct endpoints in registry assertNotNull(context.hasEndpoint("direct:start")); assertNotNull(context.hasEndpoint("direct:foo")); assertNotNull(context.hasEndpoint("log:foo")); assertNotNull(context.hasEndpoint("mock:result")); // all the endpoints was mocked assertNotNull(context.hasEndpoint("mock:direct:start")); assertNotNull(context.hasEndpoint("mock:direct:foo")); assertNotNull(context.hasEndpoint("mock:log:foo")); } @Override protected RouteBuilder createRouteBuilder() throws Exception { return new RouteBuilder() { @Override public void configure() throws Exception { from("direct:start").to("direct:foo").to("log:foo").to("mock:result"); from("direct:foo").transform(constant("Bye World")); } }; } } 227.9. Mocking existing endpoints with XML DSL If you do not use the camel-test component for unit testing (as shown above) you can use a different approach when using XML files for routes. The solution is to create a new XML file used by the unit test and then include the intended XML file which has the route you want to test. 
Suppose we have the route in the camel-route.xml file: camel-route.xml 1 <!-- this camel route is in the camel-route.xml file --> <camelContext xmlns="http://camel.apache.org/schema/spring"> <route> <from uri="direct:start"/> <to uri="direct:foo"/> <to uri="log:foo"/> <to uri="mock:result"/> </route> <route> <from uri="direct:foo"/> <transform> <constant>Bye World</constant> </transform> </route> </camelContext> Then we create a new XML file as follows, where we include the camel-route.xml file and define a spring bean with the class org.apache.camel.impl.InterceptSendToMockEndpointStrategy which tells Camel to mock all endpoints: test-camel-route.xml <!-- the Camel route is defined in another XML file --> <import resource="camel-route.xml"/> <!-- bean which enables mocking all endpoints --> <bean id="mockAllEndpoints" class="org.apache.camel.component.mock.InterceptSendToMockEndpointStrategy"/> Then in your unit test you load the new XML file ( test-camel-route.xml ) instead of camel-route.xml . To only mock all Log endpoints you can define the pattern in the constructor for the bean: <bean id="mockAllEndpoints" class="org.apache.camel.impl.InterceptSendToMockEndpointStrategy"> <constructor-arg index="0" value="log*"/> </bean> 227.10. Mocking endpoints and skip sending to original endpoint Available as of Camel 2.10 Sometimes you want to easily mock and skip sending to a certain endpoints. So the message is detoured and send to the mock endpoint only. From Camel 2.10 onwards you can now use the mockEndpointsAndSkip method using AdviceWith or the Test Kit . The example below will skip sending to the two endpoints "direct:foo" , and "direct:bar" . adviceWith mock and skip sending to endpoints public void testAdvisedMockEndpointsWithSkip() throws Exception { // advice the first route using the inlined AdviceWith route builder // which has extended capabilities than the regular route builder context.getRouteDefinitions().get(0).adviceWith(context, new AdviceWithRouteBuilder() { @Override public void configure() throws Exception { // mock sending to direct:foo and direct:bar and skip send to it mockEndpointsAndSkip("direct:foo", "direct:bar"); } }); getMockEndpoint("mock:result").expectedBodiesReceived("Hello World"); getMockEndpoint("mock:direct:foo").expectedMessageCount(1); getMockEndpoint("mock:direct:bar").expectedMessageCount(1); template.sendBody("direct:start", "Hello World"); assertMockEndpointsSatisfied(); // the message was not send to the direct:foo route and thus not sent to the seda endpoint SedaEndpoint seda = context.getEndpoint("seda:foo", SedaEndpoint.class); assertEquals(0, seda.getCurrentQueueSize()); } The same example using the Test Kit isMockEndpointsAndSkip using camel-test kit public class IsMockEndpointsAndSkipJUnit4Test extends CamelTestSupport { @Override public String isMockEndpointsAndSkip() { // override this method and return the pattern for which endpoints to mock, // and skip sending to the original endpoint. 
return "direct:foo"; } @Test public void testMockEndpointAndSkip() throws Exception { // notice we have automatic mocked the direct:foo endpoints and the name of the endpoints is "mock:uri" getMockEndpoint("mock:result").expectedBodiesReceived("Hello World"); getMockEndpoint("mock:direct:foo").expectedMessageCount(1); template.sendBody("direct:start", "Hello World"); assertMockEndpointsSatisfied(); // the message was not send to the direct:foo route and thus not sent to the seda endpoint SedaEndpoint seda = context.getEndpoint("seda:foo", SedaEndpoint.class); assertEquals(0, seda.getCurrentQueueSize()); } @Override protected RouteBuilder createRouteBuilder() throws Exception { return new RouteBuilder() { @Override public void configure() throws Exception { from("direct:start").to("direct:foo").to("mock:result"); from("direct:foo").transform(constant("Bye World")).to("seda:foo"); } }; } } 227.11. Limiting the number of messages to keep Available as of Camel 2.10 The Mock endpoints will by default keep a copy of every Exchange that it received. So if you test with a lot of messages, then it will consume memory. From Camel 2.10 onwards we have introduced two options retainFirst and retainLast that can be used to specify to only keep N'th of the first and/or last Exchanges. For example in the code below, we only want to retain a copy of the first 5 and last 5 Exchanges the mock receives. MockEndpoint mock = getMockEndpoint("mock:data"); mock.setRetainFirst(5); mock.setRetainLast(5); mock.expectedMessageCount(2000); mock.assertIsSatisfied(); Using this has some limitations. The getExchanges() and getReceivedExchanges() methods on the MockEndpoint will return only the retained copies of the Exchanges. So in the example above, the list will contain 10 Exchanges; the first five, and the last five. The retainFirst and retainLast options also have limitations on which expectation methods you can use. For example the expectedXXX methods that work on message bodies, headers, etc. will only operate on the retained messages. In the example above they can test only the expectations on the 10 retained messages. 227.12. Testing with arrival times Available as of Camel 2.7 The Mock endpoint stores the arrival time of the message as a property on the Exchange. Date time = exchange.getProperty(Exchange.RECEIVED_TIMESTAMP, Date.class); You can use this information to know when the message arrived on the mock. But it also provides foundation to know the time interval between the and message arrived on the mock. You can use this to set expectations using the arrives DSL on the Mock endpoint. For example to say that the first message should arrive between 0-2 seconds before the you can do: mock.message(0).arrives().noLaterThan(2).seconds().beforeNext(); You can also define this as that 2nd message (0 index based) should arrive no later than 0-2 seconds after the : mock.message(1).arrives().noLaterThan(2).seconds().afterPrevious(); You can also use between to set a lower bound. For example suppose that it should be between 1-4 seconds: mock.message(1).arrives().between(1, 4).seconds().afterPrevious(); You can also set the expectation on all messages, for example to say that the gap between them should be at most 1 second: mock.allMessages().arrives().noLaterThan(1).seconds().beforeNext(); Tip Time units In the example above we use seconds as the time unit, but Camel offers milliseconds , and minutes as well. 227.13. See Also Spring Testing Testing | [
"mock:someName[?options]",
"mock:name",
"MockEndpoint resultEndpoint = context.resolveEndpoint(\"mock:foo\", MockEndpoint.class); // set expectations resultEndpoint.expectedMessageCount(2); // send some messages // now lets assert that the mock:foo endpoint received 2 messages resultEndpoint.assertIsSatisfied();",
"MockEndpoint resultEndpoint = context.resolveEndpoint(\"mock:foo\", MockEndpoint.class); resultEndpoint.setAssertPeriod(5000); resultEndpoint.expectedMessageCount(2); // send some messages // now lets assert that the mock:foo endpoint received 2 messages resultEndpoint.assertIsSatisfied();",
"resultEndpoint.expectedBodiesReceived(\"firstMessageBody\", \"secondMessageBody\", \"thirdMessageBody\");",
"resultEndpoint.message(0).header(\"foo\").isEqualTo(\"bar\");",
"@Override protected RouteBuilder createRouteBuilder() throws Exception { return new RouteBuilder() { @Override public void configure() throws Exception { from(\"direct:start\").to(\"direct:foo\").to(\"log:foo\").to(\"mock:result\"); from(\"direct:foo\").transform(constant(\"Bye World\")); } }; }",
"public void testAdvisedMockEndpoints() throws Exception { // advice the first route using the inlined AdviceWith route builder // which has extended capabilities than the regular route builder context.getRouteDefinitions().get(0).adviceWith(context, new AdviceWithRouteBuilder() { @Override public void configure() throws Exception { // mock all endpoints mockEndpoints(); } }); getMockEndpoint(\"mock:direct:start\").expectedBodiesReceived(\"Hello World\"); getMockEndpoint(\"mock:direct:foo\").expectedBodiesReceived(\"Hello World\"); getMockEndpoint(\"mock:log:foo\").expectedBodiesReceived(\"Bye World\"); getMockEndpoint(\"mock:result\").expectedBodiesReceived(\"Bye World\"); template.sendBody(\"direct:start\", \"Hello World\"); assertMockEndpointsSatisfied(); // additional test to ensure correct endpoints in registry assertNotNull(context.hasEndpoint(\"direct:start\")); assertNotNull(context.hasEndpoint(\"direct:foo\")); assertNotNull(context.hasEndpoint(\"log:foo\")); assertNotNull(context.hasEndpoint(\"mock:result\")); // all the endpoints was mocked assertNotNull(context.hasEndpoint(\"mock:direct:start\")); assertNotNull(context.hasEndpoint(\"mock:direct:foo\")); assertNotNull(context.hasEndpoint(\"mock:log:foo\")); }",
"INFO Adviced endpoint [direct://foo] with mock endpoint [mock:direct:foo]",
"public void testAdvisedMockEndpointsWithPattern() throws Exception { // advice the first route using the inlined AdviceWith route builder // which has extended capabilities than the regular route builder context.getRouteDefinitions().get(0).adviceWith(context, new AdviceWithRouteBuilder() { @Override public void configure() throws Exception { // mock only log endpoints mockEndpoints(\"log*\"); } }); // now we can refer to log:foo as a mock and set our expectations getMockEndpoint(\"mock:log:foo\").expectedBodiesReceived(\"Bye World\"); getMockEndpoint(\"mock:result\").expectedBodiesReceived(\"Bye World\"); template.sendBody(\"direct:start\", \"Hello World\"); assertMockEndpointsSatisfied(); // additional test to ensure correct endpoints in registry assertNotNull(context.hasEndpoint(\"direct:start\")); assertNotNull(context.hasEndpoint(\"direct:foo\")); assertNotNull(context.hasEndpoint(\"log:foo\")); assertNotNull(context.hasEndpoint(\"mock:result\")); // only the log:foo endpoint was mocked assertNotNull(context.hasEndpoint(\"mock:log:foo\")); assertNull(context.hasEndpoint(\"mock:direct:start\")); assertNull(context.hasEndpoint(\"mock:direct:foo\")); }",
"public class IsMockEndpointsJUnit4Test extends CamelTestSupport { @Override public String isMockEndpoints() { // override this method and return the pattern for which endpoints to mock. // use * to indicate all return \"*\"; } @Test public void testMockAllEndpoints() throws Exception { // notice we have automatic mocked all endpoints and the name of the endpoints is \"mock:uri\" getMockEndpoint(\"mock:direct:start\").expectedBodiesReceived(\"Hello World\"); getMockEndpoint(\"mock:direct:foo\").expectedBodiesReceived(\"Hello World\"); getMockEndpoint(\"mock:log:foo\").expectedBodiesReceived(\"Bye World\"); getMockEndpoint(\"mock:result\").expectedBodiesReceived(\"Bye World\"); template.sendBody(\"direct:start\", \"Hello World\"); assertMockEndpointsSatisfied(); // additional test to ensure correct endpoints in registry assertNotNull(context.hasEndpoint(\"direct:start\")); assertNotNull(context.hasEndpoint(\"direct:foo\")); assertNotNull(context.hasEndpoint(\"log:foo\")); assertNotNull(context.hasEndpoint(\"mock:result\")); // all the endpoints was mocked assertNotNull(context.hasEndpoint(\"mock:direct:start\")); assertNotNull(context.hasEndpoint(\"mock:direct:foo\")); assertNotNull(context.hasEndpoint(\"mock:log:foo\")); } @Override protected RouteBuilder createRouteBuilder() throws Exception { return new RouteBuilder() { @Override public void configure() throws Exception { from(\"direct:start\").to(\"direct:foo\").to(\"log:foo\").to(\"mock:result\"); from(\"direct:foo\").transform(constant(\"Bye World\")); } }; } }",
"<!-- this camel route is in the camel-route.xml file --> <camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:start\"/> <to uri=\"direct:foo\"/> <to uri=\"log:foo\"/> <to uri=\"mock:result\"/> </route> <route> <from uri=\"direct:foo\"/> <transform> <constant>Bye World</constant> </transform> </route> </camelContext>",
"<!-- the Camel route is defined in another XML file --> <import resource=\"camel-route.xml\"/> <!-- bean which enables mocking all endpoints --> <bean id=\"mockAllEndpoints\" class=\"org.apache.camel.component.mock.InterceptSendToMockEndpointStrategy\"/>",
"<bean id=\"mockAllEndpoints\" class=\"org.apache.camel.impl.InterceptSendToMockEndpointStrategy\"> <constructor-arg index=\"0\" value=\"log*\"/> </bean>",
"public void testAdvisedMockEndpointsWithSkip() throws Exception { // advice the first route using the inlined AdviceWith route builder // which has extended capabilities than the regular route builder context.getRouteDefinitions().get(0).adviceWith(context, new AdviceWithRouteBuilder() { @Override public void configure() throws Exception { // mock sending to direct:foo and direct:bar and skip send to it mockEndpointsAndSkip(\"direct:foo\", \"direct:bar\"); } }); getMockEndpoint(\"mock:result\").expectedBodiesReceived(\"Hello World\"); getMockEndpoint(\"mock:direct:foo\").expectedMessageCount(1); getMockEndpoint(\"mock:direct:bar\").expectedMessageCount(1); template.sendBody(\"direct:start\", \"Hello World\"); assertMockEndpointsSatisfied(); // the message was not send to the direct:foo route and thus not sent to the seda endpoint SedaEndpoint seda = context.getEndpoint(\"seda:foo\", SedaEndpoint.class); assertEquals(0, seda.getCurrentQueueSize()); }",
"public class IsMockEndpointsAndSkipJUnit4Test extends CamelTestSupport { @Override public String isMockEndpointsAndSkip() { // override this method and return the pattern for which endpoints to mock, // and skip sending to the original endpoint. return \"direct:foo\"; } @Test public void testMockEndpointAndSkip() throws Exception { // notice we have automatic mocked the direct:foo endpoints and the name of the endpoints is \"mock:uri\" getMockEndpoint(\"mock:result\").expectedBodiesReceived(\"Hello World\"); getMockEndpoint(\"mock:direct:foo\").expectedMessageCount(1); template.sendBody(\"direct:start\", \"Hello World\"); assertMockEndpointsSatisfied(); // the message was not send to the direct:foo route and thus not sent to the seda endpoint SedaEndpoint seda = context.getEndpoint(\"seda:foo\", SedaEndpoint.class); assertEquals(0, seda.getCurrentQueueSize()); } @Override protected RouteBuilder createRouteBuilder() throws Exception { return new RouteBuilder() { @Override public void configure() throws Exception { from(\"direct:start\").to(\"direct:foo\").to(\"mock:result\"); from(\"direct:foo\").transform(constant(\"Bye World\")).to(\"seda:foo\"); } }; } }",
"MockEndpoint mock = getMockEndpoint(\"mock:data\"); mock.setRetainFirst(5); mock.setRetainLast(5); mock.expectedMessageCount(2000); mock.assertIsSatisfied();",
"Date time = exchange.getProperty(Exchange.RECEIVED_TIMESTAMP, Date.class);",
"mock.message(0).arrives().noLaterThan(2).seconds().beforeNext();",
"mock.message(1).arrives().noLaterThan(2).seconds().afterPrevious();",
"mock.message(1).arrives().between(1, 4).seconds().afterPrevious();",
"mock.allMessages().arrives().noLaterThan(1).seconds().beforeNext();"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/mock-component |
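Supplementary note on the Mock component chapter above: the chapter recommends NotifyBuilder as a lighter-weight alternative to adding Mock endpoints to deployable routes, but does not show it in use. The sketch below is an illustrative assumption rather than an example taken from the chapter; the endpoint name direct:start, the expected exchange count, and the timeout are placeholder choices.

import java.util.concurrent.TimeUnit;

import org.apache.camel.CamelContext;
import org.apache.camel.ProducerTemplate;
import org.apache.camel.builder.NotifyBuilder;

public class NotifyBuilderSketch {

    // Wait for Camel to finish routing without adding mock endpoints to the route itself.
    public static void waitForOneExchange(CamelContext context, ProducerTemplate template) {
        // Match when one exchange is done (completed or failed) anywhere in the CamelContext.
        NotifyBuilder notify = new NotifyBuilder(context)
                .whenDone(1)
                .create();

        // Trigger the route under test (direct:start is an assumed endpoint name).
        template.sendBody("direct:start", "Hello World");

        // Block for up to 10 seconds until the condition matches.
        boolean done = notify.matches(10, TimeUnit.SECONDS);
        if (!done) {
            throw new IllegalStateException("Exchange was not routed within 10 seconds");
        }
    }
}

Because the notifier only counts exchanges, no copies are retained in memory, which avoids the memory concern the chapter raises for Mock endpoints on high-volume routes.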
Chapter 23. Boot Options | Chapter 23. Boot Options The Red Hat Enterprise Linux installation system includes a range of boot options for administrators, which modify the default behavior of the installation program by enabling (or disabling) certain functions. To use boot options, append them to the boot command line, as described in Section 23.1, "Configuring the Installation System at the Boot Menu" . Multiple options added to the boot line need to be separated by a single space. There are two basic types of options described in this chapter: Options presented as ending with an "equals" sign ( = ) require a value to be specified - they cannot be used on their own. For example, the inst.vncpassword= option must also contain a value (in this case, a password). The correct form is therefore inst.vncpassword= password . On its own, without a password specified, the option is invalid. Options presented without the " = " sign do not accept any values or parameters. For example, the rd.live.check option forces Anaconda to verify the installation media before starting the installation; if this option is present, the check will be performed, and if it is not present, the check will be skipped. 23.1. Configuring the Installation System at the Boot Menu Note The exact way to specify custom boot options is different on every system architecture. For architecture-specific instructions about editing boot options, see: Section 7.2, "The Boot Menu" for 64-bit AMD, Intel and ARM systems Section 12.1, "The Boot Menu" for IBM Power Systems servers Chapter 21, Parameter and Configuration Files on IBM Z for IBM Z There are several different ways to edit boot options at the boot menu (that is, the menu which appears after you boot the installation media): The boot: prompt, accessed by pressing the Esc key anywhere in the boot menu. When using this prompt, the first option must always specify the installation program image file to be loaded. In most cases, the image can be specified using the linux keyword. After that, additional options can be specified as needed. Pressing the Tab key at this prompt will display help in the form of usable commands where applicable. To start the installation with your options, press the Enter key. To return from the boot: prompt to the boot menu, restart the computer and boot from the installation media again. The > prompt on BIOS-based AMD64 and Intel 64 systems, accessed by highlighting an entry in the boot menu and pressing the Tab key. Unlike the boot: prompt, this prompt allows you to edit a predefined set of boot options. For example, if you highlight the entry labeled Test this media & install Red Hat Enterprise Linux 7.5 , a full set of options used by this menu entry will be displayed on the prompt, allowing you to add your own options. Pressing Enter will start the installation using the options you specified. To cancel editing and return to the boot menu, press the Esc key at any time. The GRUB2 menu on UEFI-based 64-bit AMD, Intel and ARM systems. If your system uses UEFI, you can edit boot options by highlighting an entry and pressing the e key. When you finish editing, press F10 or Ctrl + X to start the installation using the options you specified. In addition to the options described in this chapter, the boot prompt also accepts dracut kernel options. A list of these options is available as the dracut.cmdline(7) man page. Note Boot options specific to the installation program always start with inst. in this guide. 
Currently, this prefix is optional, for example, resolution=1024x768 will work exactly the same as inst.resolution=1024x768 . However, it is expected that the inst. prefix will be mandatory in future releases. Specifying the Installation Source inst.repo= Specifies the installation source - that is, a location where the installation program can find the images and packages it requires. For example: The target must be either: an installable tree, which is a directory structure containing the installation program's images, packages and repodata as well as a valid .treeinfo file a DVD (a physical disk present in the system's DVD drive) an ISO image of the full Red Hat Enterprise Linux installation DVD, placed on a hard drive or a network location accessible from the installation system (requires specifying NFS Server as the installation source) This option allows for the configuration of different installation methods using different formats. The syntax is described in the table below. Table 23.1. Installation Sources Installation source Option format Any CD/DVD drive inst.repo=cdrom Specific CD/DVD drive inst.repo=cdrom: device Hard Drive inst.repo=hd: device :/ path HMC inst.repo=hmc HTTP Server inst.repo=http:// host / path HTTPS Server inst.repo=https:// host / path FTP Server inst.repo=ftp:// username : password @ host / path NFS Server inst.repo=nfs:[ options :] server :/ path [a] [a] This option uses NFS protocol version 3 by default. To use a different version, add nfsvers= X to options , replacing X with the version number that you want to use. Note In releases of Red Hat Enterprise Linux, there were separate options for an installable tree accessible by NFS (the nfs option) and an ISO image located on an NFS source (the nfsiso option). In Red Hat Enterprise Linux 7, the installation program can automatically detect whether the source is an installable tree or a directory containing an ISO image, and the nfsiso option is deprecated. Disk device names can be set using the following formats: Kernel device name, for example /dev/sda1 or sdb2 File system label, for example LABEL=Flash or LABEL=RHEL7 File system UUID, for example UUID=8176c7bf-04ff-403a-a832-9557f94e61db Non-alphanumeric characters must be represented as \x NN , where NN is the hexadecimal representation of the character. For example, \x20 is a white space (" "). inst.stage2= Specifies the location of the installation program runtime image to be loaded. The syntax is the same as in Specifying the Installation Source . This option expects a path to a directory containing a valid .treeinfo file; the location of the runtime image will be read from this file if found. If a .treeinfo file is not available, Anaconda will try to load the image from LiveOS/squashfs.img . Use the option multiple times to specify multiple HTTP, HTTPS or FTP sources. Note By default, the inst.stage2= boot option is used on the installation media and set to a specific label (for example, inst.stage2=hd:LABEL=RHEL7\x20Server.x86_64 ). If you modify the default label of the file system containing the runtime image, or if you use a customized procedure to boot the installation system, you must ensure this option is set to the correct value. inst.dd= If you need to perform a driver update during the installation, use the inst.dd= option. It can be used multiple times. The location of a driver RPM package can be specified using any of the formats detailed in Specifying the Installation Source . 
With the exception of the inst.dd=cdrom option, the device name must always be specified. For example: Using this option without any parameters (only as inst.dd ) will prompt the installation program to ask you for a driver update disk with an interactive menu. Driver disks can also be loaded from a hard disk drive or a similar device instead of being loaded over the network or from initrd . Follow this procedure: Load the driver disk on a hard disk drive, a USB drive or any similar device. Set the label, for example, DD , to this device. Start the installation with: as the boot argument. Replace DD with a specific label and replace dd.rpm with a specific name. Use anything supported by the inst.repo command instead of LABEL to specify your hard disk drive. For more information about driver updates during the installation, see Chapter 6, Updating Drivers During Installation on AMD64 and Intel 64 Systems for AMD64 and Intel 64 systems and Chapter 11, Updating Drivers During Installation on IBM Power Systems for IBM Power Systems servers. Kickstart Boot Options inst.ks= Gives the location of a Kickstart file to be used to automate the installation. Locations can be specified using any of the formats valid for inst.repo . See Specifying the Installation Source for details. Use the option multiple times to specify multiple HTTP, HTTPS and FTP sources. If multiple HTTP, HTTPS and FTP locations are specified, the locations are tried sequentially until one succeeds: If you only specify a device and not a path, the installation program will look for the Kickstart file in /ks.cfg on the specified device. If you use this option without specifying a device, the installation program will use the following: In the above example, next-server is the DHCP next-server option or the IP address of the DHCP server itself, and filename is the DHCP filename option, or /kickstart/ . If the given file name ends with the / character, ip -kickstart is appended. For example: Table 23.2. Default Kickstart File Location DHCP server address Client address Kickstart file location 192.168.122.1 192.168.122.100 192.168.122.1 : /kickstart/192.168.122.100-kickstart Additionally, starting with Red Hat Enterprise Linux 7.2, the installer will attempt to load a Kickstart file named ks.cfg from a volume with a label of OEMDRV if present. If your Kickstart file is in this location, you do not need to use the inst.ks= boot option at all. inst.ks.sendmac Adds headers to outgoing HTTP requests with the MAC addresses of all network interfaces. For example: This can be useful when using inst.ks=http to provision systems. inst.ks.sendsn Adds a header to outgoing HTTP requests. This header will contain the system's serial number, read from /sys/class/dmi/id/product_serial . The header has the following syntax: Console, Environment and Display Options console= This kernel option specifies a device to be used as the primary console. For example, to use a console on the first serial port, use console=ttyS0 . This option should be used along with the inst.text option. You can use this option multiple times. In that case, the boot message will be displayed on all specified consoles, but only the last one will be used by the installation program afterwards. For example, if you specify console=ttyS0 console=ttyS1 , the installation program will use ttyS1 . noshell Disables access to the root shell during the installation.
This is useful with automated (Kickstart) installations - if you use this option, a user can watch the installation progress, but they cannot interfere with it by accessing the root shell by pressing Ctrl + Alt + F2 . inst.lang= Sets the language to be used during the installation. Language codes are the same as the ones used in the lang Kickstart command as described in Section 27.3.1, "Kickstart Commands and Options" . On systems where the system-config-language package is installed, a list of valid values can also be found in /usr/share/system-config-language/locale-list . inst.geoloc= Configures geolocation usage in the installation program. Geolocation is used to preset the language and time zone, and uses the following syntax: inst.geoloc= value The value parameter can be any of the following: Table 23.3. Valid Values for the inst.geoloc Option Disable geolocation inst.geoloc=0 Use the Fedora GeoIP API inst.geoloc=provider_fedora_geoip Use the Hostip.info GeoIP API inst.geoloc=provider_hostip If this option is not specified, Anaconda will use provider_fedora_geoip . inst.keymap= Specifies the keyboard layout to be used by the installation program. Layout codes are the same as the ones used in the keyboard Kickstart command as described in Section 27.3.1, "Kickstart Commands and Options" . inst.text Forces the installation program to run in text mode instead of graphical mode. The text user interface is limited, for example, it does not allow you to modify the partition layout or set up LVM. When installing a system on a machine with a limited graphical capabilities, it is recommended to use VNC as described in Enabling Remote Access . inst.cmdline Forces the installation program to run in command line mode. This mode does not allow any interaction, all options must be specified in a Kickstart file or on the command line. inst.graphical Forces the installation program to run in graphical mode. This mode is the default. inst.resolution= Specifies the screen resolution in graphical mode. The format is N x M , where N is the screen width and M is the screen height (in pixels). The lowest supported resolution is 800x600 . inst.headless Specifies that the machine being installed onto does not have any display hardware. In other words, this option prevents the installation program from trying to detect a screen. inst.xdriver= Specifies the name of the X driver to be used both during the installation and on the installed system. inst.usefbx Tells the installation program to use the frame buffer X driver instead of a hardware-specific driver. This option is equivalent to inst.xdriver=fbdev . modprobe.blacklist= Blacklists (completely disables) one or more drivers. Drivers (mods) disabled using this option will be prevented from loading when the installation starts, and after the installation finishes, the installed system will keep these settings. The blacklisted drivers can then be found in the /etc/modprobe.d/ directory. Use a comma-separated list to disable multiple drivers. For example: inst.sshd=0 By default, sshd is only automatically started on IBM Z, and on other architectures, sshd is not started unless the inst.sshd option is used. This option prevents sshd from starting automatically on IBM Z. inst.sshd Starts the sshd service during the installation, which allows you to connect to the system during the installation using SSH and monitor its progress. 
For more information on SSH, see the ssh(1) man page and the corresponding chapter in the Red Hat Enterprise Linux 7 System Administrator's Guide . By default, sshd is only automatically started on IBM Z, and on other architectures, sshd is not started unless the inst.sshd option is used. Note During the installation, the root account has no password by default. You can set a root password to be used during the installation with the sshpw Kickstart command as described in Section 27.3.1, "Kickstart Commands and Options" . inst.kdump_addon= Enables or disables the Kdump configuration screen (add-on) in the installer. This screen is enabled by default; use inst.kdump_addon=off to disable it. Note that disabling the add-on will disable the Kdump screens in both the graphical and text-based interface as well as the %addon com_redhat_kdump Kickstart command. Network Boot Options Initial network initialization is handled by dracut . This section only lists some of the more commonly used options; for a complete list, see the dracut.cmdline(7) man page. Additional information on networking is also available in Red Hat Enterprise Linux 7 Networking Guide . ip= Configures one or more network interfaces. To configure multiple interfaces, you can use the ip option multiple times - once for each interface. If multiple interfaces are configured, you must also use the option rd.neednet=1 , and you must specify a primary boot interface using the bootdev option, described below. Alternatively, you can use the ip option once, and then use Kickstart to set up further interfaces. This option accepts several different formats. The most common are described in Table 23.4, "Network Interface Configuration Formats" . Table 23.4. Network Interface Configuration Formats Configuration Method Option format Automatic configuration of any interface ip= method Automatic configuration of a specific interface ip= interface : method Static configuration ip= ip :: gateway : netmask : hostname : interface :none Automatic configuration of a specific interface with an override [a] ip= ip :: gateway : netmask : hostname : interface : method : mtu [a] Brings up the specified interface using the specified method of automatic configuration, such as dhcp , but overrides the automatically obtained IP address, gateway, netmask, host name or other specified parameter. All parameters are optional; only specify the ones you want to override and automatically obtained values will be used for the others. The method parameter can be any the following: Table 23.5. Automatic Interface Configuration Methods Automatic configuration method Value DHCP dhcp IPv6 DHCP dhcp6 IPv6 automatic configuration auto6 iBFT (iSCSI Boot Firmware Table) ibft Note If you use a boot option which requires network access, such as inst.ks=http:// host / path , without specifying the ip option, the installation program will use ip=dhcp . Important To connect automatically to an iSCSI target, a network device for accessing the target needs to be activated. The recommended way to do so is to use ip=ibft boot option. In the above tables, the ip parameter specifies the client's IP address. IPv6 addresses can be specified by putting them in square brackets, for example, [2001:DB8::1] . The gateway parameter is the default gateway. IPv6 addresses are accepted here as well. The netmask parameter is the netmask to be used. This can either be a full netmask for IPv4 (for example 255.255.255.0 ) or a prefix for IPv6 (for example 64 ). 
The hostname parameter is the host name of the client system. This parameter is optional. nameserver= Specifies the address of the name server. This option can be used multiple times. rd.neednet= You must use the option rd.neednet=1 if you use more than one ip option. Alternatively, to set up multiple network interfaces you can use the ip option once, and then set up further interfaces using Kickstart. bootdev= Specifies the boot interface. This option is mandatory if you use more than one ip option. ifname= Assigns a given interface name to a network device with a given MAC address. Can be used multiple times. The syntax is ifname= interface : MAC . For example: Note Using the ifname= option is the only supported way to set custom network interface names during installation. inst.dhcpclass= Specifies the DHCP vendor class identifier. The dhcpd service will see this value as vendor-class-identifier . The default value is anaconda-$(uname -srm) . inst.waitfornet= Using the inst.waitfornet= SECONDS boot option causes the installation system to wait for network connectivity before installation. The value given in the SECONDS argument specifies the maximum amount of time to wait for network connectivity before timing out and continuing the installation process even if network connectivity is not present. vlan= Sets up a Virtual LAN (VLAN) device on a specified interface with a given name. The syntax is vlan= name : interface . For example: The above will set up a VLAN device named vlan5 on the em1 interface. The name can take the following forms: Table 23.6. VLAN Device Naming Conventions Naming scheme Example VLAN_PLUS_VID vlan0005 VLAN_PLUS_VID_NO_PAD vlan5 DEV_PLUS_VID em1.0005 DEV_PLUS_VID_NO_PAD em1.5 bond= Set up a bonding device with the following syntax: bond= name [: slaves ][: options ] . Replace name with the bonding device name, slaves with a comma-separated list of physical (ethernet) interfaces, and options with a comma-separated list of bonding options. For example: For a list of available options, execute the modinfo bonding command. Using this option without any parameters will assume bond=bond0:eth0,eth1:mode=balance-rr . team= Set up a team device with the following syntax: team= master : slaves . Replace master with the name of the master team device and slaves with a comma-separated list of physical (ethernet) devices to be used as slaves in the team device. For example: Advanced Installation Options inst.kexec If this option is specified, the installer will use the kexec system call at the end of the installation, instead of performing a reboot. This loads the new system immediately, and bypasses the hardware initialization normally performed by the BIOS or firmware. Important Due to the complexities involved with booting systems using kexec , it cannot be explicitly tested and guaranteed to function in every situation. When kexec is used, device registers (which would normally be cleared during a full system reboot) might stay filled with data, which could potentially create issues for some device drivers. inst.gpt Force the installation program to install partition information into a GUID Partition Table (GPT) instead of a Master Boot Record (MBR). This option is meaningless on UEFI-based systems, unless they are in BIOS compatibility mode. Normally, BIOS-based systems and UEFI-based systems in BIOS compatibility mode will attempt to use the MBR schema for storing partitioning information, unless the disk is 2^32 sectors in size or larger.
Most commonly, disk sectors are 512 bytes in size, meaning that this is usually equivalent to 2 TiB. Using this option will change this behavior, allowing a GPT to be written to disks smaller than this. See Section 8.14.1.1, "MBR and GPT Considerations" for more information about GPT and MBR, and Section A.1.4, "GUID Partition Table (GPT)" for more general information about GPT, MBR and disk partitioning in general. inst.multilib Configure the system for multilib packages (that is, to allow installing 32-bit packages on a 64-bit AMD64 or Intel 64 system) and install packages specified in this section as such. Normally, on an AMD64 or Intel 64 system, only packages for this architecture (marked as x86_64 ) and packages for all architectures (marked as noarch would be installed. When you use this option, packages for 32-bit AMD or Intel systems (marked as i686 ) will be automatically installed as well if available. This only applies to packages directly specified in the %packages section. If a package is only installed as a dependency, only the exact specified dependency will be installed. For example, if you are installing package bash which depends on package glibc , the former will be installed in multiple variants, while the latter will only be installed in variants specifically required. selinux=0 By default, SELinux operates in permissive mode in the installer, and in enforcing mode in the installed system. This option disables the use of SELinux in the installer and the installed system entirely. Note The selinux=0 and inst.selinux=0 options are not the same. The selinux=0 option disables the use of SELinux in the installer and the installed system, whereas inst.selinux=0 disables SELinux only in the installer. By default, SELinux is set to operate in permissive mode in the installer, so disabling it has little effect. inst.nosave= This option, introduced in Red Hat Enterprise Linux 7.3, controls which Kickstart files and installation logs are saved to the installed system. It can be especially useful to disable saving such data when performing OEM operating system installations, or when generating images using sensitive resources (such as internal repository URLs), as these resources might otherwise be mentioned in kickstart files, or in logs on the image, or both. Possible values for this option are: input_ks - disables saving of the input Kickstart file (if any). output_ks - disables saving of the output Kickstart file generated by Anaconda. all_ks - disables saving of both input and output Kickstart files. logs - disables saving of all installation logs. all - disables saving of all Kickstart files and all installation logs. Multiple values can be combined as a comma separated list, for example: input_ks,logs inst.zram This option controls the usage of zRAM swap during the installation. It creates a compressed block device inside the system RAM and uses it for swap space instead of the hard drive. This allows the installer to essentially increase the amount of memory available, which makes the installation faster on systems with low memory. By default, swap on zRAM is enabled on systems with 2 GiB or less RAM, and disabled on systems with more than 2 GiB of memory. You can use this option to change this behavior - on a system with more than 2 GiB RAM, use inst.zram=1 to enable it, and on systems with 2 GiB or less memory, use inst.zram=0 to disable this feature. Enabling Remote Access The following options are necessary to configure Anaconda for remote graphical installation. 
See Chapter 25, Using VNC for more details. inst.vnc Specifies that the installation program's graphical interface should be run in a VNC session. If you specify this option, you will need to connect to the system using a VNC client application to be able to interact with the installation program. VNC sharing is enabled, so multiple clients can connect to the system at the same time. Note A system installed using VNC will start in text mode by default. inst.vncpassword= Sets a password on the VNC server used by the installation program. Any VNC client attempting to connect to the system will have to provide the correct password to gain access. For example, inst.vncpassword= testpwd will set the password to testpwd . The VNC password must be between 6 and 8 characters long. Note If you specify an invalid password (one that is too short or too long), you will be prompted to specify a new one by a message from the installation program: inst.vncconnect= Connect to a listening VNC client at a specified host and port once the installation starts. The correct syntax is inst.vncconnect= host : port , where host is the address of the VNC client's host, and port specifies which port to use. The port parameter is optional; if you do not specify one, the installation program will use 5900 . Debugging and Troubleshooting inst.updates= Specifies the location of the updates.img file to be applied to the installation program runtime. The syntax is the same as in the inst.repo option - see Table 23.1, "Installation Sources" for details. In all formats, if you do not specify a file name but only a directory, the installation program will look for a file named updates.img . inst.loglevel= Specifies the minimum level for messages to be logged on a terminal. This only concerns terminal logging; log files will always contain messages of all levels. Possible values for this option from the lowest to highest level are: debug , info , warning , error and critical . The default value is info , which means that by default, the logging terminal will display messages ranging from info to critical . inst.syslog= Once the installation starts, this option sends log messages to the syslog process on the specified host. The remote syslog process must be configured to accept incoming connections. For information on how to configure a syslog service to accept incoming connections, see the Red Hat Enterprise Linux 7 System Administrator's Guide . inst.virtiolog= Specifies a virtio port (a character device at /dev/virtio-ports/ name ) to be used for forwarding logs. The default value is org.fedoraproject.anaconda.log.0 ; if this port is present, it will be used. rd.live.ram If this option is specified, the stage 2 image will be copied into RAM. When the stage 2 image on an NFS repository is used, this option may make the installation proceed smoothly, since the installation is sometimes affected by network reconfiguration in an environment built upon the stage 2 image on NFS. Note that using this option when the stage 2 image is on an NFS server will increase the minimum required memory by the size of the image - roughly 500 MiB. inst.nokill A debugging option that prevents anaconda from rebooting when a fatal error occurs or at the end of the installation process. This allows you to capture installation logs which would be lost upon reboot. 23.1.1. Deprecated and Removed Boot Options Deprecated Boot Options Options in this list are deprecated.
They will still work, but there are other options which offer the same functionality. Using deprecated options is not recommended and they are expected to be removed in future releases. Note Note that as Section 23.1, "Configuring the Installation System at the Boot Menu" describes, options specific to the installation program now use the inst. prefix. For example, the vnc= option is considered deprecated and replaced by the inst.vnc= option. These changes are not listed here. method= Configured the installation method. Use the inst.repo= option instead. repo=nfsiso: server :/ path In NFS installations, specified that the target is an ISO image located on an NFS server instead of an installable tree. The difference is now detected automatically, which means this option is the same as inst.repo=nfs: server :/ path . dns= Configured the Domain Name Server (DNS). Use the nameserver= option instead. netmask= , gateway= , hostname= , ip= , ipv6= These options have been consolidated under the ip= option. ksdevice= Select network device to be used at early stage of installation. Different values have been replaced with different options; see the table below. Table 23.7. Automatic Interface Configuration Methods Value Current behavior Not present Activation of all devices is attempted using dhcp , unless the desired device and configuration is specified by the ip= option or the BOOTIF option. ksdevice=link Similar to the above, with the difference that network will always be activated in the initramfs, whether it is needed or not. The supported rd.neednet dracut option should be used to achieve the same result. ksdevice=bootif Ignored (the BOOTIF= option is used by default when specified) ksdevice=ibft Replaced with the ip=ibft dracut option ksdevice= MAC Replaced with BOOTIF= MAC ksdevice= device Replaced by specifying the device name using the ip= dracut option. blacklist= Used to disable specified drivers. This is now handled by the modprobe.blacklist= option. nofirewire= Disabled support for the FireWire interface. You can disable the FireWire driver ( firewire_ohci ) by using the modprobe.blacklist= option instead: nicdelay= Used to indicate the delay after which the network was considered active; the system waited until either the gateway was successfully pinged, or until the amount of seconds specified in this parameter passed. In RHEL 7, network devices are configured and activated during the early stage of installation by the dracut modules which ensure that the gateway is accessible before proceeding. For more information about dracut , see the dracut.cmdline(7) man page. linksleep= Used to configure how long anaconda should wait for a link on a device before activating it. This functionality is now available in the dracut modules where specific rd.net.timeout.* options can be configured to handle issues caused by slow network device initialization. For more information about dracut , see the dracut.cmdline(7) man page. Removed Boot Options The following options are removed. They were present in releases of Red Hat Enterprise Linux, but they cannot be used anymore. askmethod , asknetwork The installation program's initramfs is now completely non-interactive, which means that these options are not available anymore. Instead, use the inst.repo= to specify the installation method and ip= to configure network settings. serial This option forced Anaconda to use the /dev/ttyS0 console as the output. Use the console=/dev/ttyS0 (or similar) instead. 
updates= Specified the location of updates for the installation program. Use the inst.updates= option instead. essid= , wepkey= , wpakey= Configured wireless network access. Network configuration is now being handled by dracut , which does not support wireless networking, rendering these options useless. ethtool= Used in the past to configure additional low-level network settings. All network settings are now handled by the ip= option. gdb Allowed you to debug the loader. Use rd.debug instead. mediacheck Verified the installation media before starting the installation. Replaced with the rd.live.check option. ks=floppy Specified a 3.5 inch diskette as the Kickstart file source. These drives are not supported anymore. display= Configured a remote display. Replaced with the inst.vnc option. utf8 Added UTF8 support when installing in text mode. UTF8 support now works automatically. noipv6 Used to disable IPv6 support in the installation program. IPv6 is now built into the kernel so the driver cannot be blacklisted; however, it is possible to disable IPv6 using the ipv6.disable dracut option. upgradeany Upgrades are done in a different way in Red Hat Enterprise Linux 7. For more information about upgrading your system, see Chapter 29, Upgrading Your Current System . vlanid= Used to configure Virtual LAN (802.1q tag) devices. Use the vlan= dracut option instead. | [
"inst.repo=cdrom",
"inst.stage2=host1/install.img inst.stage2=host2/install.img inst.stage2=host3/install.img",
"inst.dd=/dev/sdb1",
"inst.dd=hd: LABEL = DD :/dd.rpm",
"inst.ks=host1/ directory /ks.cfg inst.ks=host2/ directory /ks.cfg inst.ks=host3/ directory /ks.cfg",
"inst.ks=nfs: next-server :/ filename",
"X-RHN-Provisioning-MAC-0: eth0 01:23:45:67:89:ab",
"X-System-Serial-Number: R8VA23D",
"modprobe.blacklist=ahci,firewire_ohci",
"ifname=eth0:01:23:45:67:89:ab",
"vlan=vlan5:em1",
"bond=bond0:em1,em2:mode=active-backup,tx_queues=32,downdelay=5000",
"team=team0:em1,em2",
"VNC password must be six to eight characters long. Please enter a new one, or leave blank for no password. Password:",
"modprobe.blacklist=firewire_ohci"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/chap-anaconda-boot-options |
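To tie the replacements above together, the following is a sketch of a single boot command line that uses only current options in place of the deprecated method= , ks= , dns= , ip=/netmask=/gateway=/hostname= , vnc= , and nofirewire= forms. The server name, addresses, and paths are placeholders rather than values taken from this guide:
inst.repo=nfs:server.example.com:/exports/rhel7 inst.ks=nfs:server.example.com:/exports/ks.cfg ip=10.0.0.10::10.0.0.1:255.255.255.0:myhost.example.com:eth0:none nameserver=10.0.0.2 inst.vnc modprobe.blacklist=firewire_ohci
Each option is documented individually above or in Section 23.1, "Configuring the Installation System at the Boot Menu" ; combining them on one line is only meant to show how a legacy boot line maps onto the current syntax.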
Chapter 17. Cache Stores | Chapter 17. Cache Stores The cache store connects Red Hat JBoss Data Grid to the persistent data store. Cache stores are associated with individual caches. Different caches attached to the same cache manager can have different cache store configurations. Note If a clustered cache is configured with an unshared cache store (where shared is set to false ), on node join, stale entries which might have been removed from the cluster might still be present in the stores and can reappear. 17.1. Cache Loaders and Cache Writers Integration with the persistent store is done through the following SPIs located in org.infinispan.persistence.spi : CacheLoader CacheWriter AdvancedCacheLoader AdvancedCacheWriter CacheLoader and CacheWriter provide basic methods for reading and writing to a store. CacheLoader retrieves data from a data store when the required data is not present in the cache, and CacheWriter is used to enforce entry passivation and activation on eviction in a cache. AdvancedCacheLoader and AdvancedCacheWriter provide operations to manipulate the underlying storage in bulk: parallel iteration and purging of expired entries, clear and size. The org.infinispan.persistence.file.SingleFileStore is a good starting point to write your own store implementation. Note Previously, JBoss Data Grid used the old API ( CacheLoader , extended by CacheStore ), which is also still available. | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/chap-cache_stores
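As a brief illustration of how a cache store is typically attached to a cache, the following Java snippet configures the bundled single file store programmatically. It is a minimal sketch rather than an excerpt from this guide: the store location, cache name, and exact builder calls shown are assumptions that may need adjusting for your JBoss Data Grid version.
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;

public class SingleFileStoreConfigExample {
    public static void main(String[] args) {
        // Build a cache configuration that persists entries to the single file store.
        Configuration config = new ConfigurationBuilder()
            .persistence()
                .passivation(false)                  // write-through store instead of passivation on eviction
                .addSingleFileStore()                // backed by org.infinispan.persistence.file.SingleFileStore
                    .location("/tmp/datagrid-store") // placeholder directory for the store file
                    .shared(false)                   // unshared store; see the note on stale entries above
            .build();

        // The configuration would then be registered with a cache manager, for example:
        // cacheManager.defineConfiguration("exampleCache", config);
    }
}
An equivalent store can be declared in the cache's XML configuration; the programmatic form is shown here only because it keeps the example self-contained.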
Chapter 15. Subscription Management | Chapter 15. Subscription Management Red Hat Enterprise Linux 7 is available using the Red Hat Subscription Management services. The following Knowledge Base article provides a brief overview and instructions on how to register your Red Hat Enterprise Linux 7 system with Red Hat Subscription Management. Certificate-Based Entitlements Red Hat Enterprise Linux 7 supports new certificate-based entitlements through the subscription-manager tool. Legacy entitlements are also supported for Satellite users to provide a transition for users using Red Hat Enterprise Linux 5 and 6. Note that registering to Red Hat Network Classic using the rhn_register or rhnreg_ks tools will not work on Red Hat Enterprise Linux 7. You can use the mentioned tools to register to Red Hat Satellite or Proxy versions 5.6 only. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.0_release_notes/chap-red_hat_enterprise_linux-7.0_release_notes-subscription_management |
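For orientation only, a typical certificate-based registration with the subscription-manager tool looks like the following; the account name and pool ID are placeholders, and the exact subscriptions offered depend on your account:
# Register the system with Red Hat Subscription Management (prompts for the password).
subscription-manager register --username admin@example.com
# List the subscriptions that could be attached to this system.
subscription-manager list --available
# Attach a specific pool, or use --auto-attach during registration instead.
subscription-manager attach --pool=<pool_id>
# Confirm which repositories are now enabled.
subscription-manager repos --list-enabled
These commands apply only to certificate-based entitlements; Satellite 5.6 registrations continue to use the legacy tools mentioned above.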
Serverless | Serverless OpenShift Container Platform 4.12 Create and deploy serverless, event-driven applications using OpenShift Serverless Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/serverless/index |
Chapter 8. Configuring APIcast for better performance | Chapter 8. Configuring APIcast for better performance This document provides general guidelines to debug performance issues in APIcast. It also introduces the available caching modes and explains how they can help in increasing performance, as well as details about profiling modes. The content is structured in the following sections: Section 8.1, "General guidelines" Section 8.2, "Default caching" Section 8.3, "Asynchronous reporting threads" Section 8.4, "3scale API Management Batcher policy" 8.1. General guidelines In a typical APIcast deployment, there are three components to consider: APIcast. The 3scale back-end server that authorizes requests and keeps track of the usage. The upstream API. When experiencing performance issues in APIcast: Identify the component that is responsible for the issues. Measure the latency of the upstream API to determine the latency that APIcast plus the 3scale back-end server introduce. With the same tool you are using to run the benchmark, perform a new measurement, but point to APIcast instead of pointing to the upstream API directly. Comparing these results will give you an idea of the latency introduced by APIcast and the 3scale back-end server. In a Hosted (SaaS) installation with self-managed APIcast, if the latency introduced by APIcast and the 3scale back-end server is high: Make a request to the 3scale back-end server from the same machine where APIcast is deployed. Measure the latency. The 3scale back-end server exposes an endpoint that returns the version: https://su1.3scale.net/status . In comparison, an authorization call requires more resources because it verifies keys and limits, and queues background jobs. Although the 3scale back-end server performs these tasks in a few milliseconds, it requires more work than checking the version like the /status endpoint does. As an example, if a request to /status takes around 300 ms from your APIcast environment, an authorization is going to take more time for every request that is not cached. 8.2. Default caching For requests that are not cached, these are the events: APIcast extracts the usage metrics from matching mapping rules. APIcast sends the metrics plus the application credentials to the 3scale back-end server. The 3scale back-end server performs the following: Checks the application keys, and that the reported usage of metrics is within the defined limits. Queues a background job to increase the usage of the metrics reported. Responds to APIcast whether the request should be authorized or not. If the request is authorized, it goes to the upstream. In this case, the request does not reach the upstream until the 3scale back-end server responds. On the other hand, with the caching mechanism that comes enabled by default: APIcast stores in a cache the result of the authorization call to the 3scale back-end server if it was authorized. The next request with the same credentials and metrics will use that cached authorization instead of going to the 3scale back-end server. If the request was not authorized, or if it is the first time that APIcast receives the credentials, APIcast will call the 3scale back-end server synchronously as explained above. When the authentication is cached, APIcast first calls the upstream and then, in a phase called post action , it calls the 3scale back-end server and stores the authorization in the cache to have it ready for the next request.
Notice that the call to the 3scale back-end server does not introduce any latency because it does not happen in request time. However, requests sent in the same connection will need to wait until the post action phase finishes. Imagine a scenario where a client is using keep-alive and sends a request every second. If the upstream response time is 100 ms and the latency to the 3scale back-end server is 500 ms, the client will get the response every time in 100 ms. The total of the upstream response and the reporting would take 600 ms. That gives an extra 400 ms before the next request comes. The diagram below illustrates the default caching behavior explained. The behavior of the caching mechanism can be changed using the caching policy . 8.3. Asynchronous reporting threads APIcast has a feature to enable a pool of threads that authorize against the 3scale back-end server. With this feature enabled, APIcast first synchronously calls the 3scale back-end server to verify the application and metrics matched by mapping rules. This is similar to when it uses the caching mechanism enabled by default. The difference is that subsequent calls to the 3scale back-end server are reported fully asynchronously as long as there are free reporting threads in the pool. Reporting threads are global for the whole gateway and shared between all the services. When a second TCP connection is made, it will also be fully asynchronous as long as the authorization is already cached. When there are no free reporting threads, APIcast falls back to the default behavior and reports in the post action phase. You can enable this feature using the APICAST_REPORTING_THREADS environment variable. The diagram below illustrates how the asynchronous reporting thread pool works. 8.4. 3scale API Management Batcher policy Note If you need to increase cache size, use the variable APICAST_POLICY_BATCHER_SHARED_MEMORY_SIZE . By default, APIcast performs one call to the 3scale back-end server for each request that it receives. The goal of the 3scale API Management Batcher policy is to reduce latency and increase throughput by significantly reducing the number of requests made to the 3scale back-end server. In order to achieve that, this policy caches authorization statuses and batches reports. See 3scale API Management Batcher policy for details. The diagram below illustrates how the policy works. | null | https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/administering_the_api_gateway/apicast-performance
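As a rough sketch of how the two tuning knobs mentioned above are usually set on a self-managed APIcast gateway, the following container invocation enables a small reporting thread pool and enlarges the shared memory available to the batcher policy. The image reference, portal endpoint, and values are placeholders rather than recommendations; the right numbers depend on your traffic profile:
# Placeholder admin portal endpoint, image tag, and tuning values; adjust for your installation.
docker run -d --name apicast -p 8080:8080 \
  -e THREESCALE_PORTAL_ENDPOINT=https://<access_token>@<tenant>-admin.3scale.net \
  -e APICAST_REPORTING_THREADS=2 \
  -e APICAST_POLICY_BATCHER_SHARED_MEMORY_SIZE=40m \
  registry.redhat.io/3scale-amp2/apicast-gateway-rhel8:3scale2.15
In an OpenShift deployment, the same environment variables would be set on the APIcast deployment instead of on a docker command line.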
Chapter 272. RabbitMQ Component | Chapter 272. RabbitMQ Component Available as of Camel version 2.12 The rabbitmq: component allows you produce and consume messages from RabbitMQ instances. Using the RabbitMQ AMQP client, this component offers a pure RabbitMQ approach over the generic AMQP component. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-rabbitmq</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 272.1. URI format The old syntax is deprecated : rabbitmq://hostname[:port]/exchangeName?[options] Instead the hostname and port is configured on the component level, or can be provided as uri query parameters instead. The new syntax is: rabbitmq:exchangeName?[options] Where hostname is the hostname of the running rabbitmq instance or cluster. Port is optional and if not specified then defaults to the RabbitMQ client default (5672). The exchange name determines which exchange produced messages will sent to. In the case of consumers, the exchange name determines which exchange the queue will bind to. 272.2. Options The RabbitMQ component supports 50 options, which are listed below. Name Description Default Type hostname (common) The hostname of the running RabbitMQ instance or cluster. String portNumber (common) Port number for the host with the running rabbitmq instance or cluster. 5672 int username (security) Username in case of authenticated access guest String password (security) Password for authenticated access guest String vhost (common) The vhost for the channel / String addresses (common) If this option is set, camel-rabbitmq will try to create connection based on the setting of option addresses. The addresses value is a string which looks like server1:12345, server2:12345 String connectionFactory (common) To use a custom RabbitMQ connection factory. When this option is set, all connection options (connectionTimeout, requestedChannelMax... ) set on URI are not used ConnectionFactory threadPoolSize (consumer) The consumer uses a Thread Pool Executor with a fixed number of threads. This setting allows you to set that number of threads. 10 int autoDetectConnection Factory (advanced) Whether to auto-detect looking up RabbitMQ connection factory from the registry. When enabled and a single instance of the connection factory is found then it will be used. An explicit connection factory can be configured on the component or endpoint level which takes precedence. true boolean connectionTimeout (advanced) Connection timeout 60000 int requestedChannelMax (advanced) Connection requested channel max (max number of channels offered) 2047 int requestedFrameMax (advanced) Connection requested frame max (max size of frame offered) 0 int requestedHeartbeat (advanced) Connection requested heartbeat (heart-beat in seconds offered) 60 int automaticRecovery Enabled (advanced) Enables connection automatic recovery (uses connection implementation that performs automatic recovery when connection shutdown is not initiated by the application) Boolean networkRecoveryInterval (advanced) Network recovery interval in milliseconds (interval used when recovering from network failure) 5000 Integer topologyRecoveryEnabled (advanced) Enables connection topology recovery (should topology recovery be performed) Boolean prefetchEnabled (consumer) Enables the quality of service on the RabbitMQConsumer side. 
You need to specify the option of prefetchSize, prefetchCount, prefetchGlobal at the same time false boolean prefetchSize (consumer) The maximum amount of content (measured in octets) that the server will deliver, 0 if unlimited. You need to specify the option of prefetchSize, prefetchCount, prefetchGlobal at the same time int prefetchCount (consumer) The maximum number of messages that the server will deliver, 0 if unlimited. You need to specify the option of prefetchSize, prefetchCount, prefetchGlobal at the same time int prefetchGlobal (consumer) If the settings should be applied to the entire channel rather than each consumer You need to specify the option of prefetchSize, prefetchCount, prefetchGlobal at the same time false boolean channelPoolMaxSize (producer) Get maximum number of opened channel in pool 10 int channelPoolMaxWait (producer) Set the maximum number of milliseconds to wait for a channel from the pool 1000 long requestTimeout (advanced) Set timeout for waiting for a reply when using the InOut Exchange Pattern (in milliseconds) 20000 long requestTimeoutChecker Interval (advanced) Set requestTimeoutCheckerInterval for inOut exchange 1000 long transferException (advanced) When true and an inOut Exchange failed on the consumer side send the caused Exception back in the response false boolean publisher Acknowledgements (producer) When true, the message will be published with publisher acknowledgements turned on false boolean publisher AcknowledgementsTimeout (producer) The amount of time in milliseconds to wait for a basic.ack response from RabbitMQ server long guaranteedDeliveries (producer) When true, an exception will be thrown when the message cannot be delivered (basic.return) and the message is marked as mandatory. PublisherAcknowledgement will also be activated in this case. See also publisher acknowledgements - When will messages be confirmed. false boolean mandatory (producer) This flag tells the server how to react if the message cannot be routed to a queue. If this flag is set, the server will return an unroutable message with a Return method. If this flag is zero, the server silently drops the message. If the header is present rabbitmq.MANDATORY it will override this option. false boolean immediate (producer) This flag tells the server how to react if the message cannot be routed to a queue consumer immediately. If this flag is set, the server will return an undeliverable message with a Return method. If this flag is zero, the server will queue the message, but with no guarantee that it will ever be consumed. If the header is present rabbitmq.IMMEDIATE it will override this option. false boolean args (advanced) Specify arguments for configuring the different RabbitMQ concepts, a different prefix is required for each: Exchange: arg.exchange. Queue: arg.queue. Binding: arg.binding. 
For example to declare a queue with message ttl argument: http://localhost:5672/exchange/queueargs=arg.queue.x-message-ttl=60000 Map clientProperties (advanced) Connection client properties (client info used in negotiating with the server) Map sslProtocol (security) Enables SSL on connection, accepted value are true, TLS and 'SSLv3 String trustManager (security) Configure SSL trust manager, SSL should be enabled for this option to be effective TrustManager autoAck (consumer) If messages should be auto acknowledged true boolean autoDelete (common) If it is true, the exchange will be deleted when it is no longer in use true boolean durable (common) If we are declaring a durable exchange (the exchange will survive a server restart) true boolean exclusive (common) Exclusive queues may only be accessed by the current connection, and are deleted when that connection closes. false boolean exclusiveConsumer (consumer) Request exclusive access to the queue (meaning only this consumer can access the queue). This is useful when you want a long-lived shared queue to be temporarily accessible by just one consumer. false boolean passive (common) Passive queues depend on the queue already to be available at RabbitMQ. false boolean skipQueueDeclare (common) If true the producer will not declare and bind a queue. This can be used for directing messages via an existing routing key. false boolean skipQueueBind (common) If true the queue will not be bound to the exchange after declaring it false boolean skipExchangeDeclare (common) This can be used if we need to declare the queue but not the exchange false boolean declare (common) If the option is true, camel declare the exchange and queue name and bind them together. If the option is false, camel won't declare the exchange and queue name on the server. true boolean deadLetterExchange (common) The name of the dead letter exchange String deadLetterQueue (common) The name of the dead letter queue String deadLetterRoutingKey (common) The routing key for the dead letter exchange String deadLetterExchangeType (common) The type of the dead letter exchange direct String allowNullHeaders (producer) Allow pass null values to header false boolean resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The RabbitMQ endpoint is configured using URI syntax: with the following path and query parameters: 272.2.1. Path Parameters (1 parameters): Name Description Default Type exchangeName Required The exchange name determines which exchange produced messages will sent to. In the case of consumers, the exchange name determines which exchange the queue will bind to. String 272.2.2. Query Parameters (62 parameters): Name Description Default Type addresses (common) If this option is set, camel-rabbitmq will try to create connection based on the setting of option addresses. The addresses value is a string which looks like server1:12345, server2:12345 Address[] autoDelete (common) If it is true, the exchange will be deleted when it is no longer in use true boolean connectionFactory (common) To use a custom RabbitMQ connection factory. When this option is set, all connection options (connectionTimeout, requestedChannelMax... 
) set on URI are not used ConnectionFactory deadLetterExchange (common) The name of the dead letter exchange String deadLetterExchangeType (common) The type of the dead letter exchange direct String deadLetterQueue (common) The name of the dead letter queue String deadLetterRoutingKey (common) The routing key for the dead letter exchange String declare (common) If the option is true, camel declare the exchange and queue name and bind them together. If the option is false, camel won't declare the exchange and queue name on the server. true boolean durable (common) If we are declaring a durable exchange (the exchange will survive a server restart) true boolean exchangeType (common) The exchange type such as direct or topic. direct String exclusive (common) Exclusive queues may only be accessed by the current connection, and are deleted when that connection closes. false boolean hostname (common) The hostname of the running rabbitmq instance or cluster. String passive (common) Passive queues depend on the queue already to be available at RabbitMQ. false boolean portNumber (common) Port number for the host with the running rabbitmq instance or cluster. Default value is 5672. int queue (common) The queue to receive messages from String routingKey (common) The routing key to use when binding a consumer queue to the exchange. For producer routing keys, you set the header rabbitmq.ROUTING_KEY. String skipExchangeDeclare (common) This can be used if we need to declare the queue but not the exchange false boolean skipQueueBind (common) If true the queue will not be bound to the exchange after declaring it false boolean skipQueueDeclare (common) If true the producer will not declare and bind a queue. This can be used for directing messages via an existing routing key. false boolean vhost (common) The vhost for the channel / String autoAck (consumer) If messages should be auto acknowledged true boolean bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean concurrentConsumers (consumer) Number of concurrent consumers when consuming from broker. (eg similar as to the same option for the JMS component). 1 int exclusiveConsumer (consumer) Request exclusive access to the queue (meaning only this consumer can access the queue). This is useful when you want a long-lived shared queue to be temporarily accessible by just one consumer. false boolean prefetchCount (consumer) The maximum number of messages that the server will deliver, 0 if unlimited. You need to specify the option of prefetchSize, prefetchCount, prefetchGlobal at the same time int prefetchEnabled (consumer) Enables the quality of service on the RabbitMQConsumer side. You need to specify the option of prefetchSize, prefetchCount, prefetchGlobal at the same time false boolean prefetchGlobal (consumer) If the settings should be applied to the entire channel rather than each consumer You need to specify the option of prefetchSize, prefetchCount, prefetchGlobal at the same time false boolean prefetchSize (consumer) The maximum amount of content (measured in octets) that the server will deliver, 0 if unlimited. 
You need to specify the option of prefetchSize, prefetchCount, prefetchGlobal at the same time int exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern threadPoolSize (consumer) The consumer uses a Thread Pool Executor with a fixed number of threads. This setting allows you to set that number of threads. 10 int allowNullHeaders (producer) Allow pass null values to header false boolean bridgeEndpoint (producer) If the bridgeEndpoint is true, the producer will ignore the message header of rabbitmq.EXCHANGE_NAME and rabbitmq.ROUTING_KEY false boolean channelPoolMaxSize (producer) Get maximum number of opened channel in pool 10 int channelPoolMaxWait (producer) Set the maximum number of milliseconds to wait for a channel from the pool 1000 long guaranteedDeliveries (producer) When true, an exception will be thrown when the message cannot be delivered (basic.return) and the message is marked as mandatory. PublisherAcknowledgement will also be activated in this case. See also publisher acknowledgements - When will messages be confirmed. false boolean immediate (producer) This flag tells the server how to react if the message cannot be routed to a queue consumer immediately. If this flag is set, the server will return an undeliverable message with a Return method. If this flag is zero, the server will queue the message, but with no guarantee that it will ever be consumed. If the header is present rabbitmq.IMMEDIATE it will override this option. false boolean mandatory (producer) This flag tells the server how to react if the message cannot be routed to a queue. If this flag is set, the server will return an unroutable message with a Return method. If this flag is zero, the server silently drops the message. If the header is present rabbitmq.MANDATORY it will override this option. false boolean publisherAcknowledgements (producer) When true, the message will be published with publisher acknowledgements turned on false boolean publisherAcknowledgements Timeout (producer) The amount of time in milliseconds to wait for a basic.ack response from RabbitMQ server long args (advanced) Specify arguments for configuring the different RabbitMQ concepts, a different prefix is required for each: Exchange: arg.exchange. Queue: arg.queue. Binding: arg.binding. 
For example to declare a queue with message ttl argument: http://localhost:5672/exchange/queueargs=arg.queue.x-message-ttl=60000 Map automaticRecoveryEnabled (advanced) Enables connection automatic recovery (uses connection implementation that performs automatic recovery when connection shutdown is not initiated by the application) Boolean bindingArgs (advanced) Deprecated Key/value args for configuring the queue binding parameters when declare=true Map clientProperties (advanced) Connection client properties (client info used in negotiating with the server) Map connectionTimeout (advanced) Connection timeout 60000 int exchangeArgs (advanced) Deprecated Key/value args for configuring the exchange parameters when declare=true Map exchangeArgsConfigurer (advanced) Deprecated Set the configurer for setting the exchange args in Channel.exchangeDeclare ArgsConfigurer networkRecoveryInterval (advanced) Network recovery interval in milliseconds (interval used when recovering from network failure) 5000 Integer queueArgs (advanced) Deprecated Key/value args for configuring the queue parameters when declare=true Map queueArgsConfigurer (advanced) Deprecated Set the configurer for setting the queue args in Channel.queueDeclare ArgsConfigurer requestedChannelMax (advanced) Connection requested channel max (max number of channels offered) 2047 int requestedFrameMax (advanced) Connection requested frame max (max size of frame offered) 0 int requestedHeartbeat (advanced) Connection requested heartbeat (heart-beat in seconds offered) 60 int requestTimeout (advanced) Set timeout for waiting for a reply when using the InOut Exchange Pattern (in milliseconds) 20000 long requestTimeoutChecker Interval (advanced) Set requestTimeoutCheckerInterval for inOut exchange 1000 long synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean topologyRecoveryEnabled (advanced) Enables connection topology recovery (should topology recovery be performed) Boolean transferException (advanced) When true and an inOut Exchange failed on the consumer side send the caused Exception back in the response false boolean password (security) Password for authenticated access guest String sslProtocol (security) Enables SSL on connection, accepted value are true, TLS and 'SSLv3 String trustManager (security) Configure SSL trust manager, SSL should be enabled for this option to be effective TrustManager username (security) Username in case of authenticated access guest String 272.3. Spring Boot Auto-Configuration The component supports 51 options, which are listed below. Name Description Default Type camel.component.rabbitmq.addresses If this option is set, camel-rabbitmq will try to create connection based on the setting of option addresses. The addresses value is a string which looks like server1:12345, server2:12345 String camel.component.rabbitmq.allow-null-headers Allow pass null values to header false Boolean camel.component.rabbitmq.args Specify arguments for configuring the different RabbitMQ concepts, a different prefix is required for each: Exchange: arg.exchange. Queue: arg.queue. Binding: arg.binding. 
For example to declare a queue with message ttl argument: http://localhost:5672/exchange/queueargs=arg.queue.x-message-ttl=60000 Map camel.component.rabbitmq.auto-ack If messages should be auto acknowledged true Boolean camel.component.rabbitmq.auto-delete If it is true, the exchange will be deleted when it is no longer in use true Boolean camel.component.rabbitmq.auto-detect-connection-factory Whether to auto-detect looking up RabbitMQ connection factory from the registry. When enabled and a single instance of the connection factory is found then it will be used. An explicit connection factory can be configured on the component or endpoint level which takes precedence. true Boolean camel.component.rabbitmq.automatic-recovery-enabled Enables connection automatic recovery (uses connection implementation that performs automatic recovery when connection shutdown is not initiated by the application) Boolean camel.component.rabbitmq.channel-pool-max-size Get maximum number of opened channel in pool 10 Integer camel.component.rabbitmq.channel-pool-max-wait Set the maximum number of milliseconds to wait for a channel from the pool 1000 Long camel.component.rabbitmq.client-properties Connection client properties (client info used in negotiating with the server) Map camel.component.rabbitmq.connection-factory To use a custom RabbitMQ connection factory. When this option is set, all connection options (connectionTimeout, requestedChannelMax... ) set on URI are not used. The option is a com.rabbitmq.client.ConnectionFactory type. String camel.component.rabbitmq.connection-timeout Connection timeout 60000 Integer camel.component.rabbitmq.dead-letter-exchange The name of the dead letter exchange String camel.component.rabbitmq.dead-letter-exchange-type The type of the dead letter exchange direct String camel.component.rabbitmq.dead-letter-queue The name of the dead letter queue String camel.component.rabbitmq.dead-letter-routing-key The routing key for the dead letter exchange String camel.component.rabbitmq.declare If the option is true, camel declare the exchange and queue name and bind them together. If the option is false, camel won't declare the exchange and queue name on the server. true Boolean camel.component.rabbitmq.durable If we are declaring a durable exchange (the exchange will survive a server restart) true Boolean camel.component.rabbitmq.enabled Enable rabbitmq component true Boolean camel.component.rabbitmq.exclusive Exclusive queues may only be accessed by the current connection, and are deleted when that connection closes. false Boolean camel.component.rabbitmq.exclusive-consumer Request exclusive access to the queue (meaning only this consumer can access the queue). This is useful when you want a long-lived shared queue to be temporarily accessible by just one consumer. false Boolean camel.component.rabbitmq.guaranteed-deliveries When true, an exception will be thrown when the message cannot be delivered (basic.return) and the message is marked as mandatory. PublisherAcknowledgement will also be activated in this case. See also publisher acknowledgements - When will messages be confirmed. false Boolean camel.component.rabbitmq.hostname The hostname of the running RabbitMQ instance or cluster. String camel.component.rabbitmq.immediate This flag tells the server how to react if the message cannot be routed to a queue consumer immediately. If this flag is set, the server will return an undeliverable message with a Return method. 
If this flag is zero, the server will queue the message, but with no guarantee that it will ever be consumed. If the header is present rabbitmq.IMMEDIATE it will override this option. false Boolean camel.component.rabbitmq.mandatory This flag tells the server how to react if the message cannot be routed to a queue. If this flag is set, the server will return an unroutable message with a Return method. If this flag is zero, the server silently drops the message. If the header is present rabbitmq.MANDATORY it will override this option. false Boolean camel.component.rabbitmq.network-recovery-interval Network recovery interval in milliseconds (interval used when recovering from network failure) 5000 Integer camel.component.rabbitmq.passive Passive queues depend on the queue already to be available at RabbitMQ. false Boolean camel.component.rabbitmq.password Password for authenticated access guest String camel.component.rabbitmq.port-number Port number for the host with the running rabbitmq instance or cluster. 5672 Integer camel.component.rabbitmq.prefetch-count The maximum number of messages that the server will deliver, 0 if unlimited. You need to specify the option of prefetchSize, prefetchCount, prefetchGlobal at the same time Integer camel.component.rabbitmq.prefetch-enabled Enables the quality of service on the RabbitMQConsumer side. You need to specify the option of prefetchSize, prefetchCount, prefetchGlobal at the same time false Boolean camel.component.rabbitmq.prefetch-global If the settings should be applied to the entire channel rather than each consumer You need to specify the option of prefetchSize, prefetchCount, prefetchGlobal at the same time false Boolean camel.component.rabbitmq.prefetch-size The maximum amount of content (measured in octets) that the server will deliver, 0 if unlimited. You need to specify the option of prefetchSize, prefetchCount, prefetchGlobal at the same time Integer camel.component.rabbitmq.publisher-acknowledgements When true, the message will be published with publisher acknowledgements turned on false Boolean camel.component.rabbitmq.publisher-acknowledgements-timeout The amount of time in milliseconds to wait for a basic.ack response from RabbitMQ server Long camel.component.rabbitmq.request-timeout Set timeout for waiting for a reply when using the InOut Exchange Pattern (in milliseconds) 20000 Long camel.component.rabbitmq.request-timeout-checker-interval Set requestTimeoutCheckerInterval for inOut exchange 1000 Long camel.component.rabbitmq.requested-channel-max Connection requested channel max (max number of channels offered) 2047 Integer camel.component.rabbitmq.requested-frame-max Connection requested frame max (max size of frame offered) 0 Integer camel.component.rabbitmq.requested-heartbeat Connection requested heartbeat (heart-beat in seconds offered) 60 Integer camel.component.rabbitmq.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean camel.component.rabbitmq.skip-exchange-declare This can be used if we need to declare the queue but not the exchange false Boolean camel.component.rabbitmq.skip-queue-bind If true the queue will not be bound to the exchange after declaring it false Boolean camel.component.rabbitmq.skip-queue-declare If true the producer will not declare and bind a queue. This can be used for directing messages via an existing routing key. 
false Boolean camel.component.rabbitmq.ssl-protocol Enables SSL on connection, accepted value are true, TLS and 'SSLv3 String camel.component.rabbitmq.thread-pool-size The consumer uses a Thread Pool Executor with a fixed number of threads. This setting allows you to set that number of threads. 10 Integer camel.component.rabbitmq.topology-recovery-enabled Enables connection topology recovery (should topology recovery be performed) Boolean camel.component.rabbitmq.transfer-exception When true and an inOut Exchange failed on the consumer side send the caused Exception back in the response false Boolean camel.component.rabbitmq.trust-manager Configure SSL trust manager, SSL should be enabled for this option to be effective. The option is a javax.net.ssl.TrustManager type. String camel.component.rabbitmq.username Username in case of authenticated access guest String camel.component.rabbitmq.vhost The vhost for the channel / String See http://www.rabbitmq.com/releases/rabbitmq-java-client/current-javadoc/com/rabbitmq/client/ConnectionFactory.html and the AMQP specification for more information on connection options. 272.4. Using connection factory To connect to RabbitMQ you can setup a ConnectionFactory (same as with JMS) with the login details such as: <bean id="rabbitConnectionFactory" class="com.rabbitmq.client.ConnectionFactory"> <property name="host" value="localhost"/> <property name="port" value="5672"/> <property name="username" value="camel"/> <property name="password" value="bugsbunny"/> </bean> And then refer to the connection factory in the endpoint uri as shown below: <camelContext> <route> <from uri="direct:rabbitMQEx2"/> <to uri="rabbitmq:ex2?connectionFactory=#rabbitConnectionFactory"/> </route> </camelContext> From Camel 2.21 onwards the ConnectionFactory is auto-detected by default, so you can just do <camelContext> <route> <from uri="direct:rabbitMQEx2"/> <to uri="rabbitmq:ex2"/> </route> </camelContext> 272.5. Message Headers The following headers are set on exchanges when consuming messages. Property Value rabbitmq.ROUTING_KEY The routing key that was used to receive the message, or the routing key that will be used when producing a message rabbitmq.EXCHANGE_NAME The exchange the message was received from rabbitmq.DELIVERY_TAG The rabbitmq delivery tag of the received message rabbitmq.REDELIVERY_TAG Whether the message is a redelivered rabbitmq.REQUEUE Camel 2.14.2: This is used by the consumer to control rejection of the message. When the consumer is complete processing the exchange, and if the exchange failed, then the consumer is going to reject the message from the RabbitMQ broker. The value of this header controls this behavior. If the value is false (by default) then the message is discarded/dead-lettered. If the value is true, then the message is re-queued. The following headers are used by the producer. If these are set on the camel exchange then they will be set on the RabbitMQ message. 
Property Value rabbitmq.ROUTING_KEY The routing key that will be used when sending the message rabbitmq.EXCHANGE_NAME The exchange the message was received from rabbitmq.EXCHANGE_OVERRIDE_NAME Camel 2.21: Used for force sending the message to this exchange instead of the endpoint configured name on the producer rabbitmq.CONTENT_TYPE The contentType to set on the RabbitMQ message rabbitmq.PRIORITY The priority header to set on the RabbitMQ message rabbitmq.CORRELATIONID The correlationId to set on the RabbitMQ message rabbitmq.MESSAGE_ID The message id to set on the RabbitMQ message rabbitmq.DELIVERY_MODE If the message should be persistent or not rabbitmq.USERID The userId to set on the RabbitMQ message rabbitmq.CLUSTERID The clusterId to set on the RabbitMQ message rabbitmq.REPLY_TO The replyTo to set on the RabbitMQ message rabbitmq.CONTENT_ENCODING The contentEncoding to set on the RabbitMQ message rabbitmq.TYPE The type to set on the RabbitMQ message rabbitmq.EXPIRATION The expiration to set on the RabbitMQ message rabbitmq.TIMESTAMP The timestamp to set on the RabbitMQ message rabbitmq.APP_ID The appId to set on the RabbitMQ message Headers are set by the consumer once the message is received. The producer will also set the headers for downstream processors once the exchange has taken place. Any headers set prior to production that the producer sets will be overriden. 272.6. Message Body The component will use the camel exchange in body as the rabbit mq message body. The camel exchange in object must be convertible to a byte array. Otherwise the producer will throw an exception of unsupported body type. 272.7. Samples To receive messages from a queue that is bound to an exchange A with the routing key B, from("rabbitmq:A?routingKey=B") To receive messages from a queue with a single thread with auto acknowledge disabled. from("rabbitmq:A?routingKey=B&threadPoolSize=1&autoAck=false") To send messages to an exchange called C to("rabbitmq:C") Declaring a headers exchange and queue from("rabbitmq:ex?exchangeType=headers&queue=q&bindingArgs=#bindArgs") and place corresponding Map<String, Object> with the id of "bindArgs" in the Registry. For example declaring a method in spring @Bean(name="bindArgs") public Map<String, Object> bindArgsBuilder() { return Collections.singletonMap("foo", "bar"); } 272.7.1. Issue when routing between exchanges (in Camel 2.20.x or older) If you for example want to route messages from one Rabbit exchange to another as shown in the example below with foo bar: from("rabbitmq:foo") .to("rabbitmq:bar") Then beware that Camel will route the message to itself, eg foo foo. So why is that? This is because the consumer that receives the message (eg from) provides the message header rabbitmq.EXCHANGE_NAME with the name of the exchange, eg foo . And when the Camel producer is sending the message to bar then the header rabbitmq.EXCHANGE_NAME will override this and instead send the message to foo . To avoid this you need to either: Remove the header: from("rabbitmq:foo") .removeHeader("rabbitmq.EXCHANGE_NAME") .to("rabbitmq:bar") Or turn on bridgeEndpoint mode on the producer: from("rabbitmq:foo") .to("rabbitmq:bar?bridgeEndpoint=true") From Camel 2.21 onwards this has been improved so you can easily route between exchanges. The header rabbitmq.EXCHANGE_NAME is not longer used by the producer to override the destination exchange. Instead a new header rabbitmq.EXCHANGE_OVERRIDE_NAME can be used to send to a different exchange. 
For example, to send to the cheese exchange, you can do: | [
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-rabbitmq</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>",
"rabbitmq://hostname[:port]/exchangeName?[options]",
"rabbitmq:exchangeName?[options]",
"rabbitmq:exchangeName",
"<bean id=\"rabbitConnectionFactory\" class=\"com.rabbitmq.client.ConnectionFactory\"> <property name=\"host\" value=\"localhost\"/> <property name=\"port\" value=\"5672\"/> <property name=\"username\" value=\"camel\"/> <property name=\"password\" value=\"bugsbunny\"/> </bean> And then refer to the connection factory in the endpoint uri as shown below: <camelContext> <route> <from uri=\"direct:rabbitMQEx2\"/> <to uri=\"rabbitmq:ex2?connectionFactory=#rabbitConnectionFactory\"/> </route> </camelContext>",
"<camelContext> <route> <from uri=\"direct:rabbitMQEx2\"/> <to uri=\"rabbitmq:ex2\"/> </route> </camelContext>",
"from(\"rabbitmq:A?routingKey=B\")",
"from(\"rabbitmq:A?routingKey=B&threadPoolSize=1&autoAck=false\")",
"to(\"rabbitmq:C\")",
"from(\"rabbitmq:ex?exchangeType=headers&queue=q&bindingArgs=#bindArgs\")",
"@Bean(name=\"bindArgs\") public Map<String, Object> bindArgsBuilder() { return Collections.singletonMap(\"foo\", \"bar\"); }",
"from(\"rabbitmq:foo\") .to(\"rabbitmq:bar\")",
"from(\"rabbitmq:foo\") .removeHeader(\"rabbitmq.EXCHANGE_NAME\") .to(\"rabbitmq:bar\")",
"from(\"rabbitmq:foo\") .to(\"rabbitmq:bar?bridgeEndpoint=true\")",
"from(\"rabbitmq:foo\") .setHeader(\"rabbitmq.EXCHANGE_OVERRIDE_NAME\", constant(\"cheese\")) .to(\"rabbitmq:bar\")"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/rabbitmq-component |
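To make the URI options and headers above concrete, here is a small Java DSL example, not taken from the Camel distribution, that combines a manually acknowledged consumer bound to a dead letter exchange with a producer that sets the routing key header. The exchange, queue, and routing key names are placeholders:
import org.apache.camel.builder.RouteBuilder;

public class RabbitMQExampleRoutes extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // Consume from a queue bound to the orders.ex exchange. autoAck=false means the
        // message is acknowledged only after the route completes, and rejected messages
        // can be dead-lettered to orders.dlx.
        from("rabbitmq:orders.ex?queue=orders.in&routingKey=order.created"
                + "&autoAck=false&deadLetterExchange=orders.dlx")
            .log("Received order: ${body}");

        // Publish to the same exchange; the rabbitmq.ROUTING_KEY header selects the
        // routing key used for this message.
        from("direct:publishOrder")
            .setHeader("rabbitmq.ROUTING_KEY", constant("order.created"))
            .to("rabbitmq:orders.ex");
    }
}
The same options can equally be expressed in the XML DSL, as in the camelContext snippets earlier in this chapter.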
Chapter 1. Backing up the undercloud node | Chapter 1. Backing up the undercloud node To back up the undercloud node, you configure the backup node, install the Relax-and-Recover tool on the undercloud node, and create the backup image. You can create backups as a part of your regular environment maintenance. In addition, you must back up the undercloud node before performing updates or upgrades. You can use the backups to restore the undercloud node to its previous state if an error occurs during an update or upgrade. 1.1. Supported backup formats and protocols The undercloud and control plane backup and restore process uses the open-source tool Relax-and-Recover (ReaR) to create and restore bootable backup images. ReaR is written in Bash and supports multiple image formats and multiple transport protocols. The following list shows the backup formats and protocols that Red Hat OpenStack Platform supports when you use ReaR to back up and restore the undercloud and control plane. Bootable media formats ISO File transport protocols SFTP NFS 1.2. Configuring the backup storage location Before you create a backup of the control plane nodes, configure the backup storage location in the bar-vars.yaml environment file. This file stores the key-value parameters that you want to pass to the backup execution. Procedure Log in to the undercloud as the stack user. Source the stackrc file: Create the bar-vars.yaml file: touch /home/stack/bar-vars.yaml In the bar-vars.yaml file, configure the backup storage location: If you use an NFS server, add the following parameters and set the values of the IP address of your NFS server and backup storage folder: tripleo_backup_and_restore_server: <ip_address> tripleo_backup_and_restore_shared_storage_folder: <backup_dir> By default, the tripleo_backup_and_restore_server parameter value is 192.168.24.1 . If you use an SFTP server, add the tripleo_backup_and_restore_output_url parameter and set the values of the URL and credentials of the SFTP server: tripleo_backup_and_restore_output_url: sftp://<user>:<password>@<backup_node>/ tripleo_backup_and_restore_backup_url: iso:///backup/ Replace <user> , <password> , and <backup_node> with the backup node URL and credentials. 1.3. Optional: Configuring backup encryption You can encrypt backups as an additional security measure to protect sensitive data. Procedure In the bar-vars.yaml file, add the following parameters: tripleo_backup_and_restore_crypt_backup_enabled: true tripleo_backup_and_restore_crypt_backup_password: <password> Replace <password> with the password you want to use to encrypt the backup. 1.4. Installing and configuring an NFS server on the backup node You can install and configure a new NFS server to store the backup file. To install and configure an NFS server on the backup node, create an inventory file, create an SSH key, and run the openstack undercloud backup command with the NFS server options. Important If you previously installed and configured an NFS or SFTP server, you do not need to complete this procedure. You enter the server information when you set up ReaR on the node that you want to back up. By default, the Relax and Recover (ReaR) IP address parameter for the NFS server is 192.168.24.1 . You must add the parameter tripleo_backup_and_restore_server to set the IP address value that matches your environment.
Procedure On the undercloud node, source the undercloud credentials: On the undercloud node, create an inventory file for the backup node: (undercloud) [stack@undercloud ~]USD cat <<'EOF'> ~/nfs-inventory.yaml [BackupNode] <backup_node> ansible_host=<ip_address> ansible_user=<user> EOF Replace <ip_address> and <user> with the values that apply to your environment. Copy the public SSH key from the undercloud node to the backup node. (undercloud) [stack@undercloud ~]USD ssh-copy-id -i ~/.ssh/id_rsa.pub <backup_node> Replace <backup_node> with the path and name of the backup node. Configure the NFS server on the backup node: (undercloud) [stack@undercloud ~]USD openstack undercloud backup --setup-nfs --extra-vars /home/stack/bar-vars.yaml --inventory /home/stack/nfs-inventory.yaml 1.5. Installing ReaR on the undercloud node Before you create a backup of the undercloud node, install and configure Relax and Recover (ReaR) on the undercloud. Prerequisites You have an NFS or SFTP server installed and configured on the backup node. For more information about creating a new NFS server, see Section 1.4, "Installing and configuring an NFS server on the backup node" . Procedure On the undercloud node, source the undercloud credentials: [stack@undercloud-0 ~]USD source stackrc If you use a custom stack name, add the --stack <stack_name> option to the tripleo-ansible-inventory command. If you have not done so before, extract the static ansible inventory file from the location in which it was saved during installation: (undercloud) [stack@undercloud ~]USD cp ~/overcloud-deploy/<stack>/tripleo-ansible-inventory.yaml ~/tripleo-inventory.yaml Replace <stack> with the name of your stack. By default, the name of the stack is overcloud . Install ReaR on the undercloud node: (undercloud) [stack@undercloud ~]USD openstack undercloud backup --setup-rear --extra-vars /home/stack/bar-vars.yaml --inventory /home/stack/tripleo-inventory.yaml If your system uses the UEFI boot loader, perform the following steps on the undercloud node: Install the following tools: USD sudo dnf install dosfstools efibootmgr Enable UEFI backup in the ReaR configuration file located in /etc/rear/local.conf by replacing the USING_UEFI_BOOTLOADER parameter value 0 with the value 1 . 1.6. Creating a standalone database backup of the undercloud nodes You can include standalone undercloud database backups in your routine backup schedule to provide additional data security. A full backup of an undercloud node includes a database backup of the undercloud node. But if a full undercloud restoration fails, you might lose access to the database portion of the full undercloud backup. In this case, you can recover the database from a standalone undercloud database backup. Procedure Create a database backup of the undercloud nodes: openstack undercloud backup --db-only The db backup file is stored in /home/stack with the name openstack-backup-mysql-<timestamp>.sql . Additional resources Section 1.8, "Creating a backup of the undercloud node" Section 3.5, "Restoring the undercloud node database manually" 1.7. Configuring Open vSwitch (OVS) interfaces for backup If you use an Open vSwitch (OVS) bridge in your environment, you must manually configure the OVS interfaces before you create a backup of the undercloud or control plane nodes. The restoration process uses this information to restore the network interfaces. 
Procedure In the /etc/rear/local.conf file, add the NETWORKING_PREPARATION_COMMANDS parameter in the following format: Replace <command_1> and <command_2> with commands that configure the network interface names or IP addresses. For example, you can add the ip link add br-ctlplane type bridge command to configure the control plane bridge name or add the ip link set eth0 up command to set the name of the interface. You can add more commands to the parameter based on your network configuration. 1.8. Creating a backup of the undercloud node To create a backup of the undercloud node, use the openstack undercloud backup command. You can then use the backup to restore the undercloud node to its previous state in case the node becomes corrupted or inaccessible. The backup of the undercloud node includes the backup of the database that runs on the undercloud node. Prerequisites You have an NFS or SFTP server installed and configured on the backup node. For more information about creating a new NFS server, see Section 1.4, "Installing and configuring an NFS server on the backup node" . You have installed ReaR on the undercloud node. For more information, see Section 1.5, "Installing ReaR on the undercloud node" . If you use an OVS bridge for your network interfaces, you have configured the OVS interfaces. For more information, see Section 1.7, "Configuring Open vSwitch (OVS) interfaces for backup" . Procedure Log in to the undercloud as the stack user. Retrieve the MySQL root password: [stack@undercloud ~]USD PASSWORD=USD(sudo /bin/hiera -c /etc/puppet/hiera.yaml mysql::server::root_password) Create a database backup of the undercloud node: [stack@undercloud ~]USD sudo podman exec mysql bash -c "mysqldump -uroot -pUSDPASSWORD --opt --all-databases" | sudo tee /root/undercloud-all-databases.sql On the undercloud node, source the undercloud credentials: [stack@undercloud-0 ~]USD source stackrc If you have not done so before, create an inventory file and use the tripleo-ansible-inventory command to generate a static inventory file that contains hosts and variables for all the overcloud nodes: (undercloud) [stack@undercloud ~]USD tripleo-ansible-inventory \ --ansible_ssh_user tripleo-admin \ --static-yaml-inventory /home/stack/tripleo-inventory.yaml Create a backup of the undercloud node: (undercloud) [stack@undercloud ~]USD openstack undercloud backup --inventory /home/stack/tripleo-inventory.yaml 1.9. Scheduling undercloud node backups with cron You can schedule backups of the undercloud nodes with ReaR by using the Ansible backup-and-restore role. You can view the logs in the /var/log/rear-cron directory. Prerequisites You have an NFS or SFTP server installed and configured on the backup node. For more information about creating a new NFS server, see Section 1.4, "Installing and configuring an NFS server on the backup node" . You have installed ReaR on the undercloud and control plane nodes. For more information, see Section 2.4, "Installing ReaR on the control plane nodes" . You have sufficient available disk space at your backup location to store the backup. Procedure To schedule a backup of your undercloud node, run the following command.
The default schedule is Sundays at midnight: openstack undercloud backup --cron Optional: Customize the scheduled backup according to your deployment: To change the default backup schedule, pass a different cron schedule on the tripleo_backup_and_restore_cron parameter: openstack undercloud backup --cron --extra-vars '{"tripleo_backup_and_restore_cron": "0 0 * * 0"}' To define additional parameters that are added to the backup command when cron runs the scheduled backup, pass the tripleo_backup_and_restore_cron_extra parameter to the backup command, as shown in the following example: openstack undercloud backup --cron --extra-vars '{"tripleo_backup_and_restore_cron_extra":"--extra-vars bar-vars.yaml --inventory /home/stack/tripleo-inventory.yaml"}' To change the default user that executes the backup, pass the tripleo_backup_and_restore_cron_user parameter to the backup command, as shown in the following example: openstack undercloud backup --cron --extra-vars '{"tripleo_backup_and_restore_cron_user": "root"} | [
"source ~/stackrc",
"touch /home/stack/bar-vars.yaml",
"tripleo_backup_and_restore_server: <ip_address> tripleo_backup_and_restore_shared_storage_folder: <backup_dir>",
"tripleo_backup_and_restore_output_url: sftp://<user>:<password>@<backup_node>/ tripleo_backup_and_restore_backup_url: iso:///backup/",
"tripleo_backup_and_restore_crypt_backup_enabled: true tripleo_backup_and_restore_crypt_backup_password: <password>",
"[stack@undercloud-0 ~]USD source stackrc (undercloud) [stack@undercloud ~]USD",
"(undercloud) [stack@undercloud ~]USD cat <<'EOF'> ~/nfs-inventory.yaml [BackupNode] <backup_node> ansible_host=<ip_address> ansible_user=<user> EOF",
"(undercloud) [stack@undercloud ~]USD ssh-copy-id -i ~/.ssh/id_rsa.pub <backup_node>",
"(undercloud) [stack@undercloud ~]USD openstack undercloud backup --setup-nfs --extra-vars /home/stack/bar-vars.yaml --inventory /home/stack/nfs-inventory.yaml",
"[stack@undercloud-0 ~]USD source stackrc",
"(undercloud) [stack@undercloud ~]USD cp ~/overcloud-deploy/<stack>/tripleo-ansible-inventory.yaml ~/tripleo-inventory.yaml",
"(undercloud) [stack@undercloud ~]USD openstack undercloud backup --setup-rear --extra-vars /home/stack/bar-vars.yaml --inventory /home/stack/tripleo-inventory.yaml",
"sudo dnf install dosfstools efibootmgr",
"openstack undercloud backup --db-only",
"NETWORKING_PREPARATION_COMMANDS=('<command_1>' '<command_2>' ...')",
"[stack@undercloud ~]USD PASSWORD=USD(sudo /bin/hiera -c /etc/puppet/hiera.yaml mysql::server::root_password)",
"[stack@undercloud ~]USD sudo podman exec mysql bash -c \"mysqldump -uroot -pUSDPASSWORD --opt --all-databases\" | sudo tee /root/undercloud-all-databases.sql",
"[stack@undercloud-0 ~]USD source stackrc",
"(undercloud) [stack@undercloud ~]USD tripleo-ansible-inventory --ansible_ssh_user tripleo-admin --static-yaml-inventory /home/stack/tripleo-inventory.yaml",
"(undercloud) [stack@undercloud ~]USD openstack undercloud backup --inventory /home/stack/tripleo-inventory.yaml",
"openstack undercloud backup --cron",
"openstack undercloud backup --cron --extra-vars '{\"tripleo_backup_and_restore_cron\": \"0 0 * * 0\"}'",
"openstack undercloud backup --cron --extra-vars '{\"tripleo_backup_and_restore_cron_extra\":\"--extra-vars bar-vars.yaml --inventory /home/stack/tripleo-inventory.yaml\"}'",
"openstack undercloud backup --cron --extra-vars '{\"tripleo_backup_and_restore_cron_user\": \"root\"}"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/backing_up_and_restoring_the_undercloud_and_control_plane_nodes/assembly_backing-up-the-undercloud-node_br-undercloud-ctlplane |
Chapter 3. Deploying a Red Hat Enterprise Linux image as an EC2 instance on Amazon Web Services | Chapter 3. Deploying a Red Hat Enterprise Linux image as an EC2 instance on Amazon Web Services To set up a High Availability (HA) deployment of RHEL on Amazon Web Services (AWS), you can deploy EC2 instances of RHEL to a cluster on AWS. Important While you can create a custom VM from an ISO image, Red Hat recommends that you use the Red Hat Image Builder product to create customized images for use on specific cloud providers. With Image Builder, you can create and upload an Amazon Machine Image (AMI) in the ami format. See Composing a Customized RHEL System Image for more information. Note For a list of Red Hat products that you can use securely on AWS, see Red Hat on Amazon Web Services . Prerequisites Sign up for a Red Hat Customer Portal account. Sign up for AWS and set up your AWS resources. See Setting Up with Amazon EC2 for more information. 3.1. Red Hat Enterprise Linux Image options on AWS The following table lists image choices and notes the differences in the image options. Table 3.1. Image options Image option Subscriptions Sample scenario Considerations Deploy a Red Hat Gold Image. Use your existing Red Hat subscriptions. Select a Red Hat Gold Image on AWS. For details on Gold Images and how to access them on AWS, see the Red Hat Cloud Access Reference Guide . The subscription includes the Red Hat product cost; you pay Amazon for all other instance costs. Red Hat provides support directly for Cloud Access images. Deploy a custom image that you move to AWS. Use your existing Red Hat subscriptions. Upload your custom image, and attach your subscriptions. The subscription includes the Red Hat product cost; you pay Amazon for all other instance costs. Red Hat provides support directly for custom RHEL images. Deploy an existing Amazon image that includes RHEL. The AWS EC2 images include a Red Hat product. Select a RHEL image when you launch an instance on the AWS Management Console , or choose an image from the AWS Marketplace . You pay Amazon hourly on a pay-as-you-go model. Such images are called "on-demand" images. Amazon provides support for on-demand images. Red Hat provides updates to the images. AWS makes the updates available through the Red Hat Update Infrastructure (RHUI). Note You can create a custom image for AWS by using Red Hat Image Builder. See Composing a Customized RHEL System Image for more information. Important You cannot convert an on-demand instance to a custom RHEL instance. To change from an on-demand image to a custom RHEL bring-your-own-subscription (BYOS) image: Create a new custom RHEL instance and migrate data from your on-demand instance. Cancel your on-demand instance after you migrate your data to avoid double billing. Additional resources Composing a Customized RHEL System Image AWS Management Console AWS Marketplace 3.2. Understanding base images To create a base VM from an ISO image, you can use preconfigured base images and their configuration settings. 3.2.1. Using a custom base image To manually configure a virtual machine (VM), first create a base (starter) VM image. Then, you can modify configuration settings and add the packages the VM requires to operate on the cloud. You can make additional configuration changes for your specific application after you upload the image. Additional resources Red Hat Enterprise Linux 3.2.2. Virtual machine configuration settings Cloud VMs must have the following configuration settings. Table 3.2.
VM configuration settings Setting Recommendation ssh ssh must be enabled to provide remote access to your VMs. dhcp The primary virtual adapter should be configured for dhcp. 3.3. Creating a base VM from an ISO image To create a RHEL 8 base image from an ISO image, enable your host machine for virtualization and create a RHEL virtual machine (VM). Prerequisites Virtualization is enabled on your host machine. You have downloaded the latest Red Hat Enterprise Linux ISO image from the Red Hat Customer Portal and moved the image to /var/lib/libvirt/images . 3.3.1. Creating a VM from the RHEL ISO image Procedure Ensure that you have enabled your host machine for virtualization. See Enabling virtualization in RHEL 8 for information and procedures. Create and start a basic Red Hat Enterprise Linux VM. For instructions, see Creating virtual machines . If you use the command line to create your VM, ensure that you set the default memory and CPUs to the capacity you want for the VM. Set your virtual network interface to virtio . For example, the following command creates a kvmtest VM by using the /home/username/Downloads/rhel8.iso image: If you use the web console to create your VM, follow the procedure in Creating virtual machines by using the web console , with these caveats: Do not check Immediately Start VM . Change your Memory size to your preferred settings. Before you start the installation, ensure that you have changed Model under Virtual Network Interface Settings to virtio and change your vCPUs to the capacity settings you want for the VM. 3.3.2. Completing the RHEL installation To finish the installation of a RHEL system that you want to deploy on Amazon Web Services (AWS), customize the Installation Summary view, begin the installation, and enable root access once the VM launches. Procedure Choose the language you want to use during the installation process. On the Installation Summary view: Click Software Selection and check Minimal Install . Click Done . Click Installation Destination and check Custom under Storage Configuration . Verify at least 500 MB for /boot . You can use the remaining space for root / . Standard partitions are recommended, but you can use Logical Volume Manager (LVM). You can use xfs, ext4, or ext3 for the file system. Click Done when you are finished with changes. Click Begin Installation . Set a Root Password . Create other users as applicable. Reboot the VM and log in as root once the installation completes. Configure the image. Register the VM and enable the Red Hat Enterprise Linux 8 repository. Ensure that the cloud-init package is installed and enabled. Important: This step is only for VMs you intend to upload to AWS. For AMD64 or Intel 64 (x86_64)VMs, install the nvme , xen-netfront , and xen-blkfront drivers. For ARM 64 (aarch64) VMs, install the nvme driver. Including these drivers removes the possibility of a dracut time-out. Alternatively, you can add the drivers to /etc/dracut.conf.d/ and then enter dracut -f to overwrite the existing initramfs file. Power down the VM. Additional resources Introduction to cloud-init 3.4. Uploading the Red Hat Enterprise Linux image to AWS To be able to run a RHEL instance on Amazon Web Services (AWS), you must first upload your RHEL image to AWS. 3.4.1. Installing the AWS CLI Many of the procedures required to manage HA clusters in AWS include using the AWS CLI. Prerequisites You have created an AWS Access Key ID and an AWS Secret Access Key, and have access to them. 
For instructions and details, see Quickly Configuring the AWS CLI . Procedure Install the AWS command line tools by using the yum command. Use the aws --version command to verify that you installed the AWS CLI. Configure the AWS command line client according to your AWS access details. Additional resources Quickly Configuring the AWS CLI AWS command line tools 3.4.2. Creating an S3 bucket Importing to AWS requires an Amazon S3 bucket. An Amazon S3 bucket is an Amazon resource where you store objects. As part of the process for uploading your image, you need to create an S3 bucket and then move your image to the bucket. Procedure Launch the Amazon S3 Console . Click Create Bucket . The Create Bucket dialog appears. In the Name and region view: Enter a Bucket name . Enter a Region . Click Next . In the Configure options view, select the desired options and click Next . In the Set permissions view, change or accept the default options and click Next . Review your bucket configuration. Click Create bucket . Note Alternatively, you can use the AWS CLI to create a bucket. For example, the aws s3 mb s3://my-new-bucket command creates an S3 bucket named my-new-bucket . See the AWS CLI Command Reference for more information about the mb command. Additional resources Amazon S3 Console AWS CLI Command Reference 3.4.3. Creating the vmimport role To be able to import a RHEL virtual machine (VM) to Amazon Web Services (AWS) by using the VM Import service, you need to create the vmimport role. For more information, see Importing a VM as an image using VM Import/Export in the Amazon documentation. Procedure Create a file named trust-policy.json and include the following policy. Save the file on your system and note its location. Use the create-role command to create the vmimport role. Specify the full path to the location of the trust-policy.json file. Prefix file:// to the path. For example: Create a file named role-policy.json and include the following policy. Replace s3-bucket-name with the name of your S3 bucket. Use the put-role-policy command to attach the policy to the role you created. Specify the full path of the role-policy.json file. For example: Additional resources VM Import Service Role Required Service Role 3.4.4. Converting and pushing your image to S3 By using the qemu-img command, you can convert your image, so that you can push it to S3. The samples are representative; they convert an image formatted in the qcow2 file format to raw format. Amazon accepts images in OVA , VHD , VHDX , VMDK , and raw formats. See How VM Import/Export Works for more information about image formats that Amazon accepts. Procedure Run the qemu-img command to convert your image. For example: Push the image to S3. Note This procedure could take a few minutes. After completion, you can check that your image uploaded successfully to your S3 bucket by using the AWS S3 Console . Additional resources How VM Import/Export Works AWS S3 Console 3.4.5. Importing your image as a snapshot To launch a RHEL instance in the Amazon Elastic Compute Cloud (EC2) service, you require an Amazon Machine Image (AMI). To create an AMI of your system, you must first upload a snapshot of your RHEL system image to EC2. Procedure Create a file to specify a bucket and path for your image. Name the file containers.json . In the sample that follows, replace s3-bucket-name with your bucket name and s3-key with your key. You can get the key for the image by using the Amazon S3 Console. Import the image as a snapshot.
This example uses a public Amazon S3 file; you can use the Amazon S3 Console to change permissions settings on your bucket. The terminal displays a message such as the following. Note the ImportTaskID within the message. Track the progress of the import by using the describe-import-snapshot-tasks command. Include the ImportTaskID . The returned message shows the current status of the task. When complete, Status shows completed . Within the status, note the snapshot ID. Additional resources Amazon S3 Console Importing a Disk as a Snapshot Using VM Import/Export 3.4.6. Creating an AMI from the uploaded snapshot To launch a RHEL instance in the Amazon Elastic Compute Cloud (EC2) service, you require an Amazon Machine Image (AMI). To create an AMI of your system, you can use a RHEL system snapshot that you previously uploaded. Procedure Go to the AWS EC2 Dashboard. Under Elastic Block Store , select Snapshots . Search for your snapshot ID (for example, snap-0e718930bd72bcda0 ). Right-click on the snapshot and select Create image . Name your image. Under Virtualization type , choose Hardware-assisted virtualization . Click Create . In the note regarding image creation, there is a link to your image. Click on the image link. Your image shows up under Images>AMIs . Note Alternatively, you can use the AWS CLI register-image command to create an AMI from a snapshot. See register-image for more information. An example follows. You must specify the root device volume /dev/sda1 as your root-device-name . For conceptual information about device mapping for AWS, see Example block device mapping . 3.4.7. Launching an instance from the AMI To launch and configure an Amazon Elastic Compute Cloud (EC2) instance, use an Amazon Machine Image (AMI). Procedure From the AWS EC2 Dashboard, select Images and then AMIs . Right-click on your image and select Launch . Choose an Instance Type that meets or exceeds the requirements of your workload. See Amazon EC2 Instance Types for information about instance types. Click Next : Configure Instance Details . Enter the Number of instances you want to create. For Network , select the VPC you created when setting up your AWS environment . Select a subnet for the instance or create a new subnet. Select Enable for Auto-assign Public IP. Note These are the minimum configuration options necessary to create a basic instance. Review additional options based on your application requirements. Click Next : Add Storage . Verify that the default storage is sufficient. Click Next : Add Tags . Note Tags can help you manage your AWS resources. See Tagging Your Amazon EC2 Resources for information about tagging. Click Next : Configure Security Group . Select the security group you created when setting up your AWS environment . Click Review and Launch . Verify your selections. Click Launch . You are prompted to select an existing key pair or create a new key pair. Select the key pair you created when setting up your AWS environment . Note Verify that the permissions for your private key are correct. Use the command options chmod 400 <keyname>.pem to change the permissions, if necessary. Click Launch Instances . Click View Instances . You can name the instance(s). You can now launch an SSH session to your instance(s) by selecting an instance and clicking Connect . Use the example provided for A standalone SSH client . Note Alternatively, you can launch an instance by using the AWS CLI. See Launching, Listing, and Terminating Amazon EC2 Instances in the Amazon documentation for more information.
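For example, a launch from the AWS CLI might look similar to the following command; the instance type shown and the <ami_id> , <subnet_id> , <security_group_id> , and <key_name> values are placeholders that you must replace with values from your own AWS environment: aws ec2 run-instances --image-id <ami_id> --instance-type m5.large --subnet-id <subnet_id> --security-group-ids <security_group_id> --key-name <key_name> --count 1 . Treat this command as an illustrative sketch only, and review the instance type, storage, and networking options against your workload requirements before you use it.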
Additional resources AWS Management Console Setting Up with Amazon EC2 Amazon EC2 Instances Amazon EC2 Instance Types 3.4.8. Attaching Red Hat subscriptions Using the subscription-manager command, you can register and attach your Red Hat subscription to a RHEL instance. Prerequisites You must have enabled your subscriptions. Procedure Register your system. Attach your subscriptions. You can use an activation key to attach subscriptions. See Creating Red Hat Customer Portal Activation Keys for more information. Alternatively, you can manually attach a subscription by using the ID of the subscription pool (Pool ID). See Attaching a host-based subscription to hypervisors . Optional: To collect various system metrics about the instance in the Red Hat Hybrid Cloud Console , you can register the instance with Red Hat Insights . For information on further configuration of Red Hat Insights, see Client Configuration Guide for Red Hat Insights . Additional resources Creating Red Hat Customer Portal Activation Keys Attaching a host-based subscription to hypervisors Client Configuration Guide for Red Hat Insights 3.4.9. Setting up automatic registration on AWS Gold Images To make deploying RHEL 8 virtual machines on Amazon Web Services (AWS) faster and more comfortable, you can set up Gold Images of RHEL 8 to be automatically registered to the Red Hat Subscription Manager (RHSM). Prerequisites You have downloaded the latest RHEL 8 Gold Image for AWS. For instructions, see Using Gold Images on AWS . Note An AWS account can only be attached to a single Red Hat account at a time. Therefore, ensure no other users require access to the AWS account before attaching it to your Red Hat one. Procedure Upload the Gold Image to AWS. For instructions, see Uploading the Red Hat Enterprise Linux image to AWS . Create VMs by using the uploaded image. They will be automatically subscribed to RHSM. Verification In a RHEL 8 VM created using the above instructions, verify the system is registered to RHSM by executing the subscription-manager identity command. On a successfully registered system, this displays the UUID of the system. For example: Additional resources AWS Management Console Adding cloud integrations to the Hybrid Cloud Console 3.5. Additional resources Red Hat Cloud Access Reference Guide Red Hat in the Public Cloud Red Hat Enterprise Linux on Amazon EC2 - FAQs Setting Up with Amazon EC2 Red Hat on Amazon Web Services | [
"virt-install --name kvmtest --memory 2048 --vcpus 2 --cdrom /home/username/Downloads/rhel8.iso,bus=virtio --os-variant=rhel8.0",
"subscription-manager register --auto-attach",
"yum install cloud-init systemctl enable --now cloud-init.service",
"dracut -f --add-drivers \"nvme xen-netfront xen-blkfront\"",
"dracut -f --add-drivers \"nvme\"",
"yum install awscli",
"aws --version aws-cli/1.19.77 Python/3.6.15 Linux/5.14.16-201.fc34.x86_64 botocore/1.20.77",
"aws configure AWS Access Key ID [None]: AWS Secret Access Key [None]: Default region name [None]: Default output format [None]:",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Service\": \"vmie.amazonaws.com\" }, \"Action\": \"sts:AssumeRole\", \"Condition\": { \"StringEquals\":{ \"sts:Externalid\": \"vmimport\" } } } ] }",
"aws iam create-role --role-name vmimport --assume-role-policy-document file:///home/sample/ImportService/trust-policy.json",
"{ \"Version\":\"2012-10-17\", \"Statement\":[ { \"Effect\":\"Allow\", \"Action\":[ \"s3:GetBucketLocation\", \"s3:GetObject\", \"s3:ListBucket\" ], \"Resource\":[ \"arn:aws:s3:::s3-bucket-name\", \"arn:aws:s3:::s3-bucket-name/*\" ] }, { \"Effect\":\"Allow\", \"Action\":[ \"ec2:ModifySnapshotAttribute\", \"ec2:CopySnapshot\", \"ec2:RegisterImage\", \"ec2:Describe*\" ], \"Resource\":\"*\" } ] }",
"aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document file:///home/sample/ImportService/role-policy.json",
"qemu-img convert -f qcow2 -O raw rhel-8.0-sample.qcow2 rhel-8.0-sample.raw",
"aws s3 cp rhel-8.0-sample.raw s3://s3-bucket-name",
"{ \"Description\": \"rhel-8.0-sample.raw\", \"Format\": \"raw\", \"UserBucket\": { \"S3Bucket\": \"s3-bucket-name\", \"S3Key\": \"s3-key\" } }",
"aws ec2 import-snapshot --disk-container file://containers.json",
"{ \"SnapshotTaskDetail\": { \"Status\": \"active\", \"Format\": \"RAW\", \"DiskImageSize\": 0.0, \"UserBucket\": { \"S3Bucket\": \"s3-bucket-name\", \"S3Key\": \"rhel-8.0-sample.raw\" }, \"Progress\": \"3\", \"StatusMessage\": \"pending\" }, \"ImportTaskId\": \"import-snap-06cea01fa0f1166a8\" }",
"aws ec2 describe-import-snapshot-tasks --import-task-ids import-snap-06cea01fa0f1166a8",
"aws ec2 register-image --name \"myimagename\" --description \"myimagedescription\" --architecture x86_64 --virtualization-type hvm --root-device-name \"/dev/sda1\" --ena-support --block-device-mappings \"{\\\"DeviceName\\\": \\\"/dev/sda1\\\",\\\"Ebs\\\": {\\\"SnapshotId\\\": \\\"snap-0ce7f009b69ab274d\\\"}}\"",
"subscription-manager register --auto-attach",
"insights-client register --display-name <display-name-value>",
"subscription-manager identity system identity: fdc46662-c536-43fb-a18a-bbcb283102b7 name: 192.168.122.222 org name: 6340056 org ID: 6340056"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/deploying_rhel_8_on_amazon_web_services/assembly_deploying-a-virtual-machine-on-aws_cloud-content-AWS |
Chapter 47. Configuring network devices to accept traffic from all MAC addresses | Chapter 47. Configuring network devices to accept traffic from all MAC addresses Network devices usually intercept and read packets that their controller is programmed to receive. You can configure the network devices to accept traffic from all MAC addresses in a virtual switch or at the port group level. You can use this network mode to: Diagnose network connectivity issues Monitor network activity for security reasons Intercept private data-in-transit or intrusion in the network You can enable this mode for any kind of network device, except InfiniBand . 47.1. Temporarily configuring a device to accept all traffic You can use the ip utility to temporarily configure a network device to accept all traffic regardless of the MAC addresses. Procedure Optional: Display the network interfaces to identify the one for which you want to receive all traffic: Modify the device to enable or disable this property: To enable the accept-all-mac-addresses mode for enp1s0 : To disable the accept-all-mac-addresses mode for enp1s0 : Verification Verify that the accept-all-mac-addresses mode is enabled: The PROMISC flag in the device description indicates that the mode is enabled. 47.2. Permanently configuring a network device to accept all traffic using nmcli You can use the nmcli utility to permanently configure a network device to accept all traffic regardless of the MAC addresses. Procedure Optional: Display the network interfaces to identify the one for which you want to receive all traffic: You can create a new connection, if you do not have any. Modify the network device to enable or disable this property. To enable the ethernet.accept-all-mac-addresses mode for enp1s0 : To disable the accept-all-mac-addresses mode for enp1s0 : Apply the changes, reactivate the connection: Verification Verify that the ethernet.accept-all-mac-addresses mode is enabled: The 802-3-ethernet.accept-all-mac-addresses: true indicates that the mode is enabled. 47.3. Permanently configuring a network device to accept all traffic using nmstatectl Use the nmstatectl utility to configure a device to accept all traffic regardless of the MAC addresses through the Nmstate API. The Nmstate API ensures that, after setting the configuration, the result matches the configuration file. If anything fails, nmstatectl automatically rolls back the changes to avoid leaving the system in an incorrect state. Prerequisites The nmstate package is installed. The enp1s0.yml file that you used to configure the device is available. Procedure Edit the existing enp1s0.yml file for the enp1s0 connection and add the following content to it: --- interfaces: - name: enp1s0 type: ethernet state: up accept-all-mac-addresses: true These settings configure the enp1s0 device to accept all traffic. Apply the network settings: Verification Verify that the 802-3-ethernet.accept-all-mac-addresses mode is enabled: The 802-3-ethernet.accept-all-mac-addresses: true indicates that the mode is enabled. Additional resources nmstatectl(8) man page on your system /usr/share/doc/nmstate/examples/ directory | [
"ip address show 1: enp1s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000 link/ether 98:fa:9b:a4:34:09 brd ff:ff:ff:ff:ff:ff",
"ip link set enp1s0 promisc on",
"ip link set enp1s0 promisc off",
"ip link show enp1s0 1: enp1s0: <NO-CARRIER,BROADCAST,MULTICAST, PROMISC ,UP> mtu 1500 qdisc fq_codel state DOWN mode DEFAULT group default qlen 1000 link/ether 98:fa:9b:a4:34:09 brd ff:ff:ff:ff:ff:ff",
"ip address show 1: enp1s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000 link/ether 98:fa:9b:a4:34:09 brd ff:ff:ff:ff:ff:ff",
"nmcli connection modify enp1s0 ethernet.accept-all-mac-addresses yes",
"nmcli connection modify enp1s0 ethernet.accept-all-mac-addresses no",
"nmcli connection up enp1s0",
"nmcli connection show enp1s0 802-3-ethernet.accept-all-mac-addresses:1 (true)",
"--- interfaces: - name: enp1s0 type: ethernet state: up accept -all-mac-address: true",
"nmstatectl apply ~/enp1s0.yml",
"nmstatectl show enp1s0 interfaces: - name: enp1s0 type: ethernet state: up accept-all-mac-addresses: true"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_networking/assembly_configuring-network-devices-to-accept-traffic-from-all-mac-addresses_configuring-and-managing-networking |
Chapter 12. Network Interface Bonding | Chapter 12. Network Interface Bonding This chapter defines some of the bonding options you can use in your custom network configuration. 12.1. Network Interface Bonding and Link Aggregation Control Protocol (LACP) You can bundle multiple physical NICs together to form a single logical channel known as a bond. Bonds can be configured to provide redundancy for high availability systems or increased throughput. Red Hat OpenStack Platform supports Linux bonds, Open vSwitch (OVS) kernel bonds, and OVS-DPDK bonds. The bonds can be used with the optional Link Aggregation Control Protocol (LACP). LACP is a negotiation protocol that creates a dynamic bond for load balancing and fault tolerance. Red Hat recommends the use of Linux kernel bonds (bond type: linux_bond) over OvS kernel bonds (bond type: ovs_bond). User mode bonds (bond type: ovs_dpdk_bond) should be used with user mode bridges (type: ovs_user_bridge) as opposed to kernel mode bridges (type: ovs_bridge). However, don't combine ovs_bridge and ovs_user_bridge on the same node. On control and storage networks, Red Hat recommends the use of Linux bonds with VLAN and LACP, because OVS bonds carry the potential for control plane disruption that can occur when OVS or the neutron agent is restarted for updates, hot fixes, and other events. The Linux bond/LACP/VLAN configuration provides NIC management without the OVS disruption potential. Here is an example configuration of a Linux bond with one VLAN. The following example shows a Linux bond plugged into the OVS bridge The following example shows an OVS user space bridge: 12.2. Open vSwitch Bonding Options The Overcloud provides networking through Open vSwitch (OVS). The following table describes support for OVS kernel and OVS-DPDK for bonded interfaces. The OVS/OVS-DPDK balance-tcp mode is available as a technology preview only. Note This support requires Open vSwitch 2.11 or later. OVS Bond mode Application Notes Compatible LACP options active-backup High availability (active-passive) active, passive, or off balance-slb Increased throughput (active-active) Performance is affected by extra parsing per packet. There is a potential for vhost-user lock contention. active, passive, or off balance-tcp (tech preview only ) Not recommended (active-active) Recirculation needed for L4 hashing has a performance impact. As with balance-slb, performance is affected by extra parsing per packet and there is a potential for vhost-user lock contention. LACP must be enabled. active or passive You can configure a bonded interface in the network environment file using the BondInterfaceOvsOptions parameter as shown in this example: 12.3. Linux bonding options You can use LACP with Linux bonding in your network interface templates. For example: mode - enables LACP. lacp_rate - defines whether LACP packets are sent every 1 second, or every 30 seconds. updelay - defines the minimum amount of time that an interface must be active before it is used for traffic (this helps mitigate port flapping outages). miimon - the interval in milliseconds that is used for monitoring the port state using the driver's MIIMON functionality. 12.4. General bonding options The following table provides some explanation of these options and some alternatives depending on your hardware. Table 12.1. Bonding Options bond_mode=balance-slb Balances flows based on source MAC address and output VLAN, with periodic rebalancing as traffic patterns change. 
Bonding with balance-slb allows a limited form of load balancing without the remote switch's knowledge or cooperation. SLB assigns each source MAC and VLAN pair to a link and transmits all packets from that MAC and VLAN through that link. This mode uses a simple hashing algorithm based on source MAC address and VLAN number, with periodic rebalancing as traffic patterns change. This mode is similar to mode 2 bonds used by the Linux bonding driver. This mode can be used to provide load balancing even when the switch is not configured to use LACP. bond_mode=active-backup This mode offers active/standby failover where the standby NIC resumes network operations when the active connection fails. Only one MAC address is presented to the physical switch. This mode does not require any special switch support or configuration, and works when the links are connected to separate switches. This mode does not provide load balancing. lacp=[active|passive|off] Controls the Link Aggregation Control Protocol (LACP) behavior. Only certain switches support LACP. If your switch does not support LACP, use bond_mode=balance-slb or bond_mode=active-backup . other-config:lacp-fallback-ab=true Sets the LACP behavior to switch to bond_mode=active-backup as a fallback. other_config:lacp-time=[fast|slow] Set the LACP heartbeat to 1 second (fast) or 30 seconds (slow). The default is slow. other_config:bond-detect-mode=[miimon|carrier] Set the link detection to use miimon heartbeats (miimon) or monitor carrier (carrier). The default is carrier. other_config:bond-miimon-interval=100 If using miimon, set the heartbeat interval in milliseconds. other_config:bond_updelay=1000 Number of milliseconds a link must be up to be activated to prevent flapping. other_config:bond-rebalance-interval=10000 Milliseconds between rebalancing flows between bond members. Set to zero to disable. | [
"params: USDnetwork_config: network_config: - type: linux_bond name: bond_api bonding_options: \"mode=active-backup\" use_dhcp: false dns_servers: ` get_param: DnsServers members: - type: interface name: nic3 primary: true - type: interface name: nic4 - type: vlan vlan_id: get_param: InternalApiNetworkVlanID device: bond_api addresses: - ip_netmask: get_param: InternalApiIpSubnet",
"params: USDnetwork_config: network_config: - type: ovs_bridge name: br-tenant use_dhcp: false mtu: 9000 members: - type: linux_bond name: bond_tenant bonding_options: \"mode=802.3ad updelay=1000 miimon=100\" use_dhcp: false dns_servers: get_param: DnsServers members: - type: interface name: p1p1 primary: true - type: interface name: p1p2 - type: vlan device: bond_tenant vlan_id: {get_param: TenantNetworkVlanID} addresses: - ip_netmask: {get_param: TenantIpSubnet}",
"params: USDnetwork_config: network_config: - type: ovs_user_bridge name: br-ex use_dhcp: false members: - type: ovs_dpdk_bond name: dpdkbond0 mtu: 2140 ovs_options: {get_param: BondInterfaceOvsOptions} #ovs_extra: #- set interface dpdk0 mtu_request=USDMTU #- set interface dpdk1 mtu_request=USDMTU rx_queue: get_param: NumDpdkInterfaceRxQueues members: - type: ovs_dpdk_port name: dpdk0 mtu: 2140 members: - type: interface name: p1p1 - type: ovs_dpdk_port name: dpdk1 mtu: 2140 members: - type: interface name: p1p2",
"parameter_defaults: BondInterfaceOvsOptions: \"bond_mode=balance-slb\"",
"- type: linux_bond name: bond1 members: - type: interface name: nic2 - type: interface name: nic3 bonding_options: \"mode=802.3ad lacp_rate=[fast|slow] updelay=1000 miimon=100\""
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/advanced_overcloud_customization/overcloud-network-interface-bonding |
Deploying OpenShift Data Foundation using IBM Power | Deploying OpenShift Data Foundation using IBM Power Red Hat OpenShift Data Foundation 4.15 Instructions on deploying Red Hat OpenShift Data Foundation on IBM Power Red Hat Storage Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/deploying_openshift_data_foundation_using_ibm_power/index |
6.15. Using Different Applets for Different SCP Versions | 6.15. Using Different Applets for Different SCP Versions In Certificate System, the following parameter in the /var/lib/ instance_name /tps/conf/CS.cfg file specifies which applet should be loaded for all Secure Channel Protocol (SCP) versions for each token operation: However, you can also set individual applets for specific SCP versions, by adding the following parameter: Certificate System supports setting individual protocol versions for the following operations: format enroll pinReset Example 6.3. Setting Protocol Versions for Enrollment Operations To configure a specific applet for SCP03 and a different applet for all other protocols when performing enrollment operations for the userKey token: Edit the /var/lib/ instance_name /tps/conf/CS.cfg file: Set the op.enroll.userKey.update.applet.requiredVersion parameter to specify the applet used by default. For example: Set the op.enroll.userKey.update.applet.requiredVersion.prot.3 parameter to configure the applet Certificate System uses for the SCP03 protocol. For example: Restart Certificate System: For details about enabling SCP03 for Giesecke & Devrient (G&D) Smart Cafe 6 smart cards in a TKS, see Section 6.12, "Setting Up New Key Sets" . | [
"op. operation . token_type .update.applet.requiredVersion= version",
"op. operation . token_type .update.applet.requiredVersion.prot. protocol_version = version",
"op.enroll.userKey.update.applet.requiredVersion=1.4.58768072",
"op.enroll.userKey.update.applet.requiredVersion.prot. 3 =1.5.558cdcff",
"pki-server restart instance_name"
] | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/using_different_applets_for_different_scp_versions |
Chapter 6. GenericKafkaListener schema reference | Chapter 6. GenericKafkaListener schema reference Used in: KafkaClusterSpec Full list of GenericKafkaListener schema properties Configures listeners to connect to Kafka brokers within and outside OpenShift. Configure Kafka broker listeners using the listeners property in the Kafka resource. Listeners are defined as an array. Example Kafka resource showing listener configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: #... listeners: - name: plain port: 9092 type: internal tls: false - name: tls port: 9093 type: internal tls: true authentication: type: tls - name: external1 port: 9094 type: route tls: true - name: external2 port: 9095 type: ingress tls: true authentication: type: tls configuration: bootstrap: host: bootstrap.myingress.com brokers: - broker: 0 host: broker-0.myingress.com - broker: 1 host: broker-1.myingress.com - broker: 2 host: broker-2.myingress.com #... The name and port must be unique within the Kafka cluster. By specifying a unique name and port for each listener, you can configure multiple listeners. The name can be up to 25 characters long, comprising lower-case letters and numbers. 6.1. Specifying a port number The port number is the port used in the Kafka cluster, which might not be the same port used for access by a client. loadbalancer listeners use the specified port number, as do internal and cluster-ip listeners ingress and route listeners use port 443 for access nodeport listeners use the port number assigned by OpenShift For client connection, use the address and port for the bootstrap service of the listener. You can retrieve this from the status of the Kafka resource. Example command to retrieve the address and port for client connection oc get kafka <kafka_cluster_name> -o=jsonpath='{.status.listeners[?(@.name=="<listener_name>")].bootstrapServers}{"\n"}' Important When configuring listeners for client access to brokers, you can use port 9092 or higher (9093, 9094, and so on), but with a few exceptions. The listeners cannot be configured to use the ports reserved for interbroker communication (9090 and 9091), Prometheus metrics (9404), and JMX (Java Management Extensions) monitoring (9999). 6.2. Specifying listener types Set the type to internal for internal listeners. For external listeners, choose from route , loadbalancer , nodeport , or ingress . You can also configure a cluster-ip listener, which is an internal type used for building custom access mechanisms. internal You can configure internal listeners with or without encryption using the tls property. Example internal listener configuration #... spec: kafka: #... listeners: #... - name: plain port: 9092 type: internal tls: false - name: tls port: 9093 type: internal tls: true authentication: type: tls #... route Configures an external listener to expose Kafka using OpenShift Routes and the HAProxy router. A dedicated Route is created for every Kafka broker pod. An additional Route is created to serve as a Kafka bootstrap address. Kafka clients can use these Routes to connect to Kafka on port 443. The client connects on port 443, the default router port, but traffic is then routed to the port you configure, which is 9094 in this example. Example route listener configuration #... spec: kafka: #... listeners: #... - name: external1 port: 9094 type: route tls: true #... ingress Configures an external listener to expose Kafka using Kubernetes Ingress and the Ingress NGINX Controller for Kubernetes . 
A dedicated Ingress resource is created for every Kafka broker pod. An additional Ingress resource is created to serve as a Kafka bootstrap address. Kafka clients can use these Ingress resources to connect to Kafka on port 443. The client connects on port 443, the default controller port, but traffic is then routed to the port you configure, which is 9095 in the following example. You must specify the hostname used by the bootstrap service using GenericKafkaListenerConfigurationBootstrap property. And you must also specify the hostnames used by the per-broker services using GenericKafkaListenerConfigurationBroker or hostTemplate properties. With the hostTemplate property, you don't need to specify the configuration for every broker. Example ingress listener configuration #... spec: kafka: #... listeners: #... - name: external2 port: 9095 type: ingress tls: true authentication: type: tls configuration: hostTemplate: broker-{nodeId}.myingress.com bootstrap: host: bootstrap.myingress.com #... Note External listeners using Ingress are currently only tested with the Ingress NGINX Controller for Kubernetes . loadbalancer Configures an external listener to expose Kafka using a Loadbalancer type Service . A new loadbalancer service is created for every Kafka broker pod. An additional loadbalancer is created to serve as a Kafka bootstrap address. Loadbalancers listen to the specified port number, which is port 9094 in the following example. You can use the loadBalancerSourceRanges property to configure source ranges to restrict access to the specified IP addresses. Example loadbalancer listener configuration #... spec: kafka: #... listeners: - name: external3 port: 9094 type: loadbalancer tls: true configuration: loadBalancerSourceRanges: - 10.0.0.0/8 - 88.208.76.87/32 #... nodeport Configures an external listener to expose Kafka using a NodePort type Service . Kafka clients connect directly to the nodes of OpenShift. An additional NodePort type of service is created to serve as a Kafka bootstrap address. When configuring the advertised addresses for the Kafka broker pods, Streams for Apache Kafka uses the address of the node on which the given pod is running. You can use preferredNodePortAddressType property to configure the first address type checked as the node address . Example nodeport listener configuration #... spec: kafka: #... listeners: #... - name: external4 port: 9095 type: nodeport tls: false configuration: preferredNodePortAddressType: InternalDNS #... Note TLS hostname verification is not currently supported when exposing Kafka clusters using node ports. cluster-ip Configures an internal listener to expose Kafka using a per-broker ClusterIP type Service . The listener does not use a headless service and its DNS names to route traffic to Kafka brokers. You can use this type of listener to expose a Kafka cluster when using the headless service is unsuitable. You might use it with a custom access mechanism, such as one that uses a specific Ingress controller or the OpenShift Gateway API. A new ClusterIP service is created for each Kafka broker pod. The service is assigned a ClusterIP address to serve as a Kafka bootstrap address with a per-broker port number. For example, you can configure the listener to expose a Kafka cluster over an Nginx Ingress Controller with TCP port configuration. Example cluster-ip listener configuration #... spec: kafka: #... listeners: - name: clusterip type: cluster-ip tls: false port: 9096 #... 6.3. 
Configuring network policies to restrict listener access Use networkPolicyPeers to configure network policies that restrict access to a listener at the network level. The following example shows a networkPolicyPeers configuration for a plain and a tls listener. In the following example: Only application pods matching the labels app: kafka-sasl-consumer and app: kafka-sasl-producer can connect to the plain listener. The application pods must be running in the same namespace as the Kafka broker. Only application pods running in namespaces matching the labels project: myproject and project: myproject2 can connect to the tls listener. The syntax of the networkPolicyPeers property is the same as the from property in NetworkPolicy resources. Example network policy configuration listeners: #... - name: plain port: 9092 type: internal tls: true authentication: type: scram-sha-512 networkPolicyPeers: - podSelector: matchLabels: app: kafka-sasl-consumer - podSelector: matchLabels: app: kafka-sasl-producer - name: tls port: 9093 type: internal tls: true authentication: type: tls networkPolicyPeers: - namespaceSelector: matchLabels: project: myproject - namespaceSelector: matchLabels: project: myproject2 # ... 6.4. GenericKafkaListener schema properties Property Property type Description name string Name of the listener. The name will be used to identify the listener and the related OpenShift objects. The name has to be unique within given a Kafka cluster. The name can consist of lowercase characters and numbers and be up to 11 characters long. port integer Port number used by the listener inside Kafka. The port number has to be unique within a given Kafka cluster. Allowed port numbers are 9092 and higher with the exception of ports 9404 and 9999, which are already used for Prometheus and JMX. Depending on the listener type, the port number might not be the same as the port number that connects Kafka clients. type string (one of [ingress, internal, route, loadbalancer, cluster-ip, nodeport]) Type of the listener. The supported types are as follows: internal type exposes Kafka internally only within the OpenShift cluster. route type uses OpenShift Routes to expose Kafka. loadbalancer type uses LoadBalancer type services to expose Kafka. nodeport type uses NodePort type services to expose Kafka. ingress type uses OpenShift Nginx Ingress to expose Kafka with TLS passthrough. cluster-ip type uses a per-broker ClusterIP service. tls boolean Enables TLS encryption on the listener. This is a required property. For route and ingress type listeners, TLS encryption must be always enabled. authentication KafkaListenerAuthenticationTls , KafkaListenerAuthenticationScramSha512 , KafkaListenerAuthenticationOAuth , KafkaListenerAuthenticationCustom Authentication configuration for this listener. configuration GenericKafkaListenerConfiguration Additional listener configuration. networkPolicyPeers NetworkPolicyPeer array List of peers which should be able to connect to this listener. Peers in this list are combined using a logical OR operation. If this field is empty or missing, all connections will be allowed for this listener. If this field is present and contains at least one item, the listener only allows the traffic which matches at least one item in this list. | [
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # listeners: - name: plain port: 9092 type: internal tls: false - name: tls port: 9093 type: internal tls: true authentication: type: tls - name: external1 port: 9094 type: route tls: true - name: external2 port: 9095 type: ingress tls: true authentication: type: tls configuration: bootstrap: host: bootstrap.myingress.com brokers: - broker: 0 host: broker-0.myingress.com - broker: 1 host: broker-1.myingress.com - broker: 2 host: broker-2.myingress.com #",
"get kafka <kafka_cluster_name> -o=jsonpath='{.status.listeners[?(@.name==\"<listener_name>\")].bootstrapServers}{\"\\n\"}'",
"# spec: kafka: # listeners: # - name: plain port: 9092 type: internal tls: false - name: tls port: 9093 type: internal tls: true authentication: type: tls #",
"# spec: kafka: # listeners: # - name: external1 port: 9094 type: route tls: true #",
"# spec: kafka: # listeners: # - name: external2 port: 9095 type: ingress tls: true authentication: type: tls configuration: hostTemplate: broker-{nodeId}.myingress.com bootstrap: host: bootstrap.myingress.com #",
"# spec: kafka: # listeners: - name: external3 port: 9094 type: loadbalancer tls: true configuration: loadBalancerSourceRanges: - 10.0.0.0/8 - 88.208.76.87/32 #",
"# spec: kafka: # listeners: # - name: external4 port: 9095 type: nodeport tls: false configuration: preferredNodePortAddressType: InternalDNS #",
"# spec: kafka: # listeners: - name: clusterip type: cluster-ip tls: false port: 9096 #",
"listeners: # - name: plain port: 9092 type: internal tls: true authentication: type: scram-sha-512 networkPolicyPeers: - podSelector: matchLabels: app: kafka-sasl-consumer - podSelector: matchLabels: app: kafka-sasl-producer - name: tls port: 9093 type: internal tls: true authentication: type: tls networkPolicyPeers: - namespaceSelector: matchLabels: project: myproject - namespaceSelector: matchLabels: project: myproject2"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-GenericKafkaListener-reference |
Chapter 6. Installation configuration parameters for Azure Stack Hub | Chapter 6. Installation configuration parameters for Azure Stack Hub Before you deploy an OpenShift Container Platform cluster on Azure Stack Hub, you provide a customized install-config.yaml installation configuration file that describes the details for your environment. 6.1. Available installation configuration parameters for Azure Stack Hub The following tables specify the required, optional, and Azure Stack Hub-specific installation configuration parameters that you can set as part of the installation process. Note After installation, you cannot modify these parameters in the install-config.yaml file. 6.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 6.1. Required parameters Parameter Description Values The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object Get a pull secret from Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 6.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 6.2. Network parameters Parameter Description Values The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. The Red Hat OpenShift Networking network plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . 
If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power(R) Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power(R) Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 6.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 6.3. Optional parameters Parameter Description Values A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. 
By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use compute . The name of the machine pool. worker Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . The configuration for the machines that comprise the control plane. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use controlPlane . The name of the machine pool. master Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of control plane machines to provision. Supported values are 3 , or 1 when deploying single-node OpenShift. The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Mint , Passthrough , Manual or an empty string ( "" ). [1] Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. 
false or true Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String Specify one or more repositories that may also contain the same images. Array of strings How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. 6.1.4. Additional Azure Stack Hub configuration parameters Additional Azure configuration parameters are described in the following table: Table 6.4. Additional Azure Stack Hub parameters Parameter Description Values The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 128 . Defines the type of disk. standard_LRS or premium_LRS . The default is premium_LRS . Defines the azure instance type for compute machines. String The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 1024 . Defines the type of disk. premium_LRS . Defines the azure instance type for control plane machines. String The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 128 . Defines the type of disk. standard_LRS or premium_LRS . The default is premium_LRS . The Azure instance type for control plane and compute machines. The Azure instance type. The URL of the Azure Resource Manager endpoint that your Azure Stack Hub operator provides. String The name of the resource group that contains the DNS zone for your base domain. String, for example production_cluster . The name of your Azure Stack Hub local region. String The name of an already existing resource group to install your cluster to. This resource group must be empty and only used for this specific cluster; the cluster components assume ownership of all resources in the resource group. If you limit the service principal scope of the installation program to this resource group, you must ensure all other resources used by the installation program in your environment have the necessary permissions, such as the public DNS zone and virtual network. Destroying the cluster by using the installation program deletes this resource group. String, for example existing_resource_group . The outbound routing strategy used to connect your cluster to the internet. If you are using user-defined routing, you must have pre-existing networking available where the outbound routing has already been configured prior to installing a cluster. The installation program is not responsible for configuring user-defined routing. LoadBalancer or UserDefinedRouting . The default is LoadBalancer . 
The name of the Azure cloud environment that is used to configure the Azure SDK with the appropriate Azure API endpoints. AzureStackCloud The URL of a storage blob in the Azure Stack environment that contains an RHCOS VHD. String, for example, https://vhdsa.blob.example.example.com/vhd/rhcos-410.84.202112040202-0-azurestack.x86_64.vhd | [
"apiVersion:",
"baseDomain:",
"metadata:",
"metadata: name:",
"platform:",
"pullSecret:",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking:",
"networking: networkType:",
"networking: clusterNetwork:",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: clusterNetwork: cidr:",
"networking: clusterNetwork: hostPrefix:",
"networking: serviceNetwork:",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork:",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"networking: machineNetwork: cidr:",
"additionalTrustBundle:",
"capabilities:",
"capabilities: baselineCapabilitySet:",
"capabilities: additionalEnabledCapabilities:",
"cpuPartitioningMode:",
"compute:",
"compute: architecture:",
"compute: hyperthreading:",
"compute: name:",
"compute: platform:",
"compute: replicas:",
"featureSet:",
"controlPlane:",
"controlPlane: architecture:",
"controlPlane: hyperthreading:",
"controlPlane: name:",
"controlPlane: platform:",
"controlPlane: replicas:",
"credentialsMode:",
"fips:",
"imageContentSources:",
"imageContentSources: source:",
"imageContentSources: mirrors:",
"publish:",
"sshKey:",
"compute: platform: azure: osDisk: diskSizeGB:",
"compute: platform: azure: osDisk: diskType:",
"compute: platform: azure: type:",
"controlPlane: platform: azure: osDisk: diskSizeGB:",
"controlPlane: platform: azure: osDisk: diskType:",
"controlPlane: platform: azure: type:",
"platform: azure: defaultMachinePlatform: osDisk: diskSizeGB:",
"platform: azure: defaultMachinePlatform: osDisk: diskType:",
"platform: azure: defaultMachinePlatform: type:",
"platform: azure: armEndpoint:",
"platform: azure: baseDomainResourceGroupName:",
"platform: azure: region:",
"platform: azure: resourceGroupName:",
"platform: azure: outboundType:",
"platform: azure: cloudName:",
"clusterOSImage:"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_azure_stack_hub/installation-config-parameters-ash |
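For orientation, the parameters described above typically come together in a single install-config.yaml file. The following is a minimal sketch for Azure Stack Hub; the cluster name, base domain, ARM endpoint, resource group name, region, and VHD URL are placeholder values that must be replaced with values from your own environment:

apiVersion: v1
baseDomain: example.com
metadata:
  name: ash-cluster
compute:
- hyperthreading: Enabled
  name: worker
  platform: {}
  replicas: 3
controlPlane:
  hyperthreading: Enabled
  name: master
  platform: {}
  replicas: 3
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  serviceNetwork:
  - 172.30.0.0/16
platform:
  azure:
    armEndpoint: https://management.local.azurestack.external
    baseDomainResourceGroupName: resource_group
    region: local
    cloudName: AzureStackCloud
    outboundType: LoadBalancer
    clusterOSImage: https://vhdsa.blob.example.example.com/vhd/rhcos-410.84.202112040202-0-azurestack.x86_64.vhd
fips: false
publish: External
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...

This sketch only combines parameters documented in the preceding tables; any parameter not shown keeps its default value.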
6.2.2. Where Users Access Shared Data | 6.2.2. Where Users Access Shared Data When sharing data among users, it is common practice to have a central server (or group of servers) that makes certain directories available to other machines on the network. This way, data is stored in one place and synchronizing data between multiple machines is not necessary. Before taking this approach, you must first determine which systems are to access the centrally-stored data. As you do this, take note of the operating systems used by those systems. This information has a bearing on your ability to implement such an approach, because your storage server must be capable of serving its data to each of the operating systems in use at your organization. Unfortunately, once data is shared between multiple computers on a network, the potential for conflicts in file ownership can arise. 6.2.2.1. Global Ownership Issues Storing data centrally and accessing it from multiple computers over a network has clear benefits. However, assume for a moment that each of those computers has a locally-maintained list of user accounts. What if the list of users on each of these systems is not consistent with the list of users on the central server? Even worse, what if the lists of users on these systems are not even consistent with each other? Much of this depends on how users and access permissions are implemented on each system, but in some cases user A on one system may actually be known as user B on another system. This becomes a real problem when data is shared between these systems, because data that user A is allowed to access from one system can also be read by user B from another system. For this reason, many organizations use some sort of central user database. This ensures that there are no overlaps between user lists on different systems. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s1-acctsgrps-res-where
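To make the ownership issue concrete, consider a hypothetical example; the usernames and UID values here are illustrative only. On one system, /etc/passwd contains:

juan:x:500:500::/home/juan:/bin/bash

On a second system, the same numeric UID belongs to a different person:

bob:x:500:500::/home/bob:/bin/bash

Because file permissions on shared storage are evaluated by numeric UID rather than by username, a file created by juan on the first system appears to be owned by bob when listed from the second system, and bob can read or modify it. A central user database avoids this by guaranteeing that each UID maps to the same person on every system.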
11.2. Interface Configuration Files | 11.2. Interface Configuration Files Interface configuration files control the software interfaces for individual network devices. As the system boots, it uses these files to determine what interfaces to bring up and how to configure them. These files are usually named ifcfg- name , where name refers to the name of the device that the configuration file controls. 11.2.1. Ethernet Interfaces One of the most common interface files is /etc/sysconfig/network-scripts/ifcfg-eth0 , which controls the first Ethernet network interface card or NIC in the system. In a system with multiple NICs, there are multiple ifcfg-eth X files (where X is a unique number corresponding to a specific interface). Because each device has its own configuration file, an administrator can control how each interface functions individually. The following is a sample ifcfg-eth0 file for a system using a fixed IP address: The values required in an interface configuration file can change based on other values. For example, the ifcfg-eth0 file for an interface using DHCP looks different because IP information is provided by the DHCP server: NetworkManager is graphical configuration tool which provides an easy way to make changes to the various network interface configuration files (see Chapter 10, NetworkManager for detailed instructions on using this tool). However, it is also possible to manually edit the configuration files for a given network interface. Below is a listing of the configurable parameters in an Ethernet interface configuration file: BONDING_OPTS = parameters sets the configuration parameters for the bonding device, and is used in /etc/sysconfig/network-scripts/ifcfg-bond N (see Section 11.2.4, "Channel Bonding Interfaces" ). These parameters are identical to those used for bonding devices in /sys/class/net/ bonding_device /bonding , and the module parameters for the bonding driver as described in bonding Module Directives . This configuration method is used so that multiple bonding devices can have different configurations. In Red Hat Enterprise Linux 6, place all interface-specific bonding options after the BONDING_OPTS directive in ifcfg- name files. See Where to specify bonding module parameters for more information. BOOTPROTO = protocol where protocol is one of the following: none - No boot-time protocol should be used. bootp - The BOOTP protocol should be used. dhcp - The DHCP protocol should be used. BROADCAST = address where address is the broadcast address. This directive is deprecated, as the value is calculated automatically with ipcalc . DEVICE = name where name is the name of the physical device (except for dynamically-allocated PPP devices where it is the logical name ). DHCP_HOSTNAME = name where name is a short host name to be sent to the DHCP server. Use this option only if the DHCP server requires the client to specify a host name before receiving an IP address. DHCPV6C = answer where answer is one of the following: yes - Use DHCP to obtain an IPv6 address for this interface. no - Do not use DHCP to obtain an IPv6 address for this interface. This is the default value. An IPv6 link-local address will still be assigned by default. The link-local address is based on the MAC address of the interface as per RFC 4862 . DHCPV6C_OPTIONS = answer where answer is one of the following: -P - Enable IPv6 prefix delegation. -S - Use DHCP to obtain stateless configuration only, not addresses, for this interface. 
-N - Restore normal operation after using the -T or -P options. -T - Use DHCP to obtain a temporary IPv6 address for this interface. -D - Override the default when selecting the type of DHCP Unique Identifier ( DUID ) to use. By default, the DHCP client (dhclient) creates a DHCP Unique Identifier ( DUID ) based on the link-layer address (DUID-LL) if it is running in stateless mode (with the -S option, to not request an address), or it creates an identifier based on the link-layer address plus a timestamp (DUID-LLT) if it is running in stateful mode (without -S , requesting an address). The -D option overrides this default, with a value of either LL or LLT . DNS {1,2} = address where address is a name server address to be placed in /etc/resolv.conf provided that the PEERDNS directive is not set to no . ETHTOOL_OPTS = options where options are any device-specific options supported by ethtool . For example, if you wanted to force 100Mb, full duplex: Instead of a custom initscript, use ETHTOOL_OPTS to set the interface speed and duplex settings. Custom initscripts run outside of the network init script lead to unpredictable results during a post-boot network service restart. Important Changing speed or duplex settings almost always requires disabling auto-negotiation with the autoneg off option. This option needs to be stated first, as the option entries are order-dependent. See Section 11.8, "Ethtool" for more ethtool options. HOTPLUG = answer where answer is one of the following: yes - This device should be activated when it is hot-plugged (this is the default option). no - This device should not be activated when it is hot-plugged. The HOTPLUG=no option can be used to prevent a channel bonding interface from being activated when a bonding kernel module is loaded. See Section 11.2.4, "Channel Bonding Interfaces" for more information about channel bonding interfaces. HWADDR = MAC-address where MAC-address is the hardware address of the Ethernet device in the form AA:BB:CC:DD:EE:FF . This directive must be used in machines containing more than one NIC to ensure that the interfaces are assigned the correct device names regardless of the configured load order for each NIC's module. This directive should not be used in conjunction with MACADDR . Note Persistent device names are now handled by /etc/udev/rules.d/70-persistent-net.rules . HWADDR must not be used with System z network devices. See Section 25.3.3, "Mapping subchannels and network device names", in the Red Hat Enterprise Linux 6 Installation Guide . IPADDR n = address where address is the IPv4 address and the n is expected to be consecutive positive integers starting from 0 (for example, IPADDR0). It is used for configurations with multiple IP addresses on an interface. It can be omitted if there is only one address being configured. IPV6ADDR = address where address is the first static, or primary, IPv6 address on an interface. The format is Address/Prefix-length. If no prefix length is specified, /64 is assumed. Note that this setting depends on IPV6INIT being enabled. IPV6ADDR_SECONDARIES = address where address is one or more, space separated, additional IPv6 addresses. The format is Address/Prefix-length. If no prefix length is specified, /64 is assumed. Note that this setting depends on IPV6INIT being enabled. IPV6INIT = answer where answer is one of the following: yes - Initialize this interface for IPv6 addressing. no - Do not initialize this interface for IPv6 addressing. This is the default value. 
This setting is required for IPv6 static and DHCP assignment of IPv6 addresses. It does not affect IPv6 Stateless Address Autoconfiguration ( SLAAC ) as per RFC 4862 . See Section D.1.14, "/etc/sysconfig/network" for information on disabling IPv6 . IPV6_AUTOCONF = answer where answer is one of the following: yes - Enable IPv6 autoconf configuration for this interface. no - Disable IPv6 autoconf configuration for this interface. If enabled, an IPv6 address will be requested using Neighbor Discovery ( ND ) from a router running the radvd daemon. Note that the default value of IPV6_AUTOCONF depends on IPV6FORWARDING as follows: If IPV6FORWARDING = yes , then IPV6_AUTOCONF will default to no . If IPV6FORWARDING = no , then IPV6_AUTOCONF will default to yes and IPV6_ROUTER has no effect. IPV6_MTU = value where value is an optional dedicated MTU for this interface. IPV6_PRIVACY = rfc3041 where rfc3041 optionally sets this interface to support RFC 3041 Privacy Extensions for Stateless Address Autoconfiguration in IPv6 . Note that this setting depends on IPV6INIT option being enabled. The default is for RFC 3041 support to be disabled. Stateless Autoconfiguration will derive addresses based on the MAC address, when available, using the modified EUI-64 method. The address is appended to a prefix but as the address is normally derived from the MAC address it is globally unique even when the prefix changes. In the case of a link-local address the prefix is fe80::/64 as per RFC 2462 IPv6 Stateless Address Autoconfiguration . LINKDELAY = time where time is the number of seconds to wait for link negotiation before configuring the device. The default is 5 secs. Delays in link negotiation, caused by STP for example, can be overcome by increasing this value. MACADDR = MAC-address where MAC-address is the hardware address of the Ethernet device in the form AA:BB:CC:DD:EE:FF . This directive is used to assign a MAC address to an interface, overriding the one assigned to the physical NIC. This directive should not be used in conjunction with the HWADDR directive. MASTER = bond-interface where bond-interface is the channel bonding interface to which the Ethernet interface is linked. This directive is used in conjunction with the SLAVE directive. See Section 11.2.4, "Channel Bonding Interfaces" for more information about channel bonding interfaces. NETMASK n = mask where mask is the netmask value and the n is expected to be consecutive positive integers starting from 0 (for example, NETMASK0). It is used for configurations with multiple IP addresses on an interface. It can be omitted if there is only one address being configured. NETWORK = address where address is the network address. This directive is deprecated, as the value is calculated automatically with ipcalc . NM_CONTROLLED = answer where answer is one of the following: yes - NetworkManager is permitted to configure this device. This is the default behavior and can be omitted. no - NetworkManager is not permitted to configure this device. Note The NM_CONTROLLED directive is now, as of Red Hat Enterprise Linux 6.3, dependent on the NM_BOND_VLAN_ENABLED directive in /etc/sysconfig/network . If and only if that directive is present and is one of yes , y , or true , will NetworkManager detect and manage bonding and VLAN interfaces. ONBOOT = answer where answer is one of the following: yes - This device should be activated at boot-time. no - This device should not be activated at boot-time. 
PEERDNS = answer where answer is one of the following: yes - Modify /etc/resolv.conf if the DNS directive is set, if using DHCP , or if using Microsoft's RFC 1877 IPCP extensions with PPP . In all cases yes is the default. no - Do not modify /etc/resolv.conf . SLAVE = answer where answer is one of the following: yes - This device is controlled by the channel bonding interface specified in the MASTER directive. no - This device is not controlled by the channel bonding interface specified in the MASTER directive. This directive is used in conjunction with the MASTER directive. See Section 11.2.4, "Channel Bonding Interfaces" for more about channel bonding interfaces. SRCADDR = address where address is the specified source IP address for outgoing packets. USERCTL = answer where answer is one of the following: yes - Non- root users are allowed to control this device. no - Non- root users are not allowed to control this device. | [
"DEVICE=eth0 BOOTPROTO=none ONBOOT=yes NETMASK=255.255.255.0 IPADDR=10.0.1.27 USERCTL=no",
"DEVICE=eth0 BOOTPROTO=dhcp ONBOOT=yes",
"ETHTOOL_OPTS=\"autoneg off speed 100 duplex full\""
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s1-networkscripts-interfaces |
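As a further illustration, several of the directives described above are often combined in a single interface file. The following sketch of an /etc/sysconfig/network-scripts/ifcfg-eth0 file uses placeholder IP addresses and a placeholder MAC address; adjust every value to match your environment:

DEVICE=eth0
HWADDR=AA:BB:CC:DD:EE:FF
BOOTPROTO=none
ONBOOT=yes
NM_CONTROLLED=no
IPADDR=10.0.1.27
NETMASK=255.255.255.0
DNS1=10.0.1.2
DNS2=10.0.1.3
IPV6INIT=yes
IPV6ADDR=2001:db8::27/64
USERCTL=no

Because BOOTPROTO=none configures a static address, the DNS1 and DNS2 values are written to /etc/resolv.conf only if the PEERDNS directive is not set to no, as described above.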
4.17. Hewlett-Packard iLO MP | 4.17. Hewlett-Packard iLO MP Table 4.18, "HP iLO (Integrated Lights Out) MP" lists the fence device parameters used by fence_ilo_mp , the fence agent for HP iLO MP devices. Table 4.18. HP iLO (Integrated Lights Out) MP luci Field cluster.conf Attribute Description Name name A name for the server with HP iLO support. IP Address or Hostname ipaddr The IP address or host name assigned to the device. IP Port (optional) ipport TCP port to use for connection with the device. Login login The login name used to access the device. Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. Use SSH secure Indicates that the system will use SSH to access the device. When using SSH, you must specify either a password, a password script, or an identity file. SSH Options ssh_options SSH options to use. The default value is -1 -c blowfish . Path to SSH Identity File identity_file The identity file for SSH. Force Command Prompt cmd_prompt The command prompt to use. The default value is 'MP>', 'hpiLO->'. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Delay (seconds) delay The number of seconds to wait before fencing is started. The default value is 0. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Figure 4.13, "HP iLO MP" shows the configuration screen for adding an HP iLO MP fence device. Figure 4.13. HP iLO MP The following command creates a fence device instance for an HP iLO MP device: The following is the cluster.conf entry for the fence_ilo_mp device: | [
"ccs -f cluster.conf --addfencedev hpilomptest1 agent=fence_hpilo cmd_prompt=hpiLO-> ipaddr=192.168.0.1 login=root passwd=password123 power_wait=60",
"<fencedevices> <fencedevice agent=\"fence_ilo_mp\" cmd_prompt=\"hpiLO->\" ipaddr=\"192.168.0.1\" login=\"root\" name=\"hpilomptest1\" passwd=\"password123\" power_wait=\"60\"/> </fencedevices>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/fence_configuration_guide/s1-software-fence-hpilo-mp-ca |
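After the fence device is defined in the fencedevices section, it is referenced from the fence method of each cluster node that it protects. The following sketch assumes the hpilomptest1 device definition shown above; the node name and method name are hypothetical:

<clusternode name="node-01.example.com" nodeid="1">
  <fence>
    <method name="iLO-MP">
      <device name="hpilomptest1"/>
    </method>
  </fence>
</clusternode>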
Chapter 4. Installing director | Chapter 4. Installing director 4.1. Configuring director The director installation process requires certain settings in the undercloud.conf configuration file, which director reads from the home directory of the stack user. Complete the following steps to copy default template as a foundation for your configuration. Procedure Copy the default template to the home directory of the stack user's: Edit the undercloud.conf file. This file contains settings to configure your undercloud. If you omit or comment out a parameter, the undercloud installation uses the default value. 4.2. Director configuration parameters The following list contains information about parameters for configuring the undercloud.conf file. Keep all parameters within their relevant sections to avoid errors. Defaults The following parameters are defined in the [DEFAULT] section of the undercloud.conf file: additional_architectures A list of additional (kernel) architectures that an overcloud supports. Currently the overcloud supports ppc64le architecture. Note When you enable support for ppc64le, you must also set ipxe_enabled to False certificate_generation_ca The certmonger nickname of the CA that signs the requested certificate. Use this option only if you have set the generate_service_certificate parameter. If you select the local CA, certmonger extracts the local CA certificate to /etc/pki/ca-trust/source/anchors/cm-local-ca.pem and adds the certificate to the trust chain. clean_nodes Defines whether to wipe the hard drive between deployments and after introspection. cleanup Cleanup temporary files. Set this to False to leave the temporary files used during deployment in place after you run the deployment command. This is useful for debugging the generated files or if errors occur. container_cli The CLI tool for container management. Leave this parameter set to podman . Red Hat Enterprise Linux 8.1 only supports podman . container_healthcheck_disabled Disables containerized service health checks. Red Hat recommends that you enable health checks and leave this option set to false . container_images_file Heat environment file with container image information. This file can contain the following entries: Parameters for all required container images The ContainerImagePrepare parameter to drive the required image preparation. Usually the file that contains this parameter is named containers-prepare-parameter.yaml . container_insecure_registries A list of insecure registries for podman to use. Use this parameter if you want to pull images from another source, such as a private container registry. In most cases, podman has the certificates to pull container images from either the Red Hat Container Catalog or from your Satellite server if the undercloud is registered to Satellite. container_registry_mirror An optional registry-mirror configured that podman uses. custom_env_files Additional environment files that you want to add to the undercloud installation. deployment_user The user who installs the undercloud. Leave this parameter unset to use the current default user stack . discovery_default_driver Sets the default driver for automatically enrolled nodes. Requires the enable_node_discovery parameter to be enabled and you must include the driver in the enabled_hardware_types list. enable_ironic; enable_ironic_inspector; enable_mistral; enable_nova; enable_tempest; enable_validations; enable_zaqar Defines the core services that you want to enable for director. 
Leave these parameters set to true . enable_node_discovery Automatically enroll any unknown node that PXE-boots the introspection ramdisk. New nodes use the fake_pxe driver as a default but you can set discovery_default_driver to override. You can also use introspection rules to specify driver information for newly enrolled nodes. enable_novajoin Defines whether to install the novajoin metadata service in the undercloud. enable_routed_networks Defines whether to enable support for routed control plane networks. enable_swift_encryption Defines whether to enable Swift encryption at-rest. enable_telemetry Defines whether to install OpenStack Telemetry services (gnocchi, aodh, panko) in the undercloud. Set the enable_telemetry parameter to true if you want to install and configure telemetry services automatically. The default value is false , which disables telemetry on the undercloud. This parameter is required if you use other products that consume metrics data, such as Red Hat CloudForms. enabled_hardware_types A list of hardware types that you want to enable for the undercloud. generate_service_certificate Defines whether to generate an SSL/TLS certificate during the undercloud installation, which is used for the undercloud_service_certificate parameter. The undercloud installation saves the resulting certificate /etc/pki/tls/certs/undercloud-[undercloud_public_vip].pem . The CA defined in the certificate_generation_ca parameter signs this certificate. heat_container_image URL for the heat container image to use. Leave unset. heat_native Run host-based undercloud configuration using heat-all . Leave as true . hieradata_override Path to hieradata override file that configures Puppet hieradata on the director, providing custom configuration to services beyond the undercloud.conf parameters. If set, the undercloud installation copies this file to the /etc/puppet/hieradata directory and sets it as the first file in the hierarchy. For more information about using this feature, see Configuring hieradata on the undercloud . inspection_extras Defines whether to enable extra hardware collection during the inspection process. This parameter requires the python-hardware or python-hardware-detect packages on the introspection image. inspection_interface The bridge that director uses for node introspection. This is a custom bridge that the director configuration creates. The LOCAL_INTERFACE attaches to this bridge. Leave this as the default br-ctlplane . inspection_runbench Runs a set of benchmarks during node introspection. Set this parameter to true to enable the benchmarks. This option is necessary if you intend to perform benchmark analysis when inspecting the hardware of registered nodes. ipa_otp Defines the one-time password to register the undercloud node to an IPA server. This is required when enable_novajoin is enabled. ipv6_address_mode IPv6 address configuration mode for the undercloud provisioning network. The following list contains the possible values for this parameter: dhcpv6-stateless - Address configuration using router advertisement (RA) and optional information using DHCPv6. dhcpv6-stateful - Address configuration and optional information using DHCPv6. ipxe_enabled Defines whether to use iPXE or standard PXE. The default is true , which enables iPXE. Set this parameter to false to use standard PXE. local_interface The chosen interface for the director Provisioning NIC. This is also the device that director uses for DHCP and PXE boot services. Change this value to your chosen device. 
To see which device is connected, use the ip addr command. For example, this is the result of an ip addr command: In this example, the External NIC uses em0 and the Provisioning NIC uses em1 , which is currently not configured. In this case, set the local_interface to em1 . The configuration script attaches this interface to a custom bridge defined with the inspection_interface parameter. local_ip The IP address defined for the director Provisioning NIC. This is also the IP address that director uses for DHCP and PXE boot services. Leave this value as the default 192.168.24.1/24 unless you use a different subnet for the Provisioning network, for example, if this IP address conflicts with an existing IP address or subnet in your environment. local_mtu The maximum transmission unit (MTU) that you want to use for the local_interface . Do not exceed 1500 for the undercloud. local_subnet The local subnet that you want to use for PXE boot and DHCP interfaces. The local_ip address should reside in this subnet. The default is ctlplane-subnet . net_config_override Path to network configuration override template. If you set this parameter, the undercloud uses a JSON format template to configure the networking with os-net-config and ignores the network parameters set in undercloud.conf . Use this parameter when you want to configure bonding or add an option to the interface. See /usr/share/python-tripleoclient/undercloud.conf.sample for an example. networks_file Networks file to override for heat . output_dir Directory to output state, processed heat templates, and Ansible deployment files. overcloud_domain_name The DNS domain name that you want to use when you deploy the overcloud. Note When you configure the overcloud, you must set the CloudDomain parameter to a matching value. Set this parameter in an environment file when you configure your overcloud. roles_file The roles file that you want to use to override the default roles file for undercloud installation. It is highly recommended to leave this parameter unset so that the director installation uses the default roles file. scheduler_max_attempts The maximum number of times that the scheduler attempts to deploy an instance. This value must be greater or equal to the number of bare metal nodes that you expect to deploy at once to avoid potential race conditions when scheduling. service_principal The Kerberos principal for the service using the certificate. Use this parameter only if your CA requires a Kerberos principal, such as in FreeIPA. subnets List of routed network subnets for provisioning and introspection. The default value includes only the ctlplane-subnet subnet. For more information, see Subnets . templates Heat templates file to override. undercloud_admin_host The IP address or hostname defined for director Admin API endpoints over SSL/TLS. The director configuration attaches the IP address to the director software bridge as a routed IP address, which uses the /32 netmask. undercloud_debug Sets the log level of undercloud services to DEBUG . Set this value to true to enable DEBUG log level. undercloud_enable_selinux Enable or disable SELinux during the deployment. It is highly recommended to leave this value set to true unless you are debugging an issue. undercloud_hostname Defines the fully qualified host name for the undercloud. If set, the undercloud installation configures all system host name settings. If left unset, the undercloud uses the current host name, but you must configure all system host name settings appropriately. 
undercloud_log_file The path to a log file to store the undercloud install and upgrade logs. By default, the log file is install-undercloud.log in the home directory. For example, /home/stack/install-undercloud.log . undercloud_nameservers A list of DNS nameservers to use for the undercloud hostname resolution. undercloud_ntp_servers A list of network time protocol servers to help synchronize the undercloud date and time. undercloud_public_host The IP address or hostname defined for director Public API endpoints over SSL/TLS. The director configuration attaches the IP address to the director software bridge as a routed IP address, which uses the /32 netmask. undercloud_service_certificate The location and filename of the certificate for OpenStack SSL/TLS communication. Ideally, you obtain this certificate from a trusted certificate authority. Otherwise, generate your own self-signed certificate. undercloud_timezone Host timezone for the undercloud. If you do not specify a timezone, director uses the existing timezone configuration. undercloud_update_packages Defines whether to update packages during the undercloud installation. Subnets Each provisioning subnet is a named section in the undercloud.conf file. For example, to create a subnet called ctlplane-subnet , use the following sample in your undercloud.conf file: You can specify as many provisioning networks as necessary to suit your environment. cidr The network that director uses to manage overcloud instances. This is the Provisioning network, which the undercloud neutron service manages. Leave this as the default 192.168.24.0/24 unless you use a different subnet for the Provisioning network. masquerade Defines whether to masquerade the network defined in the cidr for external access. This provides the Provisioning network with a degree of network address translation (NAT) so that the Provisioning network has external access through director. Note The director configuration also enables IP forwarding automatically using the relevant sysctl kernel parameter. dhcp_start; dhcp_end The start and end of the DHCP allocation range for overcloud nodes. Ensure that this range contains enough IP addresses to allocate your nodes. dhcp_exclude IP addresses to exclude in the DHCP allocation range. dns_nameservers DNS nameservers specific to the subnet. If no nameservers are defined for the subnet, the subnet uses nameservers defined in the undercloud_nameservers parameter. gateway The gateway for the overcloud instances. This is the undercloud host, which forwards traffic to the External network. Leave this as the default 192.168.24.1 unless you use a different IP address for director or want to use an external gateway directly. host_routes Host routes for the Neutron-managed subnet for the overcloud instances on this network. This also configures the host routes for the local_subnet on the undercloud. inspection_iprange Temporary IP range for nodes on this network to use during the inspection process. This range must not overlap with the range defined by dhcp_start and dhcp_end but must be in the same IP subnet. Modify the values of these parameters to suit your configuration. When complete, save the file. 4.3. Configuring the undercloud with environment files You configure the main parameters for the undercloud through the undercloud.conf file. You can also perform additional undercloud configuration with an environment file that contains heat parameters. 
Procedure Create an environment file named /home/stack/templates/custom-undercloud-params.yaml . Edit this file and include your heat parameters. For example, to enable debugging for certain OpenStack Platform services include the following snippet in the custom-undercloud-params.yaml file: Save this file when you have finished. Edit your undercloud.conf file and scroll to the custom_env_files parameter. Edit the parameter to point to your custom-undercloud-params.yaml environment file: Note You can specify multiple environment files using a comma-separated list. The director installation includes this environment file during the undercloud installation or upgrade operation. 4.4. Common heat parameters for undercloud configuration The following table contains some common heat parameters that you might set in a custom environment file for your undercloud. Parameter Description AdminPassword Sets the undercloud admin user password. AdminEmail Sets the undercloud admin user email address. Debug Enables debug mode. Set these parameters in your custom environment file under the parameter_defaults section: 4.5. Configuring hieradata on the undercloud You can provide custom configuration for services beyond the available undercloud.conf parameters by configuring Puppet hieradata on the director. Procedure Create a hieradata override file, for example, /home/stack/hieradata.yaml . Add the customized hieradata to the file. For example, add the following snippet to modify the Compute (nova) service parameter force_raw_images from the default value of True to False : If there is no Puppet implementation for the parameter you want to set, then use the following method to configure the parameter: For example: Set the hieradata_override parameter in the undercloud.conf file to the path of the new /home/stack/hieradata.yaml file: 4.6. Configuring the undercloud for bare metal provisioning over IPv6 Important This feature is available in this release as a Technology Preview , and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details . If you have IPv6 nodes and infrastructure, you can configure the undercloud and the provisioning network to use IPv6 instead of IPv4 so that director can provision and deploy Red Hat OpenStack Platform onto IPv6 nodes. However, there are some considerations: Stateful DHCPv6 is available only with a limited set of UEFI firmware. For more information, see Bugzilla #1575026 . Dual stack IPv4/6 is not available. Tempest validations might not perform correctly. IPv4 to IPv6 migration is not available during upgrades. Modify the undercloud.conf file to enable IPv6 provisioning in Red Hat OpenStack Platform. Prerequisites An IPv6 address on the undercloud. For more information, see Configuring an IPv6 address on the undercloud in the IPv6 Networking for the Overcloud guide. Procedure Copy the sample undercloud.conf file, or modify your existing undercloud.conf file. Set the following parameter values in the undercloud.conf file: Set ipv6_address_mode to dhcpv6-stateless or dhcpv6-stateful if your NIC supports stateful DHCPv6 with Red Hat OpenStack Platform. For more information about stateful DHCPv6 availability, see Bugzilla #1575026 . Set enable_routed_networks to true if you do not want the undercloud to create a router on the provisioning network. 
In this case, the data center router must provide router advertisements. Otherwise, set this value to false . Set local_ip to the IPv6 address of the undercloud. Use IPv6 addressing for the undercloud interface parameters undercloud_public_host and undercloud_admin_host . In the [ctlplane-subnet] section, use IPv6 addressing in the following parameters: cidr dhcp_start dhcp_end gateway inspection_iprange In the [ctlplane-subnet] section, set an IPv6 nameserver for the subnet in the dns_nameservers parameter. 4.7. Installing director Complete the following steps to install director and perform some basic post-installation tasks. Procedure Run the following command to install director on the undercloud: This command launches the director configuration script. Director installs additional packages and configures its services according to the configuration in the undercloud.conf . This script takes several minutes to complete. The script generates two files: undercloud-passwords.conf - A list of all passwords for the director services. stackrc - A set of initialization variables to help you access the director command line tools. The script also starts all OpenStack Platform service containers automatically. You can check the enabled containers with the following command: To initialize the stack user to use the command line tools, run the following command: The prompt now indicates that OpenStack commands authenticate and execute against the undercloud; The director installation is complete. You can now use the director command line tools. 4.8. Obtaining images for overcloud nodes Director requires several disk images to provision overcloud nodes: An introspection kernel and ramdisk for bare metal system introspection over PXE boot. A deployment kernel and ramdisk for system provisioning and deployment. An overcloud kernel, ramdisk, and full image. which form a base overcloud system that is written to the hard disk of the node. The following procedure shows how to obtain and install these images. 4.8.1. Single CPU architecture overclouds These images and procedures are necessary for deployment of the overcloud with the default CPU architecture, x86-64. Procedure Source the stackrc file to enable the director command line tools: Install the rhosp-director-images and rhosp-director-images-ipa packages: Extract the images archives to the images directory in the home directory of the stack user ( /home/stack/images ): Import these images into director: This script uploads the following images into director: overcloud-full overcloud-full-initrd overcloud-full-vmlinuz The script also installs the introspection images on the director PXE server. Verify that the images uploaded successfully: This list does not show the introspection PXE images. Director copies these files to /var/lib/ironic/httpboot . 4.8.2. Multiple CPU architecture overclouds These are the images and procedures that are necessary to deploy the overcloud to enable support of additional CPU architectures. The following example procedure uses the ppc64le image. 
Procedure Source the stackrc file to enable the director command line tools: Install the rhosp-director-images-all package: Extract the archives to an architecture specific directory in the images directory in the home directory of the stack user ( /home/stack/images ): Import these images into director: These commands upload the following images into director: overcloud-full overcloud-full-initrd overcloud-full-vmlinuz ppc64le-bm-deploy-kernel ppc64le-bm-deploy-ramdisk ppc64le-overcloud-full The script also installs the introspection images on the director PXE server. Verify that the images uploaded successfully: This list does not show the introspection PXE images. Director copies these files to /tftpboot . 4.8.3. Minimal overcloud image You can use the overcloud-minimal image to provision a bare OS where you do not want to run any other Red Hat OpenStack Platform services or consume one of your subscription entitlements. Procedure Source the stackrc file to enable the director command line tools: Install the overcloud-minimal package: Extract the images archives to the images directory in the home directory of the stack user ( /home/stack/images ): Import the images into director: This script uploads the following images into director: overcloud-minimal overcloud-minimal-initrd overcloud-minimal-vmlinuz Verify that the images uploaded successfully: Note The default overcloud-full.qcow2 image is a flat partition image. However, you can also import and use whole disk images. For more information, see Chapter 22, Creating whole disk images . 4.9. Setting a nameserver for the control plane If you intend for the overcloud to resolve external hostnames, such as cdn.redhat.com , set a nameserver on the overcloud nodes. For a standard overcloud without network isolation, the nameserver is defined using the undercloud control plane subnet. Complete the following procedure to define nameservers for the environment. Procedure Source the stackrc file to enable the director command line tools: Set the nameservers for the ctlplane-subnet subnet: Use the --dns-nameserver option for each nameserver. View the subnet to verify the nameserver: Important If you aim to isolate service traffic onto separate networks, the overcloud nodes use the DnsServers parameter in your network environment files. 4.10. Updating the undercloud configuration If you need to change the undercloud configuration to suit new requirements, you can make changes to your undercloud configuration after installation, edit the relevant configuration files and re-run the openstack undercloud install command. Procedure Modify the undercloud configuration files. For example, edit the undercloud.conf file and add the idrac hardware type to the list of enabled hardware types: Run the openstack undercloud install command to refresh your undercloud with the new changes: Wait until the command runs to completion. Initialize the stack user to use the command line tools,: The prompt now indicates that OpenStack commands authenticate and execute against the undercloud: Verify that director has applied the new configuration. For this example, check the list of enabled hardware types: The undercloud re-configuration is complete. 4.11. Undercloud container registry Red Hat Enterprise Linux 8.1 no longer includes the docker-distribution package, which installed a Docker Registry v2. To maintain the compatibility and the same level of feature, the director installation creates an Apache web server with a vhost called image-serve to provide a registry. 
This registry also uses port 8787/TCP with SSL disabled. The Apache-based registry is not containerized, which means that you must run the following command to restart the registry: You can find the container registry logs in the following locations: /var/log/httpd/image_serve_access.log /var/log/httpd/image_serve_error.log. The image content is served from /var/lib/image-serve . This location uses a specific directory layout and apache configuration to implement the pull function of the registry REST API. The Apache-based registry does not support podman push nor buildah push commands, which means that you cannot push container images using traditional methods. To modify images during deployment, use the container preparation workflow, such as the ContainerImagePrepare parameter. To manage container images, use the container management commands: sudo openstack tripleo container image list Lists all images stored on the registry. sudo openstack tripleo container image show Show metadata for a specific image on the registry. sudo openstack tripleo container image push Push an image from a remote registry to the undercloud registry. sudo openstack tripleo container image delete Delete an image from the registry. Note You must run all container image management commands with sudo level permissions. 4.12. steps Install an undercloud minion to scale undercloud services. See Chapter 5, Installing undercloud minions . Perform basic overcloud configuration, including registering nodes, inspecting them, and then tagging them into various node roles. For more information, see Chapter 7, Configuring a basic overcloud with CLI tools . | [
"[stack@director ~]USD cp /usr/share/python-tripleoclient/undercloud.conf.sample ~/undercloud.conf",
"2: em0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 52:54:00:75:24:09 brd ff:ff:ff:ff:ff:ff inet 192.168.122.178/24 brd 192.168.122.255 scope global dynamic em0 valid_lft 3462sec preferred_lft 3462sec inet6 fe80::5054:ff:fe75:2409/64 scope link valid_lft forever preferred_lft forever 3: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noop state DOWN link/ether 42:0b:c2:a5:c1:26 brd ff:ff:ff:ff:ff:ff",
"[ctlplane-subnet] cidr = 192.168.24.0/24 dhcp_start = 192.168.24.5 dhcp_end = 192.168.24.24 inspection_iprange = 192.168.24.100,192.168.24.120 gateway = 192.168.24.1 masquerade = true",
"parameter_defaults: Debug: True",
"custom_env_files = /home/stack/templates/custom-undercloud-params.yaml",
"parameter_defaults: Debug: True AdminPassword: \"myp@ssw0rd!\" AdminEmail: \"[email protected]\"",
"nova::compute::force_raw_images: False",
"nova::config::nova_config: DEFAULT/<parameter_name>: value: <parameter_value>",
"nova::config::nova_config: DEFAULT/network_allocate_retries: value: 20 ironic/serial_console_state_timeout: value: 15",
"hieradata_override = /home/stack/hieradata.yaml",
"ipv6_address_mode = dhcpv6-stateless enable_routed_networks: false local_ip = <ipv6-address> undercloud_admin_host = <ipv6-address> undercloud_public_host = <ipv6-address> [ctlplane-subnet] cidr = <ipv6-address>::<ipv6-mask> dhcp_start = <ipv6-address> dhcp_end = <ipv6-address> dns_nameservers = <ipv6-dns> gateway = <ipv6-address> inspection_iprange = <ipv6-address>,<ipv6-address>",
"[stack@director ~]USD openstack undercloud install",
"[stack@director ~]USD sudo podman ps",
"[stack@director ~]USD source ~/stackrc",
"(undercloud) [stack@director ~]USD",
"[stack@director ~]USD source ~/stackrc",
"(undercloud) [stack@director ~]USD sudo dnf install rhosp-director-images rhosp-director-images-ipa",
"(undercloud) [stack@director ~]USD cd ~/images (undercloud) [stack@director images]USD for i in /usr/share/rhosp-director-images/overcloud-full-latest-16.0.tar /usr/share/rhosp-director-images/ironic-python-agent-latest-16.0.tar; do tar -xvf USDi; done",
"(undercloud) [stack@director images]USD openstack overcloud image upload --image-path /home/stack/images/",
"(undercloud) [stack@director images]USD openstack image list +--------------------------------------+------------------------+ | ID | Name | +--------------------------------------+------------------------+ | ef793cd0-e65c-456a-a675-63cd57610bd5 | overcloud-full | | 9a51a6cb-4670-40de-b64b-b70f4dd44152 | overcloud-full-initrd | | 4f7e33f4-d617-47c1-b36f-cbe90f132e5d | overcloud-full-vmlinuz | +--------------------------------------+------------------------+",
"(undercloud) [stack@director images]USD ls -l /var/lib/ironic/httpboot total 417296 -rwxr-xr-x. 1 root root 6639920 Jan 29 14:48 agent.kernel -rw-r--r--. 1 root root 420656424 Jan 29 14:48 agent.ramdisk -rw-r--r--. 1 42422 42422 758 Jan 29 14:29 boot.ipxe -rw-r--r--. 1 42422 42422 488 Jan 29 14:16 inspector.ipxe",
"[stack@director ~]USD source ~/stackrc",
"(undercloud) [stack@director ~]USD sudo dnf install rhosp-director-images-all",
"(undercloud) [stack@director ~]USD cd ~/images (undercloud) [stack@director images]USD for arch in x86_64 ppc64le ; do mkdir USDarch ; done (undercloud) [stack@director images]USD for arch in x86_64 ppc64le ; do for i in /usr/share/rhosp-director-images/overcloud-full-latest-16.0-USD{arch}.tar /usr/share/rhosp-director-images/ironic-python-agent-latest-16.0-USD{arch}.tar ; do tar -C USDarch -xf USDi ; done ; done",
"(undercloud) [stack@director ~]USD cd ~/images (undercloud) [stack@director images]USD openstack overcloud image upload --image-path ~/images/ppc64le --architecture ppc64le --whole-disk --http-boot /var/lib/ironic/tftpboot/ppc64le (undercloud) [stack@director images]USD openstack overcloud image upload --image-path ~/images/x86_64/ --http-boot /var/lib/ironic/tftpboot",
"(undercloud) [stack@director images]USD openstack image list +--------------------------------------+---------------------------+--------+ | ID | Name | Status | +--------------------------------------+---------------------------+--------+ | 6a6096ba-8f79-4343-b77c-4349f7b94960 | overcloud-full | active | | de2a1bde-9351-40d2-bbd7-7ce9d6eb50d8 | overcloud-full-initrd | active | | 67073533-dd2a-4a95-8e8b-0f108f031092 | overcloud-full-vmlinuz | active | | 69a9ffe5-06dc-4d81-a122-e5d56ed46c98 | ppc64le-bm-deploy-kernel | active | | 464dd809-f130-4055-9a39-cf6b63c1944e | ppc64le-bm-deploy-ramdisk | active | | f0fedcd0-3f28-4b44-9c88-619419007a03 | ppc64le-overcloud-full | active | +--------------------------------------+---------------------------+--------+",
"(undercloud) [stack@director images]USD ls -l /var/lib/ironic/tftpboot /var/lib/ironic/tftpboot/ppc64le/ /var/lib/ironic/tftpboot: total 422624 -rwxr-xr-x. 1 root root 6385968 Aug 8 19:35 agent.kernel -rw-r--r--. 1 root root 425530268 Aug 8 19:35 agent.ramdisk -rwxr--r--. 1 ironic ironic 20832 Aug 8 02:08 chain.c32 -rwxr--r--. 1 ironic ironic 715584 Aug 8 02:06 ipxe.efi -rw-r--r--. 1 root root 22 Aug 8 02:06 map-file drwxr-xr-x. 2 ironic ironic 62 Aug 8 19:34 ppc64le -rwxr--r--. 1 ironic ironic 26826 Aug 8 02:08 pxelinux.0 drwxr-xr-x. 2 ironic ironic 21 Aug 8 02:06 pxelinux.cfg -rwxr--r--. 1 ironic ironic 69631 Aug 8 02:06 undionly.kpxe /var/lib/ironic/tftpboot/ppc64le/: total 457204 -rwxr-xr-x. 1 root root 19858896 Aug 8 19:34 agent.kernel -rw-r--r--. 1 root root 448311235 Aug 8 19:34 agent.ramdisk -rw-r--r--. 1 ironic-inspector ironic-inspector 336 Aug 8 02:06 default",
"[stack@director ~]USD source ~/stackrc",
"(undercloud) [stack@director ~]USD sudo dnf install rhosp-director-images-minimal",
"(undercloud) [stack@director ~]USD cd ~/images (undercloud) [stack@director images]USD tar xf /usr/share/rhosp-director-images/overcloud-minimal-latest-16.0.tar",
"(undercloud) [stack@director images]USD openstack overcloud image upload --image-path /home/stack/images/ --os-image-name overcloud-minimal.qcow2",
"(undercloud) [stack@director images]USD openstack image list +--------------------------------------+---------------------------+ | ID | Name | +--------------------------------------+---------------------------+ | ef793cd0-e65c-456a-a675-63cd57610bd5 | overcloud-full | | 9a51a6cb-4670-40de-b64b-b70f4dd44152 | overcloud-full-initrd | | 4f7e33f4-d617-47c1-b36f-cbe90f132e5d | overcloud-full-vmlinuz | | 32cf6771-b5df-4498-8f02-c3bd8bb93fdd | overcloud-minimal | | 600035af-dbbb-4985-8b24-a4e9da149ae5 | overcloud-minimal-initrd | | d45b0071-8006-472b-bbcc-458899e0d801 | overcloud-minimal-vmlinuz | +--------------------------------------+---------------------------+",
"[stack@director ~]USD source ~/stackrc",
"(undercloud) [stack@director images]USD openstack subnet set --dns-nameserver [nameserver1-ip] --dns-nameserver [nameserver2-ip] ctlplane-subnet",
"(undercloud) [stack@director images]USD openstack subnet show ctlplane-subnet +-------------------+-----------------------------------------------+ | Field | Value | +-------------------+-----------------------------------------------+ | ... | | | dns_nameservers | 8.8.8.8 | | ... | | +-------------------+-----------------------------------------------+",
"enabled_hardware_types = ipmi,redfish,idrac",
"[stack@director ~]USD openstack undercloud install",
"[stack@director ~]USD source ~/stackrc",
"(undercloud) [stack@director ~]USD",
"(undercloud) [stack@director ~]USD openstack baremetal driver list +---------------------+----------------+ | Supported driver(s) | Active host(s) | +---------------------+----------------+ | idrac | unused | | ipmi | unused | | redfish | unused | +---------------------+----------------+",
"sudo systemctl restart httpd"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/director_installation_and_usage/installing-the-undercloud |
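Taken together, the parameters described in this chapter usually produce an undercloud.conf similar to the following sketch. The interface name, IP addresses, hostnames, and file paths are illustrative only and must be adapted to your environment:

[DEFAULT]
undercloud_hostname = director.example.com
local_interface = em1
local_ip = 192.168.24.1/24
undercloud_admin_host = 192.168.24.3
undercloud_public_host = 192.168.24.2
undercloud_nameservers = 10.10.10.10,10.10.10.11
undercloud_ntp_servers = 0.pool.ntp.org
overcloud_domain_name = example.com
generate_service_certificate = true
certificate_generation_ca = local
enabled_hardware_types = ipmi,redfish,idrac
custom_env_files = /home/stack/templates/custom-undercloud-params.yaml
hieradata_override = /home/stack/hieradata.yaml

[ctlplane-subnet]
cidr = 192.168.24.0/24
dhcp_start = 192.168.24.5
dhcp_end = 192.168.24.24
inspection_iprange = 192.168.24.100,192.168.24.120
gateway = 192.168.24.1
masquerade = true

Re-running openstack undercloud install after editing this file applies the new configuration, as shown in the procedure for updating the undercloud configuration.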
Chapter 8. Dynamic provisioning | Chapter 8. Dynamic provisioning 8.1. About dynamic provisioning The StorageClass resource object describes and classifies storage that can be requested, as well as provides a means for passing parameters for dynamically provisioned storage on demand. StorageClass objects can also serve as a management mechanism for controlling different levels of storage and access to the storage. Cluster Administrators ( cluster-admin ) or Storage Administrators ( storage-admin ) define and create the StorageClass objects that users can request without needing any detailed knowledge about the underlying storage volume sources. The OpenShift Container Platform persistent volume framework enables this functionality and allows administrators to provision a cluster with persistent storage. The framework also gives users a way to request those resources without having any knowledge of the underlying infrastructure. Many storage types are available for use as persistent volumes in OpenShift Container Platform. While all of them can be statically provisioned by an administrator, some types of storage are created dynamically using the built-in provider and plugin APIs. 8.2. Available dynamic provisioning plugins OpenShift Container Platform provides the following provisioner plugins, which have generic implementations for dynamic provisioning that use the cluster's configured provider's API to create new storage resources: Storage type Provisioner plugin name Notes Red Hat OpenStack Platform (RHOSP) Cinder kubernetes.io/cinder RHOSP Manila Container Storage Interface (CSI) manila.csi.openstack.org Once installed, the OpenStack Manila CSI Driver Operator and ManilaDriver automatically create the required storage classes for all available Manila share types needed for dynamic provisioning. Amazon Elastic Block Store (Amazon EBS) kubernetes.io/aws-ebs For dynamic provisioning when using multiple clusters in different zones, tag each node with Key=kubernetes.io/cluster/<cluster_name>,Value=<cluster_id> where <cluster_name> and <cluster_id> are unique per cluster. Azure Disk kubernetes.io/azure-disk Azure File kubernetes.io/azure-file The persistent-volume-binder service account requires permissions to create and get secrets to store the Azure storage account and keys. GCE Persistent Disk (gcePD) kubernetes.io/gce-pd In multi-zone configurations, it is advisable to run one OpenShift Container Platform cluster per GCE project to avoid PVs from being created in zones where no node in the current cluster exists. IBM Power(R) Virtual Server Block powervs.csi.ibm.com After installation, the IBM Power(R) Virtual Server Block CSI Driver Operator and IBM Power(R) Virtual Server Block CSI Driver automatically create the required storage classes for dynamic provisioning. VMware vSphere kubernetes.io/vsphere-volume Important Any chosen provisioner plugin also requires configuration for the relevant cloud, host, or third-party provider as per the relevant documentation. 8.3. Defining a storage class StorageClass objects are currently a globally scoped object and must be created by cluster-admin or storage-admin users. Important The Cluster Storage Operator might install a default storage class depending on the platform in use. This storage class is owned and controlled by the Operator. It cannot be deleted or modified beyond defining annotations and labels. If different behavior is desired, you must define a custom storage class. 
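From the user's perspective, dynamic provisioning is triggered by creating a persistent volume claim that names a storage class. The following is a minimal sketch; the claim name, requested size, and storage class name are placeholders:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: <storage-class-name>

When the claim is created, the provisioner associated with the named storage class creates a matching persistent volume and binds it to the claim.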
The following sections describe the basic definition for a StorageClass object and specific examples for each of the supported plugin types. 8.3.1. Basic StorageClass object definition The following resource shows the parameters and default values that you use to configure a storage class. This example uses the AWS ElasticBlockStore (EBS) object definition. Sample StorageClass definition kind: StorageClass 1 apiVersion: storage.k8s.io/v1 2 metadata: name: <storage-class-name> 3 annotations: 4 storageclass.kubernetes.io/is-default-class: 'true' ... provisioner: kubernetes.io/aws-ebs 5 parameters: 6 type: gp3 ... 1 (required) The API object type. 2 (required) The current apiVersion. 3 (required) The name of the storage class. 4 (optional) Annotations for the storage class. 5 (required) The type of provisioner associated with this storage class. 6 (optional) The parameters required for the specific provisioner, this will change from plug-in to plug-in. 8.3.2. Storage class annotations To set a storage class as the cluster-wide default, add the following annotation to your storage class metadata: storageclass.kubernetes.io/is-default-class: "true" For example: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: storageclass.kubernetes.io/is-default-class: "true" ... This enables any persistent volume claim (PVC) that does not specify a specific storage class to automatically be provisioned through the default storage class. However, your cluster can have more than one storage class, but only one of them can be the default storage class. Note The beta annotation storageclass.beta.kubernetes.io/is-default-class is still working; however, it will be removed in a future release. To set a storage class description, add the following annotation to your storage class metadata: kubernetes.io/description: My Storage Class Description For example: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: kubernetes.io/description: My Storage Class Description ... 8.3.3. RHOSP Cinder object definition cinder-storageclass.yaml kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/cinder parameters: type: fast 2 availability: nova 3 fsType: ext4 4 1 Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes. 2 Volume type created in Cinder. Default is empty. 3 Availability Zone. If not specified, volumes are generally round-robined across all active zones where the OpenShift Container Platform cluster has a node. 4 File system that is created on dynamically provisioned volumes. This value is copied to the fsType field of dynamically provisioned persistent volumes and the file system is created when the volume is mounted for the first time. The default value is ext4 . 8.3.4. RHOSP Manila Container Storage Interface (CSI) object definition Once installed, the OpenStack Manila CSI Driver Operator and ManilaDriver automatically create the required storage classes for all available Manila share types needed for dynamic provisioning. 8.3.5. AWS Elastic Block Store (EBS) object definition aws-ebs-storageclass.yaml kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/aws-ebs parameters: type: io1 2 iopsPerGB: "10" 3 encrypted: "true" 4 kmsKeyId: keyvalue 5 fsType: ext4 6 1 (required) Name of the storage class. 
The persistent volume claim uses this storage class for provisioning the associated persistent volumes. 2 (required) Select from io1 , gp3 , sc1 , st1 . The default is gp3 . See the AWS documentation for valid Amazon Resource Name (ARN) values. 3 Optional: Only for io1 volumes. I/O operations per second per GiB. The AWS volume plugin multiplies this by the size of the requested volume to compute IOPS of the volume. The value cap is 20,000 IOPS, which is the maximum supported by AWS. See the AWS documentation for further details. 4 Optional: Denotes whether to encrypt the EBS volume. Valid values are true or false . 5 Optional: The full ARN of the key to use when encrypting the volume. If none is supplied, but encrypted is set to true , then AWS generates a key. See the AWS documentation for a valid ARN value. 6 Optional: File system that is created on dynamically provisioned volumes. This value is copied to the fsType field of dynamically provisioned persistent volumes and the file system is created when the volume is mounted for the first time. The default value is ext4 . 8.3.6. Azure Disk object definition azure-advanced-disk-storageclass.yaml apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/azure-disk volumeBindingMode: WaitForFirstConsumer 2 allowVolumeExpansion: true parameters: kind: Managed 3 storageaccounttype: Premium_LRS 4 reclaimPolicy: Delete 1 Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes. 2 Using WaitForFirstConsumer is strongly recommended. This provisions the volume while allowing enough storage to schedule the pod on a free worker node from an available zone. 3 Possible values are Shared (default), Managed , and Dedicated . Important Red Hat only supports the use of kind: Managed in the storage class. With Shared and Dedicated , Azure creates unmanaged disks, while OpenShift Container Platform creates a managed disk for machine OS (root) disks. But because Azure Disk does not allow the use of both managed and unmanaged disks on a node, unmanaged disks created with Shared or Dedicated cannot be attached to OpenShift Container Platform nodes. 4 Azure storage account SKU tier. Default is empty. Note that Premium VMs can attach both Standard_LRS and Premium_LRS disks, Standard VMs can only attach Standard_LRS disks, Managed VMs can only attach managed disks, and unmanaged VMs can only attach unmanaged disks. If kind is set to Shared , Azure creates all unmanaged disks in a few shared storage accounts in the same resource group as the cluster. If kind is set to Managed , Azure creates new managed disks. If kind is set to Dedicated and a storageAccount is specified, Azure uses the specified storage account for the new unmanaged disk in the same resource group as the cluster. For this to work: The specified storage account must be in the same region. Azure Cloud Provider must have write access to the storage account. If kind is set to Dedicated and a storageAccount is not specified, Azure creates a new dedicated storage account for the new unmanaged disk in the same resource group as the cluster. 8.3.7. Azure File object definition The Azure File storage class uses secrets to store the Azure storage account name and the storage account key that are required to create an Azure Files share. These permissions are created as part of the following procedure.
Procedure Define a ClusterRole object that allows access to create and view secrets: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: # name: system:azure-cloud-provider name: <persistent-volume-binder-role> 1 rules: - apiGroups: [''] resources: ['secrets'] verbs: ['get','create'] 1 The name of the cluster role to view and create secrets. Add the cluster role to the service account: USD oc adm policy add-cluster-role-to-user <persistent-volume-binder-role> system:serviceaccount:kube-system:persistent-volume-binder Create the Azure File StorageClass object: kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <azure-file> 1 provisioner: kubernetes.io/azure-file parameters: location: eastus 2 skuName: Standard_LRS 3 storageAccount: <storage-account> 4 reclaimPolicy: Delete volumeBindingMode: Immediate 1 Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes. 2 Location of the Azure storage account, such as eastus . Default is empty, meaning that a new Azure storage account will be created in the OpenShift Container Platform cluster's location. 3 SKU tier of the Azure storage account, such as Standard_LRS . Default is empty, meaning that a new Azure storage account will be created with the Standard_LRS SKU. 4 Name of the Azure storage account. If a storage account is provided, then skuName and location are ignored. If no storage account is provided, then the storage class searches for any storage account that is associated with the resource group for any accounts that match the defined skuName and location . 8.3.7.1. Considerations when using Azure File The following file system features are not supported by the default Azure File storage class: Symlinks Hard links Extended attributes Sparse files Named pipes Additionally, the owner user identifier (UID) of the Azure File mounted directory is different from the process UID of the container. The uid mount option can be specified in the StorageClass object to define a specific user identifier to use for the mounted directory. The following StorageClass object demonstrates modifying the user and group identifier, along with enabling symlinks for the mounted directory. kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: azure-file mountOptions: - uid=1500 1 - gid=1500 2 - mfsymlinks 3 provisioner: kubernetes.io/azure-file parameters: location: eastus skuName: Standard_LRS reclaimPolicy: Delete volumeBindingMode: Immediate 1 Specifies the user identifier to use for the mounted directory. 2 Specifies the group identifier to use for the mounted directory. 3 Enables symlinks. 8.3.8. GCE PersistentDisk (gcePD) object definition gce-pd-storageclass.yaml apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/gce-pd parameters: type: pd-standard 2 replication-type: none volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true reclaimPolicy: Delete 1 Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes. 2 Select either pd-standard or pd-ssd . The default is pd-standard . 8.3.9. VMware vSphere object definition vsphere-storageclass.yaml kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: csi.vsphere.vmware.com 2 1 Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes. 
2 For more information about using VMware vSphere CSI with OpenShift Container Platform, see the Kubernetes documentation . 8.4. Changing the default storage class Use the following procedure to change the default storage class. For example, if you have two defined storage classes, gp3 and standard , and you want to change the default storage class from gp3 to standard . Prerequisites Access to the cluster with cluster-admin privileges. Procedure To change the default storage class: List the storage classes: USD oc get storageclass Example output NAME TYPE gp3 (default) kubernetes.io/aws-ebs 1 standard kubernetes.io/aws-ebs 1 (default) indicates the default storage class. Make the desired storage class the default. For the desired storage class, set the storageclass.kubernetes.io/is-default-class annotation to true by running the following command: USD oc patch storageclass standard -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}' Note You can have multiple default storage classes for a short time. However, you should ensure that only one default storage class exists eventually. With multiple default storage classes present, any persistent volume claim (PVC) requesting the default storage class ( pvc.spec.storageClassName =nil) gets the most recently created default storage class, regardless of the default status of that storage class, and the administrator receives an alert in the alerts dashboard that there are multiple default storage classes, MultipleDefaultStorageClasses . Remove the default storage class setting from the old default storage class. For the old default storage class, change the value of the storageclass.kubernetes.io/is-default-class annotation to false by running the following command: USD oc patch storageclass gp3 -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}' Verify the changes: USD oc get storageclass Example output NAME TYPE gp3 kubernetes.io/aws-ebs standard (default) kubernetes.io/aws-ebs | [
"kind: StorageClass 1 apiVersion: storage.k8s.io/v1 2 metadata: name: <storage-class-name> 3 annotations: 4 storageclass.kubernetes.io/is-default-class: 'true' provisioner: kubernetes.io/aws-ebs 5 parameters: 6 type: gp3",
"storageclass.kubernetes.io/is-default-class: \"true\"",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: storageclass.kubernetes.io/is-default-class: \"true\"",
"kubernetes.io/description: My Storage Class Description",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: kubernetes.io/description: My Storage Class Description",
"kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/cinder parameters: type: fast 2 availability: nova 3 fsType: ext4 4",
"kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/aws-ebs parameters: type: io1 2 iopsPerGB: \"10\" 3 encrypted: \"true\" 4 kmsKeyId: keyvalue 5 fsType: ext4 6",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/azure-disk volumeBindingMode: WaitForFirstConsumer 2 allowVolumeExpansion: true parameters: kind: Managed 3 storageaccounttype: Premium_LRS 4 reclaimPolicy: Delete",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: system:azure-cloud-provider name: <persistent-volume-binder-role> 1 rules: - apiGroups: [''] resources: ['secrets'] verbs: ['get','create']",
"oc adm policy add-cluster-role-to-user <persistent-volume-binder-role> system:serviceaccount:kube-system:persistent-volume-binder",
"kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <azure-file> 1 provisioner: kubernetes.io/azure-file parameters: location: eastus 2 skuName: Standard_LRS 3 storageAccount: <storage-account> 4 reclaimPolicy: Delete volumeBindingMode: Immediate",
"kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: azure-file mountOptions: - uid=1500 1 - gid=1500 2 - mfsymlinks 3 provisioner: kubernetes.io/azure-file parameters: location: eastus skuName: Standard_LRS reclaimPolicy: Delete volumeBindingMode: Immediate",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/gce-pd parameters: type: pd-standard 2 replication-type: none volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true reclaimPolicy: Delete",
"kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: csi.vsphere.vmware.com 2",
"oc get storageclass",
"NAME TYPE gp3 (default) kubernetes.io/aws-ebs 1 standard kubernetes.io/aws-ebs",
"oc patch storageclass standard -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"true\"}}}'",
"oc patch storageclass gp3 -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"false\"}}}'",
"oc get storageclass",
"NAME TYPE gp3 kubernetes.io/aws-ebs standard (default) kubernetes.io/aws-ebs"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/storage/dynamic-provisioning |
A. Revision History | A. Revision History Revision History Revision 2.0-2 2022-03-23 Navinya Shende Updates examples with Hydra API Revision 2.0-1.400 2013-12-18 Rudiger Landmann Rebuild with publican 4.0.0 Revision 2.0-1 Fri Nov 1 2013 Zac Dover Publish for new site Revision 0.0-0 Tue Sep 24 2013 Misty Stanley-Jones Converted existing document to Docbook | null | https://docs.redhat.com/en/documentation/red_hat_customer_portal/1/html/customer_portal_integration_guide/appe-integration_guide-revision_history |
Chapter 5. Performing cross-site operations via JMX | Chapter 5. Performing cross-site operations via JMX Perform cross-site operations such as pushing state transfer and bringing sites online via JMX. 5.1. Registering JMX MBeans Data Grid can register JMX MBeans that you can use to collect statistics and perform administrative operations. You must also enable statistics otherwise Data Grid provides 0 values for all statistic attributes in JMX MBeans. Important Use JMX Mbeans for collecting statistics only when Data Grid is embedded in applications and not with a remote Data Grid server. When you use JMX Mbeans for collecting statistics from a remote Data Grid server, the data received from JMX Mbeans might differ from the data received from other APIs such as REST. In such cases the data received from the other APIs is more accurate. Procedure Open your Data Grid configuration for editing. Add the jmx element or object to the cache container and specify true as the value for the enabled attribute or field. Add the domain attribute or field and specify the domain where JMX MBeans are exposed, if required. Save and close your client configuration. JMX configuration XML <infinispan> <cache-container statistics="true"> <jmx enabled="true" domain="example.com"/> </cache-container> </infinispan> JSON { "infinispan" : { "cache-container" : { "statistics" : "true", "jmx" : { "enabled" : "true", "domain" : "example.com" } } } } YAML infinispan: cacheContainer: statistics: "true" jmx: enabled: "true" domain: "example.com" 5.2. Performing cross-site operations with JMX clients Perform cross-site operations with JMX clients. Prerequisites Configure Data Grid to register JMX MBeans Procedure Connect to Data Grid with any JMX client. Invoke operations from the following MBeans: XSiteAdmin provides cross-site operations for caches. GlobalXSiteAdminOperations provides cross-site operations for Cache Managers. For example, to bring sites back online, invoke bringSiteOnline(siteName) . Additional resources XSiteAdmin MBean GlobalXSiteAdminOperations MBean 5.3. JMX MBeans for cross-site replication Data Grid provides JMX MBeans for cross-site replication that let you gather statistics and perform remote operations. The org.infinispan:type=Cache component provides the following JMX MBeans: XSiteAdmin exposes cross-site operations that apply to specific cache instances. RpcManager provides statistics about network requests for cross-site replication. AsyncXSiteStatistics provides statistics for asynchronous cross-site replication, including queue size and number of conflicts. The org.infinispan:type=CacheManager component includes the following JMX MBean: GlobalXSiteAdminOperations exposes cross-site operations that apply to all caches in a cache container. For details about JMX MBeans along with descriptions of available operations and statistics, see the Data Grid JMX Components documentation. Additional resources Data Grid JMX Components | [
"<infinispan> <cache-container statistics=\"true\"> <jmx enabled=\"true\" domain=\"example.com\"/> </cache-container> </infinispan>",
"{ \"infinispan\" : { \"cache-container\" : { \"statistics\" : \"true\", \"jmx\" : { \"enabled\" : \"true\", \"domain\" : \"example.com\" } } } }",
"infinispan: cacheContainer: statistics: \"true\" jmx: enabled: \"true\" domain: \"example.com\""
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/data_grid_cross-site_replication/cross-site-operations-jmx |
Chapter 14. Integrating with Splunk | Chapter 14. Integrating with Splunk If you are using Splunk , you can forward alerts from Red Hat Advanced Cluster Security for Kubernetes to Splunk and view the violations, vulnerability detection, and compliance related data from within Splunk. Important Currently, Splunk integration is not supported on IBM Power( ppc64le ) and IBM Z( s390x ). Depending on your use case, you can integrate Red Hat Advanced Cluster Security for Kubernetes with Splunk by using the following ways: By using an HTTP event collector in Splunk: Use the event collector option to forward alerts and audit log data. By using the Red Hat Advanced Cluster Security for Kubernetes add-on : Use the add-on to pull the violations, vulnerability detection, and compliance data into Splunk. You can use one or both of these integration options to integrate the Red Hat Advanced Cluster Security for Kubernetes with Splunk. 14.1. Using the HTTP event collector You can forward alerts from Red Hat Advanced Cluster Security for Kubernetes to Splunk by using an HTTP event collector. To integrate Red Hat Advanced Cluster Security for Kubernetes with Splunk by using the HTTP event collector, follow these steps: Add a new HTTP event collector in Splunk and get the token value. Use the token value to set up notifications in Red Hat Advanced Cluster Security for Kubernetes. Identify policies for which you want to send notifications, and update the notification settings for those policies. 14.1.1. Adding an HTTP event collector in Splunk Add a new HTTP event collector for your Splunk instance, and get the token. Procedure In your Splunk dashboard, go to Settings Add Data . Click Monitor . On the Add Data page, click HTTP Event Collector . Enter a Name for the event collector and then click > . Accept the default Input Settings and click Review > . Review the event collector properties and click Submit > . Copy the Token Value for the event collector. You need this token value to configure integration with Splunk in Red Hat Advanced Cluster Security for Kubernetes. 14.1.1.1. Enabling HTTP event collector You must enable HTTP event collector tokens before you can receive events. Procedure In your Splunk dashboard, go to Settings Data inputs . Click HTTP Event Collector . Click Global Settings . In the dialog that opens, click Enabled and then click Save . 14.1.2. Configuring Splunk integration in Red Hat Advanced Cluster Security for Kubernetes Create a new Splunk integration in Red Hat Advanced Cluster Security for Kubernetes by using the token value. Procedure In the RHACS portal, go to Platform Configuration Integrations . Scroll down to the Notifier Integrations section and select Splunk . Click New Integration ( add icon). Enter a name for Integration Name . Enter your Splunk URL in the HTTP Event Collector URL field. You must specify the port number if it is not 443 for HTTPS or 80 for HTTP. You must also add the URL path /services/collector/event at the end of the URL. For example, https://<splunk-server-path>:8088/services/collector/event . Enter your token in the HTTP Event Collector Token field. Note If you are using Red Hat Advanced Cluster Security for Kubernetes version 3.0.57 or newer, you can specify custom Source Type for Alert events and Source Type for Audit events. Select Test to send a test message to verify that the integration with Splunk is working. Select Create to generate the configuration. 14.1.3. Configuring policy notifications Enable alert notifications for system policies. 
Procedure In the RHACS portal, go to Platform Configuration Policy Management . Select one or more policies for which you want to send alerts. Under Bulk actions , select Enable notification . In the Enable notification window, select the Splunk notifier. Note If you have not configured any other integrations, the system displays a message that no notifiers are configured. Click Enable . Note Red Hat Advanced Cluster Security for Kubernetes sends notifications on an opt-in basis. To receive notifications, you must first assign a notifier to the policy. Notifications are only sent once for a given alert. If you have assigned a notifier to a policy, you will not receive a notification unless a violation generates a new alert. Red Hat Advanced Cluster Security for Kubernetes creates a new alert for the following scenarios: A policy violation occurs for the first time in a deployment. A runtime-phase policy violation occurs in a deployment after you resolved the runtime alert for a policy in that deployment. 14.2. Using the Red Hat Advanced Cluster Security for Kubernetes add-on You can use the Red Hat Advanced Cluster Security for Kubernetes add-on to forward the vulnerability detection and compliance related data from the Red Hat Advanced Cluster Security for Kubernetes to Splunk. Generate an API token with read permission for all resources in Red Hat Advanced Cluster Security for Kubernetes and then use that token to install and configure the add-on. 14.2.1. Installing and configuring the Splunk add-on You can install the Red Hat Advanced Cluster Security for Kubernetes add-on from your Splunk instance. Note To maintain backward compatibility with the StackRox Kubernetes Security Platform add-on, the source_type and input_type parameters for configured inputs are still called stackrox_compliance , stackrox_violations , and stackrox_vulnerability_management . Prerequisites You must have an API token with read permission for all resources of Red Hat Advanced Cluster Security for Kubernetes. You can assign the Analyst system role to grant this level of access. The Analyst role has read permissions for all resources. Procedure Download the Red Hat Advanced Cluster Security for Kubernetes add-on from Splunkbase . Go to the Splunk home page on your Splunk instance. Go to Apps Manage Apps . Select Install app from file . In the Upload app pop-up box, select Choose File and select the Red Hat Advanced Cluster Security for Kubernetes add-on file. Click Upload . Click Restart Splunk , and confirm to restart. After Splunk restarts, select Red Hat Advanced Cluster Security for Kubernetes from the Apps menu. Go to Configuration and then click Add-on Settings . For Central Endpoint , enter the IP address or the name of your Central instance. For example, central.custom:443 . Enter the API token you have generated for the add-on. Click Save . Go to Inputs . Click Create New Input , and select one of the following: ACS Compliance to pull the compliance data. ACS Violations to pull the violations data. ACS Vulnerability Management to pull the vulnerabilities data. Enter a Name for the input. Select an Interval to pull data from Red Hat Advanced Cluster Security for Kubernetes. For example, every 14400 seconds. Select the Splunk Index to which you want to send the data. For Central Endpoint , enter the IP address or the name of your Central instance. Enter the API token you have generated for the add-on. Click Add . 
Verification To verify the Red Hat Advanced Cluster Security for Kubernetes add-on installation, query the received data. In your Splunk instance, go to Search and type index=* sourcetype="stackrox-*" as the query. Press Enter . Verify that your configured sources are displayed in the search results. 14.2.2. Update the StackRox Kubernetes Security Platform add-on If you are using the StackRox Kubernetes Security Platform add-on, you must upgrade to the new Red Hat Advanced Cluster Security for Kubernetes add-on. You can see the update notification on the Splunk homepage under the list of apps on the left. Alternatively, you can also go to the Apps Manage apps page to see the update notification. Prerequisites You must have an API token with read permission for all resources of Red Hat Advanced Cluster Security for Kubernetes. You can assign the Analyst system role to grant this level of access. The Analyst role has read permissions for all the resources. Procedure Click Update on the update notification. Select the checkbox for accepting the terms and conditions, and then click Accept and Continue to install the update. After the installation, select Red Hat Advanced Cluster Security for Kubernetes from the Apps menu. Go to Configuration and then click Add-on Settings . Enter the API token you have generated for the add-on. Click Save . 14.2.3. Troubleshoot the Splunk add-on If you stop receiving events from the Red Hat Advanced Cluster Security for Kubernetes add-on, check the Splunk add-on debug logs for errors. Splunk creates a debug log file for every configured input in the /opt/splunk/var/log/splunk directory. Find the file named stackrox_<input>_<uid>.log , for example, stackrox_compliance_29a3e14798aa2363d.log , and look for issues. | null | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.7/html/integrating/integrate-with-splunk
Chapter 3. Deploying AMQ Broker on OpenShift Container Platform using the AMQ Broker Operator 3.1. Prerequisites Before you install the Operator and use it to create a broker deployment, you should consult the Operator deployment notes in Section 2.9, "Operator deployment notes" . 3.2. Installing the Operator using the CLI Note Each Operator release requires that you download the latest AMQ Broker 7.12.3 Operator Installation and Example Files as described below. The procedures in this section show how to use the OpenShift command-line interface (CLI) to install and deploy the latest version of the Operator for AMQ Broker 7.12 in a given OpenShift project. In subsequent procedures, you use this Operator to deploy some broker instances. For an alternative method of installing the AMQ Broker Operator that uses the OperatorHub graphical interface, see Section 3.3, "Installing the Operator using OperatorHub" . To learn about upgrading existing Operator-based broker deployments, see Chapter 6, Upgrading an Operator-based broker deployment . 3.2.1. Preparing to deploy the Operator Before you deploy the Operator using the CLI, you must download the Operator installation files and prepare the deployment. Procedure In your web browser, navigate to the Software Downloads page for AMQ Broker 7.12.3 releases . Ensure that the value of the Version drop-down list is set to 7.12.3 and the Releases tab is selected. Next to the latest AMQ Broker 7.12.3 Operator Installation and Example Files , click Download . Download of the amq-broker-operator-7.12.3-ocp-install-examples-rhel8.zip compressed archive automatically begins. Move the archive to your chosen directory. The following example moves the archive to a directory called ~/broker/operator . USD mkdir ~/broker/operator USD mv amq-broker-operator-7.12.3-ocp-install-examples-rhel8.zip ~/broker/operator In your chosen directory, extract the contents of the archive. For example: USD cd ~/broker/operator USD unzip amq-broker-operator-7.12.3-ocp-install-examples-rhel8.zip Switch to the directory that was created when you extracted the archive. For example: USD cd amq-broker-operator-7.12.3-ocp-install-examples Log in to OpenShift Container Platform as a cluster administrator. For example: USD oc login -u system:admin Specify the project in which you want to install the Operator. You can create a new project or switch to an existing one. Create a new project: USD oc new-project <project_name> Or, switch to an existing project: USD oc project <project_name> Specify a service account to use with the Operator. In the deploy directory of the Operator archive that you extracted, open the service_account.yaml file. Ensure that the kind element is set to ServiceAccount . If you want to change the default service account name, in the metadata section, replace amq-broker-controller-manager with a custom name. Create the service account in your project. USD oc create -f deploy/service_account.yaml Specify a role name for the Operator. Open the role.yaml file. This file specifies the resources that the Operator can use and modify. Ensure that the kind element is set to Role . If you want to change the default role name, in the metadata section, replace amq-broker-operator-role with a custom name. Create the role in your project. USD oc create -f deploy/role.yaml Specify a role binding for the Operator.
The role binding binds the previously-created service account to the Operator role, based on the names you specified. Open the role_binding.yaml file. Ensure that the name values for ServiceAccount and Role match those specified in the service_account.yaml and role.yaml files. For example: metadata: name: amq-broker-operator-rolebinding subjects: kind: ServiceAccount name: amq-broker-controller-manager roleRef: kind: Role name: amq-broker-operator-role Create the role binding in your project. USD oc create -f deploy/role_binding.yaml Specify a leader election role binding for the Operator. The role binding binds the previously-created service account to the leader election role, based on the names you specified. Create a leader election role for the Operator. USD oc create -f deploy/election_role.yaml Create the leader election role binding in your project. USD oc create -f deploy/election_role_binding.yaml (Optional) If you want the Operator to watch multiple namespaces, complete the following steps: Note If the OpenShift Container Platform cluster already contains installed Operators for AMQ Broker, you must ensure the new Operator does not watch any of the same namespaces as existing Operators. For information on how to identify the namespaces that are watched by existing Operators, see Identifying namespaces watched by existing Operators . In the deploy directory of the Operator archive that you downloaded and extracted, open the operator.yaml file. If you want the Operator to watch all namespaces in the cluster, in the WATCH_NAMESPACE section, add a value attribute and set the value to an asterisk. Comment out the existing attributes in the WATCH_NAMESPACE section. For example: - name: WATCH_NAMESPACE value: "*" # valueFrom: # fieldRef: # fieldPath: metadata.namespace Note To avoid conflicts, ensure that multiple Operators do not watch the same namespace. For example, if you deploy an Operator to watch all namespaces on the cluster, you cannot deploy another Operator to watch individual namespaces. If Operators are already deployed on the cluster, you can specify a list of namespaces that the new Operator watches, as described in the following step. If you want the Operator to watch multiple, but not all, namespaces on the cluster, in the WATCH_NAMESPACE section, specify a list of namespaces. Ensure that you exclude any namespaces that are watched by existing Operators. For example: - name: WATCH_NAMESPACE value: "namespace1, namespace2" . In the deploy directory of the Operator archive that you downloaded and extracted, open the cluster_role_binding.yaml file. In the Subjects section, specify a namespace that corresponds to the OpenShift Container Platform project to which you are deploying the Operator. For example: Subjects: - kind: ServiceAccount name: amq-broker-controller-manager namespace: operator-project Note If you previously deployed brokers using an earlier version of the Operator, and you want to deploy the Operator to watch multiple namespaces, see Before you upgrade . Create a cluster role in your project. USD oc create -f deploy/cluster_role.yaml Create a cluster role binding in your project. USD oc create -f deploy/cluster_role_binding.yaml In the procedure that follows, you deploy the Operator in your project. 3.2.2. Deploying the Operator using the CLI The procedure in this section shows how to use the OpenShift command-line interface (CLI) to deploy the latest version of the Operator for AMQ Broker 7.12 in your OpenShift project.
Prerequisites You must have already prepared your OpenShift project for the Operator deployment. See Section 3.2.1, "Preparing to deploy the Operator" . Starting in AMQ Broker 7.3, you use a new version of the Red Hat Ecosystem Catalog to access container images. This new version of the registry requires you to become an authenticated user before you can access images. Before you can follow the procedure in this section, you must first complete the steps described in Red Hat Container Registry Authentication . If you intend to deploy brokers with persistent storage and do not have container-native storage in your OpenShift cluster, you need to manually provision Persistent Volumes (PVs) and ensure that they are available to be claimed by the Operator. For example, if you want to create a cluster of two brokers with persistent storage (that is, by setting persistenceEnabled=true in your Custom Resource), you need to have two PVs available. By default, each broker instance requires storage of 2 GiB. If you specify persistenceEnabled=false in your Custom Resource, the deployed brokers use ephemeral storage. Ephemeral storage means that every time you restart the broker Pods, any existing data is lost. For more information about provisioning persistent storage, see: Understanding persistent storage Procedure In the OpenShift command-line interface (CLI), log in to OpenShift as a cluster administrator. For example: USD oc login -u system:admin Switch to the project that you previously prepared for the Operator deployment. For example: USD oc project <project_name> Switch to the directory that was created when you previously extracted the Operator installation archive. For example: USD cd ~/broker/operator/amq-broker-operator-7.12.3-ocp-install-examples Deploy the CRDs that are included with the Operator. You must install the CRDs in your OpenShift cluster before deploying and starting the Operator. Deploy the main broker CRD. USD oc create -f deploy/crds/broker_activemqartemis_crd.yaml Deploy the address CRD. USD oc create -f deploy/crds/broker_activemqartemisaddress_crd.yaml Deploy the scaledown controller CRD. USD oc create -f deploy/crds/broker_activemqartemisscaledown_crd.yaml Deploy the security CRD: USD oc create -f deploy/crds/broker_activemqartemissecurity_crd.yaml Link the pull secret associated with the account used for authentication in the Red Hat Ecosystem Catalog with the default , deployer , and builder service accounts for your OpenShift project. USD oc secrets link --for=pull default <secret_name> USD oc secrets link --for=pull deployer <secret_name> USD oc secrets link --for=pull builder <secret_name> In the deploy directory of the Operator archive that you downloaded and extracted, open the operator.yaml file. Ensure that the value of the spec.containers.image property corresponds to version 7.12.3-opr-1 of the Operator, as shown below. spec: template: spec: containers: #image: registry.redhat.io/amq7/amq-broker-rhel8-operator:7.10 image: registry.redhat.io/amq7/amq-broker-rhel8-operator@sha256:1fd01079ad519e1a47b886893a0635491759ace2f73eda7615a9c8c2f454ba89 Note In the operator.yaml file, the Operator uses an image that is represented by a Secure Hash Algorithm (SHA) value. The comment line, which begins with a number sign ( # ) symbol, denotes that the SHA value corresponds to a specific container image tag. Deploy the Operator. USD oc create -f deploy/operator.yaml In your OpenShift project, the Operator starts in a new Pod.
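Before you check the deployment in the web console as described next, you can optionally confirm it from the CLI. The following commands are a minimal sketch; the Deployment name amq-broker-controller-manager matches the name used in the operator.yaml file, and <project_name> is a placeholder for your own project.

oc get pods -n <project_name>
oc logs deployment/amq-broker-controller-manager -n <project_name>

The first command should show the Operator Pod in the Running state, and the second command shows the startup messages written by the Operator.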
In the OpenShift Container Platform web console, the information on the Events tab of the Operator Pod confirms that OpenShift has deployed the Operator image that you specified, has assigned a new container to a node in your OpenShift cluster, and has started the new container. In addition, if you click the Logs tab within the Pod, the output should include lines resembling the following: The preceding output confirms that the newly-deployed Operator is communicating with Kubernetes, that the controllers for the broker and addressing are running, and that these controllers have started some workers. Note It is recommended that you deploy only a single instance of the AMQ Broker Operator in a given OpenShift project. Setting the spec.replicas property of your Operator deployment to a value greater than 1 , or deploying the Operator more than once in the same project is not recommended. Additional resources For an alternative method of installing the AMQ Broker Operator that uses the OperatorHub graphical interface, see Section 3.3, "Installing the Operator using OperatorHub" . 3.3. Installing the Operator using OperatorHub 3.3.1. Overview of the Operator Lifecycle Manager In OpenShift Container Platform 4.5 and later, the Operator Lifecycle Manager (OLM) helps users install, update, and generally manage the lifecycle of all Operators and their associated services running across their clusters. It is part of the Operator Framework, an open source toolkit designed to manage Kubernetes-native applications (Operators) in an effective, automated, and scalable way. The OLM runs by default in OpenShift Container Platform 4.5 and later, which aids cluster administrators in installing, upgrading, and granting access to Operators running on their cluster. The OpenShift Container Platform web console provides management screens for cluster administrators to install Operators, as well as grant specific projects access to use the catalog of Operators available on the cluster. OperatorHub is the graphical interface that OpenShift cluster administrators use to discover, install, and upgrade Operators using the OLM. With one click, these Operators can be pulled from OperatorHub, installed on the cluster, and managed by the OLM, ready for engineering teams to self-service manage the software in development, test, and production environments. When you have deployed the Operator, you can use Custom Resource (CR) instances to create broker deployments such as standalone and clustered brokers. 3.3.2. Deploying the Operator from OperatorHub This procedure shows how to use OperatorHub to deploy the latest version of the Operator for AMQ Broker to a specified OpenShift project. Note In OperatorHub, you can install only the latest Operator version that is provided in each channel. If you want to install an earlier version of an Operator, you must install the Operator by using the CLI. For more information, see Section 3.2, "Installing the Operator using the CLI" . Prerequisites The Red Hat Integration - AMQ Broker for RHEL 8 (Multiarch) Operator must be available in OperatorHub. You have cluster administrator privileges. Procedure Log in to the OpenShift Container Platform web console as a cluster administrator. In left navigation menu, click Operators OperatorHub . On the Project drop-down menu at the top of the OperatorHub page, select the project in which you want to deploy the Operator. On the OperatorHub page, use the Filter by keyword... 
box to find the Red Hat Integration - AMQ Broker for RHEL 8 (Multiarch) Operator. Note In OperatorHub, you might find more than one Operator that includes AMQ Broker in its name. Ensure that you click the Red Hat Integration - AMQ Broker for RHEL 8 (Multiarch) Operator. When you click this Operator, review the information pane that opens. For AMQ Broker 7.12, the latest minor version tag of this Operator is 7.12.3-opr-1 . Click the Red Hat Integration - AMQ Broker for RHEL 8 (Multiarch) Operator. On the dialog box that appears, click Install . On the Install Operator page: Under Update Channel , select the 7.11.x channel to receive updates for version 7.11 only. The 7.11.x channel is a Long Term Support (LTS) channel. Depending on when your OpenShift Container Platform cluster was installed, you may also see channels for older versions of AMQ Broker. The only other supported channel is 7.10.x , which is also an LTS channel. Under Installation Mode , choose which namespaces the Operator watches: A specific namespace on the cluster - The Operator is installed in that namespace and only monitors that namespace for CR changes. All namespaces - The Operator monitors all namespaces for CR changes. Note If you previously deployed brokers using an earlier version of the Operator, and you want to deploy the Operator to watch many namespaces, see Before you upgrade . From the Installed Namespace drop-down menu, select the project in which you want to install the Operator. Under Approval Strategy , ensure that the radio button entitled Automatic is selected. This option specifies that updates to the Operator do not require manual approval for installation to take place. Note The approval strategy applies only to updates between micro versions of the Operator. Automatic updates between minor Operator versions are not supported. For example, if the current Operator is version 7.11.7, an automatic update to version 7.12.x is not possible. To update between minor versions of the Operator, you must manually uninstall the current Operator and install the new Operator from the channel where Operators for that minor version are made available. For more information, see Section 6.3, "Manually upgrading the Operator using OperatorHub" . Click Install . When the Operator installation is complete, the Installed Operators page opens. You should see that the Red Hat Integration - AMQ Broker for RHEL 8 (Multiarch) Operator is installed in the project namespace that you specified. Additional resources To learn how to create a broker deployment in a project that has the Operator for AMQ Broker installed, see Section 3.4.1, "Deploying a basic broker instance" . 3.4. Creating Operator-based broker deployments 3.4.1. Deploying a basic broker instance The following procedure shows how to use a Custom Resource (CR) instance to create a basic broker deployment. Note While you can create more than one broker deployment in a given OpenShift project by deploying multiple Custom Resource (CR) instances, typically, you create a single broker deployment in a project, and then deploy multiple CR instances for addresses. Red Hat recommends you create broker deployments in separate projects. In AMQ Broker 7.12, if you want to configure the following items, you must add the appropriate configuration to the main broker CR instance before deploying the CR for the first time.
The size and storage class of the Persistent Volume Claim (PVC) required by each broker in a deployment for persistent storage Limits and requests for memory and CPU for each broker in a deployment Prerequisites You must have already installed the AMQ Broker Operator. To use the OpenShift command-line interface (CLI) to install the AMQ Broker Operator, see Section 3.2, "Installing the Operator using the CLI" . To use the OperatorHub graphical interface to install the AMQ Broker Operator, see Section 3.3, "Installing the Operator using OperatorHub" . You should understand how the Operator chooses a broker container image to use for your broker deployment. For more information, see Section 2.7, "How the Operator chooses container images" . Starting in AMQ Broker 7.3, you use a new version of the Red Hat Ecosystem Catalog to access container images. This new version of the registry requires you to become an authenticated user before you can access images. Before you can follow the procedure in this section, you must first complete the steps described in Red Hat Container Registry Authentication . Procedure When you have successfully installed the Operator, the Operator is running and listening for changes related to your CRs. This example procedure shows how to use a CR instance to deploy a basic broker in your project. Start configuring a Custom Resource (CR) instance for the broker deployment. Using the OpenShift command-line interface: Log in to OpenShift as a user that has privileges to deploy CRs in the project in which you are creating the deployment. Open the sample CR file called broker_activemqartemis_cr.yaml that was included in the deploy/crs directory of the Operator installation archive that you downloaded and extracted. Using the OpenShift Container Platform web console: Log in to the console as a user that has privileges to deploy CRs in the project in which you are creating the deployment. Start a new CR instance based on the main broker CRD. In the left pane, click Administration Custom Resource Definitions . Click the ActiveMQArtemis CRD. Click the Instances tab. Click Create ActiveMQArtemis . Within the console, a YAML editor opens, enabling you to configure a CR instance. For a basic broker deployment, a configuration might resemble that shown below. apiVersion: broker.amq.io/v1beta1 kind: ActiveMQArtemis metadata: name: ex-aao spec: deploymentPlan: size: 1 image: placeholder requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true Observe that in the broker_activemqartemis_cr.yaml sample CR file, the image property is set to a default value of placeholder . This value indicates that, by default, the image property does not specify a broker container image to use for the deployment. To learn how the Operator determines the appropriate broker container image to use, see Section 2.7, "How the Operator chooses container images" . Note The broker_activemqartemis_cr.yaml sample CR uses a naming convention of ex-aao . This naming convention denotes that the CR is an example resource for the AMQ Broker Operator . AMQ Broker is based on the ActiveMQ Artemis project. When you deploy this sample CR, the resulting StatefulSet uses the name ex-aao-ss . Furthermore, broker Pods in the deployment are directly based on the StatefulSet name, for example, ex-aao-ss-0 , ex-aao-ss-1 , and so on. The application name in the CR appears in the deployment as a label on the StatefulSet. You might use this label in a Pod selector, for example. 
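As a sketch of such a selector, the following commands first list the labels that the Operator applied to the StatefulSet and then select the broker Pods by one of them. The label key and value shown ( application=ex-aao-app ) are assumptions for illustration; substitute a label that the first command actually reports.

oc get statefulset ex-aao-ss --show-labels
oc get pods -l application=ex-aao-app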
The size property specifies the number of brokers to deploy. A value of 2 or greater specifies a clustered broker deployment. However, to deploy a single broker instance, ensure that the value is set to 1 . Deploy the CR instance. Using the OpenShift command-line interface: Save the CR file. Switch to the project in which you are creating the broker deployment. Create the CR instance. Using the OpenShift web console: When you have finished configuring the CR, click Create . In the OpenShift Container Platform web console, click Workloads StatefulSets . You see a new StatefulSet called ex-aao-ss . Click the ex-aao-ss StatefulSet. You see that there is one Pod, corresponding to the single broker that you defined in the CR. Within the StatefulSet, click the Pods tab. Click the ex-aao-ss Pod. On the Events tab of the running Pod, you see that the broker container has started. The Logs tab shows that the broker itself is running. To test that the broker is running normally, access a shell on the broker Pod to send some test messages. Using the OpenShift Container Platform web console: Click Workloads Pods . Click the ex-aao-ss Pod. Click the Terminal tab. Using the OpenShift command-line interface: Get the Pod names and internal IP addresses for your project. Access the shell for the broker Pod. From the shell, use the artemis command to send some test messages. Specify the internal IP address of the broker Pod in the URL. For example: The preceding command automatically creates a queue called demoQueue on the broker and sends a default quantity of 1000 messages to the queue. You should see output that resembles the following: Additional resources For a complete configuration reference for the main broker Custom Resource (CR), see Section 8.1, "Custom Resource configuration reference" . To learn how to connect a running broker to AMQ Management Console, see Chapter 5, Connecting to AMQ Management Console for an Operator-based broker deployment . 3.4.2. Deploying clustered brokers If there are two or more broker Pods running in your project, the Pods automatically form a broker cluster. A clustered configuration enables brokers to connect to each other and redistribute messages as needed, for load balancing. The following procedure shows you how to deploy clustered brokers. By default, the brokers in this deployment use on demand load balancing, meaning that brokers will forward messages only to other brokers that have matching consumers. Prerequisites A basic broker instance is already deployed. See Section 3.4.1, "Deploying a basic broker instance" . Procedure Open the CR file that you used for your basic broker deployment. For a clustered deployment, ensure that the value of deploymentPlan.size is 2 or greater. For example: apiVersion: broker.amq.io/v1beta1 kind: ActiveMQArtemis metadata: name: ex-aao spec: deploymentPlan: size: 4 image: placeholder ... Note In the metadata section, you need to include the namespace property and specify a value only if you are using the OpenShift Container Platform web console to create your CR instance. The value that you should specify is the name of the OpenShift project for your broker deployment. Save the modified CR file. Log in to OpenShift as a user that has privileges to deploy CRs in the project in which you previously created your basic broker deployment. Switch to the project in which you previously created your basic broker deployment. 
At the command line, apply the change: USD oc apply -f <path/to/custom_resource_instance> .yaml In the OpenShift Container Platform web console, additional broker Pods start in your project, according to the number specified in your CR. By default, the brokers running in the project are clustered. Open the Logs tab of each Pod. The logs show that OpenShift has established a cluster connection bridge on each broker. Specifically, the log output includes a line like the following: 3.4.3. Applying Custom Resource changes to running broker deployments The following are some important things to note about applying Custom Resource (CR) changes to running broker deployments: You cannot dynamically update the persistenceEnabled attribute in your CR. To change this attribute, scale your cluster down to zero brokers. Delete the existing CR. Then, recreate and redeploy the CR with your changes, also specifying a deployment size. As described in Section 3.2.2, "Deploying the Operator using the CLI" , if you create a broker deployment with persistent storage (that is, by setting persistenceEnabled=true in your CR), you might need to provision Persistent Volumes (PVs) for the AMQ Broker Operator to claim for your broker Pods. If you scale down the size of your broker deployment, the Operator releases any PVs that it previously claimed for the broker Pods that are now shut down. However, if you remove your broker deployment by deleting your CR, AMQ Broker Operator does not release Persistent Volume Claims (PVCs) for any broker Pods that are still in the deployment when you remove it. In addition, these unreleased PVs are unavailable to any new deployment. In this case, you need to manually release the volumes. For more information, see Release a persistent volume in the OpenShift documentation. In AMQ Broker 7.12, if you want to configure the following items, you must add the appropriate configuration to the main CR instance before deploying the CR for the first time. The size and storage class of the Persistent Volume Claim (PVC) required by each broker in a deployment for persistent storage . Limits and requests for memory and CPU for each broker in a deployment . During an active scaling event, any further changes that you apply are queued by the Operator and executed only when scaling is complete. For example, suppose that you scale the size of your deployment down from four brokers to one. Then, while scaledown is taking place, you also change the values of the broker administrator user name and password. In this case, the Operator queues the user name and password changes until the deployment is running with one active broker. All CR changes - apart from changing the size of your deployment, or changing the value of the expose attribute for acceptors, connectors, or the console - cause existing brokers to be restarted. If you have multiple brokers in your deployment, only one broker restarts at a time. 3.5. Changing the logging level for the Operator The default logging level for AMQ Broker Operator is info , which logs information and error messages. You can change the default logging level to increase or decrease the detail that is written to the Operator logs. If you use the OpenShift Container Platform command-line interface to install the Operator, you can set the new logging level in the Operator configuration file, operator.yaml , either before or after you install the Operator.
If you use Operator Hub, you can use the OpenShift Container Platform web console to set the logging level in the Operator subscription after you install the Operator. The other available logging levels for the Operator are: error Writes error messages only to the log. debug Write all messages to the log including debugging messages. Procedure Using the OpenShift Container Platform command-line interface: Log in as a cluster administrator. For example: USD oc login -u system:admin If the Operator is not installed, complete the following steps to change the logging level. In the deploy directory of the Operator archive that you downloaded and extracted, open the operator.yaml file. Change the value of the zap-log-level attribute to debug or error . For example: apiVersion: apps/v1 kind: Deployment metadata: labels: control-plane: controller-manager name: amq-broker-controller-manager spec: containers: - args: - --zap-log-level=error ... Save the operator.yaml file. Install the Operator. If the Operator is already installed, use the sed command to change the logging level in the deploy/operator.yaml file and redeploy the Operator. For example, the following command changes the logging level from info to error and redeploys the Operator: USD sed 's/--zap-log-level=info/--zap-log-level=error/' deploy/operator.yaml | oc apply -f - Using the OpenShift Container Platform web console: Log in to the OpenShift Container Platform as a cluster administrator. In the left pane, click Operators Installed Operators . Click the Red Hat Integration - AMQ Broker for RHEL 8 (Multiarch) Operator. Click the Subscriptions tab. Click Actions . Click Edit Subscription . Click the YAML tab. Within the console, a YAML editor opens, enabling you to edit the subscription. In the config element, add an environment variable called ARGS and specify a logging level of info , debug or error . In the following example, an ARGS environment variable that specifies a logging level of debug is passed to the Operator container. apiVersion: operators.coreos.com/v1alpha1 kind: Subscription spec: ... config: env: - name: ARGS value: "--zap-log-level=debug" ... Click Save. 3.6. Configuring leader election settings for the operator You can customize the settings used by the AMQ Broker operator for leader elections. If you use the OpenShift Container Platform command-line interface to install the operator, you can configure the leader elections settings in the operator configuration file, operator.yaml , either before or after installation. If you use OperatorHub, you can use the OpenShift Container Platform web console to configure the leader elections settings in the operator subscription after installation. Procedure Using the OpenShift Container Platform web console: Log in to the OpenShift Container Platform as a cluster administrator. In the left pane, click Operators Installed Operators . Click the Red Hat Integration - AMQ Broker for RHEL 8 (Multiarch) Operator. Click the Subscriptions tab. Click Actions . Click Edit Subscription . Click the YAML tab. Within the console, a YAML editor opens, enabling you to edit the subscription. In the config section, add an environment variable named ARGS and specify the leader election settings in the variable value. For example: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription spec: .. config: env: - name: ARGS value: "--lease-duration=18 --renew-deadline=12 --retry-period=3" Click Save . 
lease-duration The duration, in seconds, that a non-leader operator waits before it attempts to acquire the lease that was not renewed by the leader. The default is 15. renew-deadline The duration, in seconds, the operator waits between attempts to renew the leader role before it stops leading. The default is 10. retry-period The duration, in seconds, that the operator waits between attempts to acquire and renew the leader role. The default is 2. Using the OpenShift Container Platform command-line interface: Log in as a cluster administrator. For example: USD oc login -u system:admin In the deploy directory of the operator archive that you downloaded and extracted, open the operator.yaml file. Set the values of the leader election settings. For example: apiVersion: apps/v1 kind: Deployment ... template .. spec: containers: - args: - --lease-duration=60 - --renew-deadline=40 - --retry-period=5 .. Save the operator.yaml file. If the operator is already installed, apply the updated settings. USD oc apply -f deploy/operator.yaml If the operator is not installed, install the operator. 3.7. Viewing status information for your broker deployment You can view the status of a series of standard conditions reported by OpenShift Container Platform for your broker deployment. You can also view additional status information provided in the Custom Resource (CR) for your broker deployment. Procedure Open the CR instance for the broker deployment. Using the OpenShift command-line interface: Log in to OpenShift Container Platform as a user that has privileges to view CRs in the project for the broker deployment. View the CR for your deployment. Using the OpenShift Container Platform web console: Log in to the console as a user that has privileges to deploy CRs in the project for the broker deployment. In the left pane, click Operators Installed Operators. Click the Red Hat Integration - AMQ Broker for RHEL 8 (Multiarch) operator. Click the ActiveMQ Artemis tab. Click the name of the ActiveMQ Artemis instance. View the status of the OpenShift Container Platform conditions for your broker deployment. Using the OpenShift command-line interface: Go to the status section of the CR and view the conditions details. Using the OpenShift Container Platform web console: In the Details tab, scroll down to the Conditions section. A condition has a status and a type. It might also have a reason, a message, and other details. A condition has a status value of True if the condition is met, False if the condition is not met, or Unknown if the status of the condition cannot be determined. The Valid condition can also have a status of Unknown to flag an anomaly in the configuration that does not affect the broker deployment. For more information, see Section 2.8, "Validation of image and version configuration in a custom resource (CR)". Status information is provided for the following conditions: Table 3.1. Status information for a broker deployment Condition name Displays the status of... Valid The validation of the CR. If the status of the Valid condition is False, the Operator does not complete the reconciliation and update the StatefulSet until you first resolve the issue that caused the false status. Deployed The availability of the StatefulSet, Pods and other resources. Ready A top-level condition which summarizes the other more detailed conditions. The Ready condition has a status of True only if none of the other conditions have a status of False.
BrokerPropertiesApplied The properties configured in the CR that use the brokerProperties attribute. For more information about the BrokerPropertiesApplied condition, see Section 2.4, "Configuring items not exposed in a custom resource definition (CRD)". JaasPropertiesApplied The Java Authentication and Authorization Service (JAAS) login modules configured in the CR. For more information about the JaasPropertiesApplied condition, see Section 4.3.1, "Configuring JAAS login modules in a secret". View additional status information for your broker deployment in the status section of the CR. The following additional status information is displayed: deploymentPlanSize The number of broker Pods in the deployment. podstatus The status and name of each broker pod in the deployment. version The version of the broker and the registry URLs of the broker and init container images that are deployed. upgrade The ability of the Operator to apply major, minor, patch and security updates to the deployment, which is determined by the values of the spec.deploymentPlan.image and spec.version attributes in the CR. If the spec.deploymentPlan.image attribute specifies the registry URL of a broker container image, the status of all upgrade types is False, which means that the Operator cannot upgrade the existing container images. If the spec.deploymentPlan.image attribute is not in the CR or has a value of placeholder, the configuration of the spec.version attribute affects the upgrade status as follows: The status of securityUpdates is True, regardless of whether the spec.version attribute is configured and regardless of its value. The status of patchUpdates is True if the value of the spec.version attribute has only a major and a minor version, for example, '7.10', so the Operator can upgrade to the latest patch version of the container images. The status of minorUpdates is True if the value of the spec.version attribute has only a major version, for example, '7', so the Operator can upgrade to the latest minor and patch versions of the container images. The status of majorUpdates is True if the spec.version attribute is not in the CR, so any available upgrades can be deployed, including an upgrade from 7.x.x to 8.x.x, if this version is available. | [
"mkdir ~/broker/operator mv amq-broker-operator-7.12.3-ocp-install-examples-rhel8.zip ~/broker/operator",
"cd ~/broker/operator unzip amq-broker-operator-7.12.3-ocp-install-examples-rhel8.zip",
"cd amq-broker-operator-7.12.3-ocp-install-examples",
"oc login -u system:admin",
"oc new-project <project_name>",
"oc project <project_name>",
"oc create -f deploy/service_account.yaml",
"oc create -f deploy/role.yaml",
"metadata: name: amq-broker-operator-rolebinding subjects: kind: ServiceAccount name: amq-broker-controller-manager roleRef: kind: Role name: amq-broker-operator-role",
"oc create -f deploy/role_binding.yaml",
"oc create -f deploy/election_role.yaml",
"oc create -f deploy/election_role_binding.yaml",
"- name: WATCH_NAMESPACE value: \"*\" valueFrom: fieldRef: fieldPath: metadata.namespace",
"- name: WATCH_NAMESPACE value: \"namespace1, namespace2\"`.",
"Subjects: - kind: ServiceAccount name: amq-broker-controller-manager namespace: operator-project",
"oc create -f deploy/cluster_role.yaml",
"oc create -f deploy/cluster_role_binding.yaml",
"oc login -u system:admin",
"oc project <project_name>",
"cd ~/broker/operator/amq-broker-operator-7.12.3-ocp-install-examples",
"oc create -f deploy/crds/broker_activemqartemis_crd.yaml",
"oc create -f deploy/crds/broker_activemqartemisaddress_crd.yaml",
"oc create -f deploy/crds/broker_activemqartemisscaledown_crd.yaml",
"oc create -f deploy/crds/broker_activemqartemissecurity_crd.yaml",
"oc secrets link --for=pull default <secret_name> oc secrets link --for=pull deployer <secret_name> oc secrets link --for=pull builder <secret_name>",
"spec: template: spec: containers: #image: registry.redhat.io/amq7/amq-broker-rhel8-operator:7.10 image: registry.redhat.io/amq7/amq-broker-rhel8-operator@sha256:1fd01079ad519e1a47b886893a0635491759ace2f73eda7615a9c8c2f454ba89",
"oc create -f deploy/operator.yaml",
"{\"level\":\"info\",\"ts\":1553619035.8302743,\"logger\":\"kubebuilder.controller\",\"msg\":\"Starting Controller\",\"controller\":\"activemqartemisaddress-controller\"} {\"level\":\"info\",\"ts\":1553619035.830541,\"logger\":\"kubebuilder.controller\",\"msg\":\"Starting Controller\",\"controller\":\"activemqartemis-controller\"} {\"level\":\"info\",\"ts\":1553619035.9306898,\"logger\":\"kubebuilder.controller\",\"msg\":\"Starting workers\",\"controller\":\"activemqartemisaddress-controller\",\"worker count\":1} {\"level\":\"info\",\"ts\":1553619035.9311671,\"logger\":\"kubebuilder.controller\",\"msg\":\"Starting workers\",\"controller\":\"activemqartemis-controller\",\"worker count\":1}",
"login -u <user> -p <password> --server= <host:port>",
"apiVersion: broker.amq.io/v1beta1 kind: ActiveMQArtemis metadata: name: ex-aao spec: deploymentPlan: size: 1 image: placeholder requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true",
"oc project <project_name>",
"oc create -f <path/to/custom_resource_instance> .yaml",
"oc get pods -o wide NAME STATUS IP amq-broker-operator-54d996c Running 10.129.2.14 ex-aao-ss-0 Running 10.129.2.15",
"oc rsh ex-aao-ss-0",
"sh-4.2USD ./amq-broker/bin/artemis producer --url tcp://10.129.2.15:61616 --destination queue://demoQueue",
"Connection brokerURL = tcp://10.129.2.15:61616 Producer ActiveMQQueue[demoQueue], thread=0 Started to calculate elapsed time Producer ActiveMQQueue[demoQueue], thread=0 Produced: 1000 messages Producer ActiveMQQueue[demoQueue], thread=0 Elapsed time in second : 3 s Producer ActiveMQQueue[demoQueue], thread=0 Elapsed time in milli second : 3492 milli seconds",
"apiVersion: broker.amq.io/v1beta1 kind: ActiveMQArtemis metadata: name: ex-aao spec: deploymentPlan: size: 4 image: placeholder",
"oc login -u <user> -p <password> --server= <host:port>",
"oc project <project_name>",
"oc apply -f <path/to/custom_resource_instance> .yaml",
"targetConnector=ServerLocatorImpl (identity=(Cluster-connection-bridge::ClusterConnectionBridge@6f13fb88",
"oc login -u system:admin",
"apiVersion: apps/v1 kind: Deployment metadata: labels: control-plane: controller-manager name: amq-broker-controller-manager spec: containers: - args: - --zap-log-level=error",
"sed 's/--zap-log-level=info/--zap-log-level=error/' deploy/operator.yaml | oc apply -f -",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription spec: config: env: - name: ARGS value: \"--zap-log-level=debug\"",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription spec: .. config: env: - name: ARGS value: \"--lease-duration=18 --renew-deadline=12 --retry-period=3\"",
"oc login -u system:admin",
"apiVersion: apps/v1 kind: Deployment template .. spec: containers: - args: - --lease-duration=60 - --renew-deadline=40 - --retry-period=5 ..",
"oc apply -f deploy/operator.yaml",
"get ActiveMQArtemis < CR instance name > -n < namespace > -o yaml"
] | https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.12/html/deploying_amq_broker_on_openshift/deploying-broker-on-ocp-using-operator_broker-ocp |
25.3.2. Actions | 25.3.2. Actions Actions specify what is to be done with the messages filtered out by an already-defined selector. The following are some of the actions you can define in your rule: Saving syslog messages to log files The majority of actions specify to which log file a syslog message is saved. This is done by specifying a file path after your already-defined selector: FILTER PATH where FILTER stands for user-specified selector and PATH is a path of a target file. For instance, the following rule is comprised of a selector that selects all cron syslog messages and an action that saves them into the /var/log/cron.log log file: cron.* /var/log/cron.log By default, the log file is synchronized every time a syslog message is generated. Use a dash mark ( - ) as a prefix of the file path you specified to omit syncing: FILTER - PATH Note that you might lose information if the system terminates right after a write attempt. However, this setting can improve performance, especially if you run programs that produce very verbose log messages. Your specified file path can be either static or dynamic . Static files are represented by a fixed file path as shown in the example above. Dynamic file paths can differ according to the received message. Dynamic file paths are represented by a template and a question mark ( ? ) prefix: FILTER ? DynamicFile where DynamicFile is a name of a predefined template that modifies output paths. You can use the dash prefix ( - ) to disable syncing, also you can use multiple templates separated by a colon ( ; ). For more information on templates, see the section called "Generating Dynamic File Names" . If the file you specified is an existing terminal or /dev/console device, syslog messages are sent to standard output (using special terminal -handling) or your console (using special /dev/console -handling) when using the X Window System, respectively. Sending syslog messages over the network rsyslog allows you to send and receive syslog messages over the network. This feature allows you to administer syslog messages of multiple hosts on one machine. To forward syslog messages to a remote machine, use the following syntax: where: The at sign ( @ ) indicates that the syslog messages are forwarded to a host using the UDP protocol. To use the TCP protocol, use two at signs with no space between them ( @@ ). The optional z NUMBER setting enables zlib compression for syslog messages. The NUMBER attribute specifies the level of compression (from 1 - lowest to 9 - maximum). Compression gain is automatically checked by rsyslogd , messages are compressed only if there is any compression gain and messages below 60 bytes are never compressed. The HOST attribute specifies the host which receives the selected syslog messages. The PORT attribute specifies the host machine's port. When specifying an IPv6 address as the host, enclose the address in square brackets ( [ , ] ). Example 25.4. Sending syslog Messages over the Network The following are some examples of actions that forward syslog messages over the network (note that all actions are preceded with a selector that selects all messages with any priority). To forward messages to 192.168.0.1 via the UDP protocol, type: To forward messages to "example.com" using port 6514 and the TCP protocol, use: The following compresses messages with zlib (level 9 compression) and forwards them to 2001:db8::1 using the UDP protocol Output channels Output channels are primarily used to specify the maximum size a log file can grow to. 
This is very useful for log file rotation (for more information see Section 25.3.5, "Log Rotation" ). An output channel is basically a collection of information about the output action. Output channels are defined by the USDoutchannel directive. To define an output channel in /etc/rsyslog.conf , use the following syntax: where: The NAME attribute specifies the name of the output channel. The FILE_NAME attribute specifies the name of the output file. Output channels can write only into files, not pipes, terminal, or other kind of output. The MAX_SIZE attribute represents the maximum size the specified file (in FILE_NAME ) can grow to. This value is specified in bytes . The ACTION attribute specifies the action that is taken when the maximum size, defined in MAX_SIZE , is hit. To use the defined output channel as an action inside a rule, type: FILTER :omfile:USD NAME Example 25.5. Output channel log rotation The following output shows a simple log rotation through the use of an output channel. First, the output channel is defined via the USDoutchannel directive: and then it is used in a rule that selects every syslog message with any priority and executes the previously-defined output channel on the acquired syslog messages: Once the limit (in the example 100 MB ) is hit, the /home/joe/log_rotation_script is executed. This script can contain anything from moving the file into a different folder, editing specific content out of it, or simply removing it. Sending syslog messages to specific users rsyslog can send syslog messages to specific users by specifying a user name of the user you want to send the messages to (as in Example 25.7, "Specifying Multiple Actions" ). To specify more than one user, separate each user name with a comma ( , ). To send messages to every user that is currently logged on, use an asterisk ( * ). Executing a program rsyslog lets you execute a program for selected syslog messages and uses the system() call to execute the program in shell. To specify a program to be executed, prefix it with a caret character ( ^ ). Consequently, specify a template that formats the received message and passes it to the specified executable as a one line parameter (for more information on templates, see Section 25.3.3, "Templates" ). FILTER ^ EXECUTABLE ; TEMPLATE Here an output of the FILTER condition is processed by a program represented by EXECUTABLE . This program can be any valid executable. Replace TEMPLATE with the name of the formatting template. Example 25.6. Executing a Program In the following example, any syslog message with any priority is selected, formatted with the template template and passed as a parameter to the test-program program, which is then executed with the provided parameter: Warning When accepting messages from any host, and using the shell execute action, you may be vulnerable to command injection. An attacker may try to inject and execute commands in the program you specified to be executed in your action. To avoid any possible security threats, thoroughly consider the use of the shell execute action. Storing syslog messages in a database Selected syslog messages can be directly written into a database table using the database writer action. The database writer uses the following syntax: where: The PLUGIN calls the specified plug-in that handles the database writing (for example, the ommysql plug-in). The DB_HOST attribute specifies the database host name. The DB_NAME attribute specifies the name of the database. 
The DB_USER attribute specifies the database user. The DB_PASSWORD attribute specifies the password used with the aforementioned database user. The TEMPLATE attribute specifies an optional use of a template that modifies the syslog message. For more information on templates, see Section 25.3.3, "Templates" . Important Currently, rsyslog provides support for MySQL and PostgreSQL databases only. In order to use the MySQL and PostgreSQL database writer functionality, install the rsyslog-mysql and rsyslog-pgsql packages, respectively. Also, make sure you load the appropriate modules in your /etc/rsyslog.conf configuration file: For more information on rsyslog modules, see Section 25.7, "Using Rsyslog Modules" . Alternatively, you may use a generic database interface provided by the omlibdb module (supports: Firebird/Interbase, MS SQL, Sybase, SQLLite, Ingres, Oracle, mSQL). Discarding syslog messages To discard your selected messages, use the tilde character ( ~ ). FILTER ~ The discard action is mostly used to filter out messages before carrying on any further processing. It can be effective if you want to omit some repeating messages that would otherwise fill the log files. The results of discard action depend on where in the configuration file it is specified, for the best results place these actions on top of the actions list. Please note that once a message has been discarded there is no way to retrieve it in later configuration file lines. For instance, the following rule discards any cron syslog messages: Specifying Multiple Actions For each selector, you are allowed to specify multiple actions. To specify multiple actions for one selector, write each action on a separate line and precede it with an ampersand (&) character: Specifying multiple actions improves the overall performance of the desired outcome since the specified selector has to be evaluated only once. Example 25.7. Specifying Multiple Actions In the following example, all kernel syslog messages with the critical priority ( crit ) are sent to user user1 , processed by the template temp and passed on to the test-program executable, and forwarded to 192.168.0.1 via the UDP protocol. Any action can be followed by a template that formats the message. To specify a template, suffix an action with a semicolon ( ; ) and specify the name of the template. For more information on templates, see Section 25.3.3, "Templates" . Warning A template must be defined before it is used in an action, otherwise it is ignored. In other words, template definitions should always precede rule definitions in /etc/rsyslog.conf . | [
"@[ ( z NUMBER ) ] HOST :[ PORT ]",
"*.* @192.168.0.1",
"*.* @@example.com:6514",
"*.* @(z9)[2001:db8::1]",
"USDoutchannel NAME , FILE_NAME , MAX_SIZE , ACTION",
"USDoutchannel log_rotation, /var/log/test_log.log, 104857600, /home/joe/log_rotation_script",
"*.* :omfile:USDlog_rotation",
"*.* ^test-program;template",
": PLUGIN : DB_HOST , DB_NAME , DB_USER , DB_PASSWORD ;[ TEMPLATE ]",
"USDModLoad ommysql # Output module for MySQL support USDModLoad ompgsql # Output module for PostgreSQL support",
"cron.* ~",
"FILTER ACTION & ACTION & ACTION",
"kern.=crit user1 & ^test-program;temp & @192.168.0.1"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-Actions |
Appendix B. The LVM Configuration Files | Appendix B. The LVM Configuration Files LVM supports multiple configuration files. At system startup, the lvm.conf configuration file is loaded from the directory specified by the environment variable LVM_SYSTEM_DIR , which is set to /etc/lvm by default. The lvm.conf file can specify additional configuration files to load. Settings in later files override settings from earlier ones. To display the settings in use after loading all the configuration files, execute the lvmconfig command. For information on loading additional configuration files, see Section D.2, "Host Tags" . B.1. The LVM Configuration Files The following files are used for LVM configuration: /etc/lvm/lvm.conf Central configuration file read by the tools. etc/lvm/lvm_ hosttag .conf For each host tag, an extra configuration file is read if it exists: lvm_ hosttag .conf . If that file defines new tags, then further configuration files will be appended to the list of files to read in. For information on host tags, see Section D.2, "Host Tags" . In addition to the LVM configuration files, a system running LVM includes the following files that affect LVM system setup: /etc/lvm/cache/.cache Device name filter cache file (configurable). /etc/lvm/backup/ Directory for automatic volume group metadata backups (configurable). /etc/lvm/archive/ Directory for automatic volume group metadata archives (configurable with regard to directory path and archive history depth). /var/lock/lvm/ In single-host configuration, lock files to prevent parallel tool runs from corrupting the metadata; in a cluster, cluster-wide DLM is used. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/logical_volume_manager_administration/config_file |
10.2. Memory Policy Elements | 10.2. Memory Policy Elements The memory_policy element contains the following elements: Table 10.2. Memory policy elements Element Type Description Properties overcommit percent= complex The percentage of host memory allowed in use before no more virtual machines can start on a host. Virtual machines can use more than the available host memory due to memory sharing under KSM. Recommended values include 100 (None), 150 (Server Load) and 200 (Desktop Load). transparent_hugepages complex Define the enabled status of Transparent Hugepages. The status is either true or false. Check capabilities feature set to ensure your version supports transparent hugepages . | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/memory_policy_elements |
5.9.10. Logical Volume Management | 5.9.10. Logical Volume Management Red Hat Enterprise Linux includes support for LVM. LVM may be configured while Red Hat Enterprise Linux is installed, or it may be configured after the installation is complete. LVM under Red Hat Enterprise Linux supports physical storage grouping, logical volume resizing, and the migration of data off a specific physical volume. For more information on LVM, refer to the System Administrators Guide . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s2-storage-lvm |
3.3. Managing Groups via the User Manager Application | 3.3. Managing Groups via the User Manager Application 3.3.1. Viewing Groups In order to display the main window of User Manager to view groups, from the toolbar select Edit Preferences . If you want to view all the groups, clear the Hide system users and groups check box. The Groups tab provides a list of local groups with information about their group ID and group members as you can see in the picture below. Figure 3.3. Viewing Groups To find a specific group, type the first few letters of the name in the Search filter field and either press Enter , or click the Apply filter button. You can also sort the items according to any of the available columns by clicking the column header. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s1-groups-configui |
Chapter 2. CSIDriver [storage.k8s.io/v1] | Chapter 2. CSIDriver [storage.k8s.io/v1] Description CSIDriver captures information about a Container Storage Interface (CSI) volume driver deployed on the cluster. Kubernetes attach detach controller uses this object to determine whether attach is required. Kubelet uses this object to determine whether pod information needs to be passed on mount. CSIDriver objects are non-namespaced. Type object Required spec 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object metadata. metadata.Name indicates the name of the CSI driver that this object refers to; it MUST be the same name returned by the CSI GetPluginName() call for that driver. The driver name must be 63 characters or less, beginning and ending with an alphanumeric character ([a-z0-9A-Z]) with dashes (-), dots (.), and alphanumerics between. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object CSIDriverSpec is the specification of a CSIDriver. 2.1.1. .spec Description CSIDriverSpec is the specification of a CSIDriver. Type object Property Type Description attachRequired boolean attachRequired indicates this CSI volume driver requires an attach operation (because it implements the CSI ControllerPublishVolume() method), and that the Kubernetes attach detach controller should call the attach volume interface which checks the volumeattachment status and waits until the volume is attached before proceeding to mounting. The CSI external-attacher coordinates with CSI volume driver and updates the volumeattachment status when the attach operation is complete. If the CSIDriverRegistry feature gate is enabled and the value is specified to false, the attach operation will be skipped. Otherwise the attach operation will be called. This field is immutable. fsGroupPolicy string fsGroupPolicy defines if the underlying volume supports changing ownership and permission of the volume before being mounted. Refer to the specific FSGroupPolicy values for additional details. This field was immutable in Kubernetes < 1.29 and now is mutable. Defaults to ReadWriteOnceWithFSType, which will examine each volume to determine if Kubernetes should modify ownership and permissions of the volume. With the default policy the defined fsGroup will only be applied if a fstype is defined and the volume's access mode contains ReadWriteOnce. podInfoOnMount boolean podInfoOnMount indicates this CSI volume driver requires additional pod information (like podName, podUID, etc.) during mount operations, if set to true. If set to false, pod information will not be passed on mount. Default is false. The CSI driver specifies podInfoOnMount as part of driver deployment. If true, Kubelet will pass pod information as VolumeContext in the CSI NodePublishVolume() calls. 
The CSI driver is responsible for parsing and validating the information passed in as VolumeContext. The following VolumeContext will be passed if podInfoOnMount is set to true. This list might grow, but the prefix will be used. "csi.storage.k8s.io/pod.name": pod.Name "csi.storage.k8s.io/pod.namespace": pod.Namespace "csi.storage.k8s.io/pod.uid": string(pod.UID) "csi.storage.k8s.io/ephemeral": "true" if the volume is an ephemeral inline volume defined by a CSIVolumeSource, otherwise "false" "csi.storage.k8s.io/ephemeral" is a new feature in Kubernetes 1.16. It is only required for drivers which support both the "Persistent" and "Ephemeral" VolumeLifecycleMode. Other drivers can leave pod info disabled and/or ignore this field. As Kubernetes 1.15 doesn't support this field, drivers can only support one mode when deployed on such a cluster and the deployment determines which mode that is, for example via a command line parameter of the driver. This field was immutable in Kubernetes < 1.29 and now is mutable. requiresRepublish boolean requiresRepublish indicates the CSI driver wants NodePublishVolume being periodically called to reflect any possible change in the mounted volume. This field defaults to false. Note: After a successful initial NodePublishVolume call, subsequent calls to NodePublishVolume should only update the contents of the volume. New mount points will not be seen by a running container. seLinuxMount boolean seLinuxMount specifies if the CSI driver supports "-o context" mount option. When "true", the CSI driver must ensure that all volumes provided by this CSI driver can be mounted separately with different -o context options. This is typical for storage backends that provide volumes as filesystems on block devices or as independent shared volumes. Kubernetes will call NodeStage / NodePublish with "-o context=xyz" mount option when mounting a ReadWriteOncePod volume used in Pod that has explicitly set SELinux context. In the future, it may be expanded to other volume AccessModes. In any case, Kubernetes will ensure that the volume is mounted only with a single SELinux context. When "false", Kubernetes won't pass any special SELinux mount options to the driver. This is typical for volumes that represent subdirectories of a bigger shared filesystem. Default is "false". storageCapacity boolean storageCapacity indicates that the CSI volume driver wants pod scheduling to consider the storage capacity that the driver deployment will report by creating CSIStorageCapacity objects with capacity information, if set to true. The check can be enabled immediately when deploying a driver. In that case, provisioning new volumes with late binding will pause until the driver deployment has published some suitable CSIStorageCapacity object. Alternatively, the driver can be deployed with the field unset or false and it can be flipped later when storage capacity information has been published. This field was immutable in Kubernetes ⇐ 1.22 and now is mutable. tokenRequests array tokenRequests indicates the CSI driver needs pods' service account tokens it is mounting volume for to do necessary authentication. Kubelet will pass the tokens in VolumeContext in the CSI NodePublishVolume calls. The CSI driver should parse and validate the following VolumeContext: "csi.storage.k8s.io/serviceAccount.tokens": { "<audience>": { "token": <token>, "expirationTimestamp": <expiration timestamp in RFC3339>, }, ... } Note: Audience in each TokenRequest should be different and at most one token is empty string. 
To receive a new token after expiry, RequiresRepublish can be used to trigger NodePublishVolume periodically. tokenRequests[] object TokenRequest contains parameters of a service account token. volumeLifecycleModes array (string) volumeLifecycleModes defines what kind of volumes this CSI volume driver supports. The default if the list is empty is "Persistent", which is the usage defined by the CSI specification and implemented in Kubernetes via the usual PV/PVC mechanism. The other mode is "Ephemeral". In this mode, volumes are defined inline inside the pod spec with CSIVolumeSource and their lifecycle is tied to the lifecycle of that pod. A driver has to be aware of this because it is only going to get a NodePublishVolume call for such a volume. For more information about implementing this mode, see https://kubernetes-csi.github.io/docs/ephemeral-local-volumes.html A driver can support one or more of these modes and more modes may be added in the future. This field is beta. This field is immutable. 2.1.2. .spec.tokenRequests Description tokenRequests indicates the CSI driver needs pods' service account tokens it is mounting volume for to do necessary authentication. Kubelet will pass the tokens in VolumeContext in the CSI NodePublishVolume calls. The CSI driver should parse and validate the following VolumeContext: "csi.storage.k8s.io/serviceAccount.tokens": { "<audience>": { "token": <token>, "expirationTimestamp": <expiration timestamp in RFC3339>, }, ... } Note: Audience in each TokenRequest should be different and at most one token is empty string. To receive a new token after expiry, RequiresRepublish can be used to trigger NodePublishVolume periodically. Type array 2.1.3. .spec.tokenRequests[] Description TokenRequest contains parameters of a service account token. Type object Required audience Property Type Description audience string audience is the intended audience of the token in "TokenRequestSpec". It will default to the audiences of kube apiserver. expirationSeconds integer expirationSeconds is the duration of validity of the token in "TokenRequestSpec". It has the same default value of "ExpirationSeconds" in "TokenRequestSpec". 2.2. API endpoints The following API endpoints are available: /apis/storage.k8s.io/v1/csidrivers DELETE : delete collection of CSIDriver GET : list or watch objects of kind CSIDriver POST : create a CSIDriver /apis/storage.k8s.io/v1/watch/csidrivers GET : watch individual changes to a list of CSIDriver. deprecated: use the 'watch' parameter with a list operation instead. /apis/storage.k8s.io/v1/csidrivers/{name} DELETE : delete a CSIDriver GET : read the specified CSIDriver PATCH : partially update the specified CSIDriver PUT : replace the specified CSIDriver /apis/storage.k8s.io/v1/watch/csidrivers/{name} GET : watch changes to an object of kind CSIDriver. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 2.2.1. /apis/storage.k8s.io/v1/csidrivers HTTP method DELETE Description delete collection of CSIDriver Table 2.1. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 2.2. 
HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind CSIDriver Table 2.3. HTTP responses HTTP code Reponse body 200 - OK CSIDriverList schema 401 - Unauthorized Empty HTTP method POST Description create a CSIDriver Table 2.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.5. Body parameters Parameter Type Description body CSIDriver schema Table 2.6. HTTP responses HTTP code Reponse body 200 - OK CSIDriver schema 201 - Created CSIDriver schema 202 - Accepted CSIDriver schema 401 - Unauthorized Empty 2.2.2. /apis/storage.k8s.io/v1/watch/csidrivers HTTP method GET Description watch individual changes to a list of CSIDriver. deprecated: use the 'watch' parameter with a list operation instead. Table 2.7. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 2.2.3. /apis/storage.k8s.io/v1/csidrivers/{name} Table 2.8. Global path parameters Parameter Type Description name string name of the CSIDriver HTTP method DELETE Description delete a CSIDriver Table 2.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 2.10. HTTP responses HTTP code Reponse body 200 - OK CSIDriver schema 202 - Accepted CSIDriver schema 401 - Unauthorized Empty HTTP method GET Description read the specified CSIDriver Table 2.11. HTTP responses HTTP code Reponse body 200 - OK CSIDriver schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified CSIDriver Table 2.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.13. HTTP responses HTTP code Reponse body 200 - OK CSIDriver schema 201 - Created CSIDriver schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified CSIDriver Table 2.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.15. Body parameters Parameter Type Description body CSIDriver schema Table 2.16. HTTP responses HTTP code Reponse body 200 - OK CSIDriver schema 201 - Created CSIDriver schema 401 - Unauthorized Empty 2.2.4. /apis/storage.k8s.io/v1/watch/csidrivers/{name} Table 2.17. Global path parameters Parameter Type Description name string name of the CSIDriver HTTP method GET Description watch changes to an object of kind CSIDriver. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 2.18. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/storage_apis/csidriver-storage-k8s-io-v1 |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/managing_fuse_on_openshift/making-open-source-more-inclusive |
Chapter 7. Logging | Chapter 7. Logging 7.1. Enabling protocol logging The client can log AMQP protocol frames to the console. This data is often critical when diagnosing problems. To enable protocol logging, set the PN_TRACE_FRM environment variable to 1 : Example: Enabling protocol logging USD export PN_TRACE_FRM=1 USD <your-client-program> To disable protocol logging, unset the PN_TRACE_FRM environment variable. | [
"export PN_TRACE_FRM=1 <your-client-program>"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_the_amq_ruby_client/logging |
4.4. Managing Cluster Nodes | 4.4. Managing Cluster Nodes The following sections describe the commands you use to manage cluster nodes, including commands to start and stop cluster services and to add and remove cluster nodes. 4.4.1. Stopping Cluster Services The following command stops cluster services on the specified node or nodes. As with the pcs cluster start , the --all option stops cluster services on all nodes and if you do not specify any nodes, cluster services are stopped on the local node only. You can force a stop of cluster services on the local node with the following command, which performs a kill -9 command. 4.4.2. Enabling and Disabling Cluster Services Use the following command to configure the cluster services to run on startup on the specified node or nodes. If you specify the --all option, the command enables cluster services on all nodes. If you do not specify any nodes, cluster services are enabled on the local node only. Use the following command to configure the cluster services not to run on startup on the specified node or nodes. If you specify the --all option, the command disables cluster services on all nodes. If you do not specify any nodes, cluster services are disabled on the local node only. 4.4.3. Adding Cluster Nodes Note It is highly recommended that you add nodes to existing clusters only during a production maintenance window. This allows you to perform appropriate resource and deployment testing for the new node and its fencing configuration. Use the following procedure to add a new node to an existing cluster. In this example, the existing cluster nodes are clusternode-01.example.com , clusternode-02.example.com , and clusternode-03.example.com . The new node is newnode.example.com . On the new node to add to the cluster, perform the following tasks. Install the cluster packages. If the cluster uses SBD, the Booth ticket manager, or a quorum device, you must manually install the respective packages ( sbd , booth-site , corosync-qdevice ) on the new node as well. If you are running the firewalld daemon, execute the following commands to enable the ports that are required by the Red Hat High Availability Add-On. Set a password for the user ID hacluster . It is recommended that you use the same password for each node in the cluster. Execute the following commands to start the pcsd service and to enable pcsd at system start. On a node in the existing cluster, perform the following tasks. Authenticate user hacluster on the new cluster node. Add the new node to the existing cluster. This command also syncs the cluster configuration file corosync.conf to all nodes in the cluster, including the new node you are adding. On the new node to add to the cluster, perform the following tasks. Start and enable cluster services on the new node. Ensure that you configure and test a fencing device for the new cluster node. For information on configuring fencing devices, see Chapter 5, Fencing: Configuring STONITH . 4.4.4. Removing Cluster Nodes The following command shuts down the specified node and removes it from the cluster configuration file, corosync.conf , on all of the other nodes in the cluster. For information on removing all information about the cluster from the cluster nodes entirely, thereby destroying the cluster permanently, see Section 4.6, "Removing the Cluster Configuration" . 4.4.5. Standby Mode The following command puts the specified node into standby mode. The specified node is no longer able to host resources. 
Any resources currently active on the node will be moved to another node. If you specify the --all , this command puts all nodes into standby mode. You can use this command when updating a resource's packages. You can also use this command when testing a configuration, to simulate recovery without actually shutting down a node. The following command removes the specified node from standby mode. After running this command, the specified node is then able to host resources. If you specify the --all , this command removes all nodes from standby mode. Note that when you execute the pcs cluster standby command, this prevents resources from running on the indicated node. When you execute the pcs cluster unstandby command, this allows resources to run on the indicated node. This does not necessarily move the resources back to the indicated node; where the resources can run at that point depends on how you have configured your resources initially. For information on resource constraints, see Chapter 7, Resource Constraints . | [
"pcs cluster stop [--all] [ node ] [...]",
"pcs cluster kill",
"pcs cluster enable [--all] [ node ] [...]",
"pcs cluster disable [--all] [ node ] [...]",
"yum install -y pcs fence-agents-all",
"firewall-cmd --permanent --add-service=high-availability firewall-cmd --add-service=high-availability",
"passwd hacluster Changing password for user hacluster. New password: Retype new password: passwd: all authentication tokens updated successfully.",
"systemctl start pcsd.service systemctl enable pcsd.service",
"pcs cluster auth newnode.example.com Username: hacluster Password: newnode.example.com: Authorized",
"pcs cluster node add newnode.example.com",
"pcs cluster start Starting Cluster pcs cluster enable",
"pcs cluster node remove node",
"pcs cluster standby node | --all",
"pcs cluster unstandby node | --all"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/s1-clusternodemanage-HAAR |
Chapter 14. Managing guest virtual machines with virsh | Chapter 14. Managing guest virtual machines with virsh virsh is a command line interface tool for managing guest virtual machines and the hypervisor. The virsh command-line tool is built on the libvirt management API and operates as an alternative to the qemu-kvm command and the graphical virt-manager application. The virsh command can be used in read-only mode by unprivileged users or, with root access, full administration functionality. The virsh command is ideal for scripting virtualization administration. 14.1. Generic Commands The commands in this section are generic because they are not specific to any domain. 14.1.1. help USD virsh help [command|group] The help command can be used with or without options. When used without options, all commands are listed, one per line. When used with an option, it is grouped into categories, displaying the keyword for each group. To display the commands that are only for a specific option, you need to give the keyword for that group as an option. For example: Using the same command with a command option, gives the help information on that one specific command. For example: 14.1.2. quit and exit The quit command and the exit command will close the terminal. For example: 14.1.3. version The version command displays the current libvirt version and displays information about where the build is from. For example: 14.1.4. Argument Display The virsh echo [--shell][--xml][arg] command echos or displays the specified argument. Each argument echoed will be separated by a space. by using the --shell option, the output will be single quoted where needed so that it is suitable for reusing in a shell command. If the --xml option is used the output will be made suitable for use in an XML file. For example, the command virsh echo --shell "hello world" will send the output 'hello world' . 14.1.5. connect Connects to a hypervisor session. When the shell is first started this command runs automatically when the URI parameter is requested by the -c command. The URI specifies how to connect to the hypervisor. The most commonly used URIs are: xen:/// - connects to the local Xen hypervisor. qemu:///system - connects locally as root to the daemon supervising QEMU and KVM domains. xen:///session - connects locally as a user to the user's set of QEMU and KVM domains. lxc:/// - connects to a local Linux container. Additional values are available on libvirt's website http://libvirt.org/uri.html . The command can be run as follows: Where {name} is the machine name (host name) or URL (the output of the virsh uri command) of the hypervisor. To initiate a read-only connection, append the above command with --readonly . For more information on URIs refer to Remote URIs . If you are unsure of the URI, the virsh uri command will display it: 14.1.6. Displaying Basic Information The following commands may be used to display basic information: USD hostname - displays the hypervisor's host name USD sysinfo - displays the XML representation of the hypervisor's system information, if available 14.1.7. Injecting NMI The USD virsh inject-nmi [domain] injects NMI (non-maskable interrupt) message to the guest virtual machine. This is used when response time is critical, such as non-recoverable hardware errors. To run this command: | [
"virsh help pool Storage Pool (help keyword 'pool'): find-storage-pool-sources-as find potential storage pool sources find-storage-pool-sources discover potential storage pool sources pool-autostart autostart a pool pool-build build a pool pool-create-as create a pool from a set of args pool-create create a pool from an XML file pool-define-as define a pool from a set of args pool-define define (but don't start) a pool from an XML file pool-delete delete a pool pool-destroy destroy (stop) a pool pool-dumpxml pool information in XML pool-edit edit XML configuration for a storage pool pool-info storage pool information pool-list list pools pool-name convert a pool UUID to pool name pool-refresh refresh a pool pool-start start a (previously defined) inactive pool pool-undefine undefine an inactive pool pool-uuid convert a pool name to pool UUID",
"virsh help vol-path NAME vol-path - returns the volume path for a given volume name or key SYNOPSIS vol-path <vol> [--pool <string>] OPTIONS [--vol] <string> volume name or key --pool <string> pool name or uuid",
"virsh exit",
"virsh quit",
"virsh version Compiled against library: libvirt 1.1.1 Using library: libvirt 1.1.1 Using API: QEMU 1.1.1 Running hypervisor: QEMU 1.5.3",
"virsh connect {name|URI}",
"virsh uri qemu:///session",
"virsh inject-nmi guest-1"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/chap-virtualization_administration_guide-managing_guests_with_virsh |
9.18.2. Rescue Mode | 9.18.2. Rescue Mode Rescue mode provides the ability to boot a small Red Hat Enterprise Linux environment entirely from boot media or some other boot method instead of the system's hard drive. There may be times when you are unable to get Red Hat Enterprise Linux running completely enough to access files on your system's hard drive. Using rescue mode, you can access the files stored on your system's hard drive, even if you cannot actually run Red Hat Enterprise Linux from that hard drive. If you need to use rescue mode, try the following method: Boot an x86, AMD64, or Intel 64 system from any installation medium, such as CD, DVD, USB, or PXE, and type linux rescue at the installation boot prompt. Refer to Chapter 36, Basic System Recovery for a more complete description of rescue mode. For additional information, refer to the Red Hat Enterprise Linux Deployment Guide . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s2-x86-bootloader-rescue |
Chapter 4. Preparing for Installation | Chapter 4. Preparing for Installation 4.1. Preparing for a Network Installation Note Make sure no installation DVD (or any other type of DVD or CD) is in your system's CD or DVD drive if you are performing a network-based installation. Having a DVD or CD in the drive might cause unexpected errors. Ensure that you have boot media available on CD, DVD, or a USB storage device such as a flash drive. The Red Hat Enterprise Linux installation medium must be available for either a network installation (via NFS, FTP, HTTP, or HTTPS) or installation via local storage. Use the following steps if you are performing an NFS, FTP, HTTP, or HTTPS installation. The NFS, FTP, HTTP, or HTTPS server to be used for installation over the network must be a separate, network-accessible server. It must provide the complete contents of the installation DVD-ROM. Note anaconda has the ability to test the integrity of the installation media. It works with the DVD, hard drive ISO, and NFS ISO installation methods. We recommend that you test all installation media before starting the installation process, and before reporting any installation-related bugs (many of the bugs reported are actually due to improperly-burned DVDs). To use this test, type the following command at the boot: prompt: Note The public directory used to access the installation files over FTP, NFS, HTTP, or HTTPS is mapped to local storage on the network server. For example, the local directory /var/www/inst/rhel6.9 on the network server can be accessed as http://network.server.com/inst/rhel6.9 . In the following examples, the directory on the installation staging server that will contain the installation files will be specified as /location/of/disk/space . The directory that will be made publicly available via FTP, NFS, HTTP, or HTTPS will be specified as /publicly_available_directory . For example, /location/of/disk/space may be a directory you create called /var/isos . /publicly_available_directory might be /var/www/html/rhel6.9 , for an HTTP install. In the following, you will require an ISO image . An ISO image is a file containing an exact copy of the content of a DVD. To create an ISO image from a DVD use the following command: where dvd is your DVD drive device, name_of_image is the name you give to the resulting ISO image file, and path_to_image is the path to the location on your system where the resulting ISO image will be stored. To copy the files from the installation DVD to a Linux instance, which acts as an installation staging server, continue with either Section 4.1.1, "Preparing for FTP, HTTP, and HTTPS Installation" or Section 4.1.2, "Preparing for an NFS Installation" . 4.1.1. Preparing for FTP, HTTP, and HTTPS Installation Warning If your Apache web server or tftp FTP server configuration enables SSL security, make sure to only enable the TLSv1 protocol, and disable SSLv2 and SSLv3 . This is due to the POODLE SSL vulnerability (CVE-2014-3566). See https://access.redhat.com/solutions/1232413 for details about securing Apache , and https://access.redhat.com/solutions/1234773 for information about securing tftp . Extract the files from the ISO image of the installation DVD and place them in a directory that is shared over FTP, HTTP, or HTTPS. , make sure that the directory is shared via FTP, HTTP, or HTTPS, and verify client access. Test to see whether the directory is accessible from the server itself, and then from another machine on the same subnet to which you will be installing. 4.1.2. 
Preparing for an NFS Installation For NFS installation it is not necessary to extract all the files from the ISO image. It is sufficient to make the ISO image itself, the install.img file, and optionally the product.img file available on the network server via NFS. Transfer the ISO image to the NFS exported directory. On a Linux system, run: where path_to_image is the path to the ISO image file, name_of_image is the name of the ISO image file, and publicly_available_directory is a directory that is available over NFS or that you intend to make available over NFS. Use a SHA256 checksum program to verify that the ISO image that you copied is intact. Many SHA256 checksum programs are available for various operating systems. On a Linux system, run: where name_of_image is the name of the ISO image file. The SHA256 checksum program displays a string of 64 characters called a hash . Compare this hash to the hash displayed for this particular image on the Downloads page in the Red Hat Customer Portal (refer to Chapter 1, Obtaining Red Hat Enterprise Linux ). The two hashes should be identical. Copy the images/ directory from inside the ISO image to the same directory in which you stored the ISO image file itself. Enter the following commands: where path_to_image is the path to the ISO image file, name_of_image is the name of the ISO image file, and mount_point is a mount point on which to mount the image while you copy files from the image. For example: The ISO image file and an images/ directory are now present, side-by-side, in the same directory. Verify that the images/ directory contains at least the install.img file, without which installation cannot proceed. Optionally, the images/ directory should contain the product.img file, without which only the packages for a Minimal installation will be available during the package group selection stage (refer to Section 9.17, "Package Group Selection" ). Important install.img and product.img must be the only files in the images/ directory. Ensure that an entry for the publicly available directory exists in the /etc/exports file on the network server so that the directory is available via NFS. To export a directory read-only to a specific system, use: To export a directory read-only to all systems, use: On the network server, start the NFS daemon (on a Red Hat Enterprise Linux system, use /sbin/service nfs start ). If NFS is already running, reload the configuration file (on a Red Hat Enterprise Linux system use /sbin/service nfs reload ). Be sure to test the NFS share following the directions in the Red Hat Enterprise Linux Deployment Guide . Refer to your NFS documentation for details on starting and stopping the NFS server. Note anaconda has the ability to test the integrity of the installation media. It works with the DVD, hard drive ISO, and NFS ISO installation methods. We recommend that you test all installation media before starting the installation process, and before reporting any installation-related bugs (many of the bugs reported are actually due to improperly-burned DVDs). To use this test, type the following command at the boot: prompt: | [
"linux mediacheck",
"dd if=/dev/ dvd of=/ path_to_image / name_of_image .iso",
"mv / path_to_image / name_of_image .iso / publicly_available_directory /",
"sha256sum name_of_image .iso",
"mount -t iso9660 / path_to_image / name_of_image .iso / mount_point -o loop,ro cp -pr / mount_point /images / publicly_available_directory / umount / mount_point",
"mount -t iso9660 /var/isos/RHEL6.iso /mnt/tmp -o loop,ro cp -pr /mnt/tmp/images /var/isos/ umount /mnt/tmp",
"/publicly_available_directory client.ip.address (ro)",
"/publicly_available_directory * (ro)",
"linux mediacheck"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/ch-Preparing-x86 |
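The staging steps above (create the ISO, verify it, copy images/ out of it, export the directory) can be collected into one short script. This is a sketch only: the DVD device /dev/sr0, the /var/isos export directory, and the rhel6.iso file name are placeholders to adjust, and it assumes the RHEL 6 style service command for reloading NFS.

#!/bin/bash
# Sketch: stage a RHEL 6 ISO for an NFS network installation.
set -e

DVD_DEV=/dev/sr0            # DVD drive device (placeholder)
EXPORT_DIR=/var/isos        # directory that will be shared over NFS
ISO="$EXPORT_DIR/rhel6.iso" # name for the ISO image

mkdir -p "$EXPORT_DIR"

# 1. Create an ISO image from the installation DVD.
dd if="$DVD_DEV" of="$ISO"

# 2. Verify the image; compare the hash with the one on the Customer Portal.
sha256sum "$ISO"

# 3. Copy the images/ directory out of the ISO, next to the ISO itself.
MNT=$(mktemp -d)
mount -t iso9660 "$ISO" "$MNT" -o loop,ro
cp -pr "$MNT/images" "$EXPORT_DIR/"
umount "$MNT"
rmdir "$MNT"

# 4. Export the directory read-only to all systems and reload NFS.
grep -q "^$EXPORT_DIR " /etc/exports || echo "$EXPORT_DIR *(ro)" >> /etc/exports
service nfs reload

Run it as root on the network server, then test the share from another machine on the same subnet as described above.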
Cluster APIs | Cluster APIs OpenShift Container Platform 4.18 Reference guide for cluster APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html-single/cluster_apis/index |
7.2. Raising the Domain Level | 7.2. Raising the Domain Level Important This is a non-reversible operation. If you raise the domain level from 0 to 1 , you cannot downgrade from 1 to 0 again. Command Line: Raising the Domain Level Log in as the administrator: Run the ipa domainlevel-set command and provide the required level: Web UI: Raising the Domain Level Select IPA Server Topology Domain Level . Click Set Domain Level . | [
"kinit admin",
"ipa domainlevel-set 1 ----------------------- Current domain level: 1 -----------------------"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/domain-level-set |
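Because raising the domain level cannot be reversed, it is worth checking the current value before changing it. The following sketch assumes the companion ipa domainlevel-get command is available on your server and simply wraps the two commands shown above with a confirmation prompt.

#!/bin/bash
# Sketch: check the current IdM domain level and raise it to 1 if confirmed.
# Raising the level is irreversible, hence the prompt.
set -e

kinit admin                     # authenticate as the IdM administrator

current=$(ipa domainlevel-get | grep -o '[0-9]\+' | head -n 1)
echo "Current domain level: $current"

if [ "$current" -lt 1 ]; then
    read -r -p "Raise domain level to 1? This cannot be undone [y/N] " answer
    if [ "$answer" = "y" ]; then
        ipa domainlevel-set 1
    fi
fi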
Security and Hardening Guide | Security and Hardening Guide Red Hat OpenStack Platform 17.0 Good Practices, Compliance, and Security Hardening OpenStack Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/security_and_hardening_guide/index |
Chapter 2. New features | Chapter 2. New features Cryostat 2.1 introduces new features that enhance your use of the Cryostat product. Automated rules user console (UI) Cryostat 2.1 includes a user console (UI) for the automated rules API. The UI includes the following key features: A form view that simplifies user interaction with the API by supporting typed inputs. A match expression wizard, where you can create custom match expressions that target specific applications. A JSON format view for the selected target application, so that you can view high-level application information. This information forms an integral part of the creation of match expressions. A color-coded response system that indicates whether your expression matches the selected target application. Note The match expression wizard includes an example of a custom match expression that you can reference if you need to familiarize yourself with the custom match expression syntax. Attachment of metadata and labels to JFR recordings When you create a JFR recording on Cryostat 2.1, you can add metadata with key-value label pairs to the recording. Additionally, you can attach custom labels to JFR recordings that are inside a target JVM, so that you can easily identify and better manage your JFR recordings. Use cases for metadata and labels include running queries or batch operations on recordings. You can navigate to the Recordings menu on the Cryostat web console and edit the label and its metadata of your JFR recording. You can also edit the label and metadata for a JFR recording that you uploaded to archives. Control of client-side notifications Cryostat 2.1 broadcasts notifications for all actions and state changes that can occur, resulting in a greater number of notifications appearing in the Cryostat web client. As a result, the Settings page has a control for client-side notifications. Cryostat 2.1 users can enable or disable notifications by category and bulk enable or disable all graphical notifications. The Cryostat backend still sends the notification messages, and the web client receives them. Disabling notifications prevents the messages from appearing on the console. Enabling the notifications again allows you to read any notifications. Custom target resource definition You can now create a custom target resource definition for your Cryostat instance. This allows the Cryostat Operator to connect to target applications by using a JMX protocol other than the Cryostat default protocol. The default protocol is typically JMX-RMI. The custom target resource definition consists of a YAML file, where you can specify any of the following attributes for the definition: alias , which sets an optional name for the resource definition. annotation.cryostat , which defines optional annotations for the definition. An automated rule can use these annotations to apply a rule to a target JVM. connectUrl , which specifies a target URL, such as a JMX service URL or a host:port pair, that Cryostat must use when it opens a JMX connection to a target JVM application. This attribute is mandatory. When you create a custom target object, the Cryostat Operator uses a RESTful API endpoint, POST /api/v2/targets , for the object. After the object is created, the object's connectUrl can be used as a targetId URL parameter in the REST HTTP API. You can use TargetDeleteHandler to remove a custom target resource definition from the Cryostat Operator. 
This handler reads a DELETE /api/v2/targets/:connectUrl endpoint request and attempts to remove the definition from the Cryostat Operator. Both TargetsPostHandler and TargetDeleteHandler include coded error messages that provide detailed error messages if a handler cannot handle a request. Environment variables for the Cryostat Operator Cryostat 2.1 includes the following environment variables that you can set to change the behavior of the Cryostat Operator: CRYOSTAT_REPORT_GENERATION_MAX_HEAP : Defaults to 200 MiB. Sets the maximum heap size that is used by the container subprocess to generate an automated rules analysis report. CRYOSTAT_MAX_WS_CONNECTIONS : Defaults to unlimited . Sets the maximum number of WebSocket client connections that your Cryostat application can support. CRYOSTAT_TARGET_CACHE_SIZE : Default to -1 , which indicates an unlimited caching. Sets the maximum number of JMX connections that the OpenShift Operator can cache to your Cryostat application. CRYOSTAT_TARGET_CACHE_TTL : Defaults to 10 , which indicates the amount of time in seconds that a JMX connection remains cached in your Cryostat instance's memory. JMC Agent plug-in support Cryostat 2.1 supports the JMC Agent Plugin by using a set of API handlers for managing probe templates. After you have installed a JMC Agent application and then built it to produce a JAR file, you can access the agent functionality for your Cryostat application by using the JMC Agent Plugin. This plug-in provides JMC Agent functionality to your Cryostat instance, such as adding JDK Flight Recorder (JFR) functionality to a running application. Red Hat OpenShift authentication for Cryostat 2.1 Cryostat 2.1 integrates the Red Hat OpenShift built-in OAuth server into its framework. When enabled, users can log in to Cryostat by using their Red Hat OpenShift username and password. This integrated capability provides a better mechanism than that offered in Cryostat 2.0, where you had to manually copy your Red Hat OpenShift authorization token from the Red Hat OpenShift web console and then paste the token's details in the Cryostat Application URL section on the console. Additionally, you can limit access to Cryostat features by using role-based access control (RBAC) roles that were assigned in Red Hat OpenShift. The Cryostat 2.1 release includes the following keys for the GET /health response object: DATASOURCE_CONFIGURED DASHBOARD_CONFIGURED REPORTS_CONFIGURED REPORTS_AVAILABLE Red Hat OpenShift credentials After you log in to the Cryostat 2.1 web console from the Red Hat OpenShift web console, the Cryostat Operator temporarily stores your username and password credentials on your Red Hat OpenShift account for the duration of the session. This prevents your Cryostat web console session from ending before you log out of the Cryostat web console. Manage Java Management Extensions credentials in Cryostat 2.1 You can store and manage the Java Management Extension (JMX) credentials that are used to authenticate to containerized Java Virtual Machines (JVM). This feature is useful when you want Cryostat to remember and reuse your credentials for multiple JVMs. When you add JMX credentials to Cryostat, you cannot view the credentials anymore. This keeps your credentials secure, because your credentials do not remain visible after you enter them on the Cryostat web console. If you want to replace the credentials, you must delete the credentials and add them again. 
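The custom target endpoints described above can be exercised directly with curl. Everything below other than the endpoint paths is an assumption: the host name, the bearer token, the example JMX service URL, and the form-encoded body with connectUrl and alias fields are placeholders, so check the Cryostat API reference for the exact request format your version expects.

# Sketch: register and later remove a custom target through the Cryostat API.

CRYOSTAT_URL=https://cryostat.example.com                    # placeholder host
TOKEN=changeme                                               # placeholder auth token
JMX_URL='service:jmx:rmi:///jndi/rmi://myapp:9091/jmxrmi'    # example JMX service URL

# Create the custom target definition (POST /api/v2/targets).
curl -k -X POST \
     -H "Authorization: Bearer $TOKEN" \
     -F "connectUrl=$JMX_URL" \
     -F "alias=myapp" \
     "$CRYOSTAT_URL/api/v2/targets"

# Remove it again (DELETE /api/v2/targets/:connectUrl); the connectUrl must be
# URL-encoded before it is placed in the path.
ENCODED=$(python3 -c 'import urllib.parse,sys; print(urllib.parse.quote(sys.argv[1], safe=""))' "$JMX_URL")
curl -k -X DELETE \
     -H "Authorization: Bearer $TOKEN" \
     "$CRYOSTAT_URL/api/v2/targets/$ENCODED"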
New automated rules environment variables Before Cryostat 2.1, JMX connections for automated rules would close if these JMX connections were previously cached to your Cryostat instance's memory. This issue occurred because of the behavior of the CRYOSTAT_TARGET_CACHE_MAX_CONNECTIONS environment variable. The JMX cache component for Cryostat 2.1 now uses the CRYOSTAT_TARGET_CACHE_SIZE environment variable instead of the CRYOSTAT_TARGET_CACHE_MAX_CONNECTIONS environment variable. This means that any JMX connections that you open do not get automatically cached by automated rules to your Cryostat instance's memory. This prevents automated rules from overfilling the cache storage space and causing in-use JMX connections to close, which can lead to degradation of performance with increased latency and response times. The CRYOSTAT_TARGET_CACHE_SIZE environment variable specifies the maximum number of JMX connections to cache in your Cryostat instance's memory. You can specify the following values for this environment variable: < 0: default value is -1 . Values less than 0 indicate an unlimited cache size. This means that JMX connections only get removed from memory when they reach an inactivity limit. 0: A value of zero indicates JMX connections are immediately removed from memory when these connections are closed. > 0: Values greater than 0 indicates that a Cryostat instance can cache a set number of JMX connections in its memory. If a new connection is created when the cache amount has reached its level, the oldest JMX connection is closed and removed from memory to facilitate storage of the new connection. Automated rules can re-use any previously cached JMX connections. If no JMX connection exists then the Cryostat Operator creates a new JMX connection for the automated rule. This connection does not get cached to memory. Resource requirements By default, the Cryostat Operator deploys your Cryostat application without specifying any resource requests or limits for each of the three containers that operate in your Cryostat instance's main pod on Red Hat OpenShift. Cryostat 2.1 includes a capability where you can use a Cryostat custom resource (CR) to specify resource requests or limits for each of the following three containers: core , which runs the Cryostat backend service and the web application. datasource , which runs the JFR data source that converts your JFR recordings into a file format that is supported by Grafana. grafana , which runs the Grafana instance that is associated with your Cryostat application. Sidecar report container With Cryostat 2.1, you can use the sidecar report container to generate automated analysis reports for JDK flight recordings (JFR). Before Cryostat 2.1, you had to rely on the main Cryostat container to generate analysis reports. This approach is resource intensive and could impact the performance of running your Cryostat application because you might need to provision additional resources for the main Cryostat container. By generating analysis reports in the sidecar report container, you can efficiently use the Cryostat Operator to provision resources for your Cryostat application. This provides your Cryostat container with a lower resource footprint, because the Cryostat Operator that interacts with the container can focus on running low-overhead operations over HTTP and JMC connections. Additionally, you can duplicate a sidecar report container and then configure this duplicate to meet your needs. 
| null | https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/2/html/release_notes_for_the_red_hat_build_of_cryostat_2.1/cryostat-new-features-2-1_cryostat |
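As an illustration of how the Cryostat Operator environment variables above fit together, the following shell sketch sets a few of them on a deployment named cryostat with oc set env. The deployment name is an assumption, and an operator may reconcile manually set variables back to its own defaults, so treat this as a way to see the variables' shape rather than a recommended configuration path.

# Cache up to 32 JMX connections, each kept for 60 seconds of inactivity.
oc set env deployment/cryostat \
    CRYOSTAT_TARGET_CACHE_SIZE=32 \
    CRYOSTAT_TARGET_CACHE_TTL=60

# Cap concurrent WebSocket clients and raise the report subprocess heap (MiB).
oc set env deployment/cryostat \
    CRYOSTAT_MAX_WS_CONNECTIONS=10 \
    CRYOSTAT_REPORT_GENERATION_MAX_HEAP=512

# Confirm what the container now sees.
oc set env deployment/cryostat --list | grep CRYOSTAT_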
Chapter 2. Installation | Chapter 2. Installation This chapter describes in detail how to get access to the content set, install Red Hat Software Collections 3.1 on the system, and rebuild Red Hat Software Collections. 2.1. Getting Access to Red Hat Software Collections The Red Hat Software Collections content set is available to customers with Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7 subscriptions listed at https://access.redhat.com/solutions/472793 . For information on how to register your system with Red Hat Subscription Management (RHSM), see Using and Configuring Red Hat Subscription Manager . For detailed instructions on how to enable Red Hat Software Collections using RHSM, see Section 2.1.1, "Using Red Hat Subscription Management" . Since Red Hat Software Collections 2.2, the Red Hat Software Collections and Red Hat Developer Toolset content is available also in the ISO format at https://access.redhat.com/downloads , specifically for Server and Workstation . Note that packages that require the Optional channel, which are listed in Section 2.1.2, "Packages from the Optional Channel" , cannot be installed from the ISO image. Note Packages that require the Optional channel cannot be installed from the ISO image. A list of packages that require enabling of the Optional channel is provided in Section 2.1.2, "Packages from the Optional Channel" . Beta content is unavailable in the ISO format. 2.1.1. Using Red Hat Subscription Management If your system is registered with Red Hat Subscription Management, complete the following steps to attach the subscription that provides access to the repository for Red Hat Software Collections and enable the repository: Display a list of all subscriptions that are available for your system and determine the pool ID of a subscription that provides Red Hat Software Collections. To do so, type the following at a shell prompt as root : subscription-manager list --available For each available subscription, this command displays its name, unique identifier, expiration date, and other details related to it. The pool ID is listed on a line beginning with Pool Id . Attach the appropriate subscription to your system by running the following command as root : subscription-manager attach --pool= pool_id Replace pool_id with the pool ID you determined in the step. To verify the list of subscriptions your system has currently attached, type as root : subscription-manager list --consumed Display the list of available Yum list repositories to retrieve repository metadata and determine the exact name of the Red Hat Software Collections repositories. As root , type: subscription-manager repos --list Or alternatively, run yum repolist all for a brief list. The repository names depend on the specific version of Red Hat Enterprise Linux you are using and are in the following format: Replace variant with the Red Hat Enterprise Linux system variant, that is, server or workstation . Note that Red Hat Software Collections is supported neither on the Client nor on the ComputeNode variant. Enable the appropriate repository by running the following command as root : subscription-manager repos --enable repository Once the subscription is attached to the system, you can install Red Hat Software Collections as described in Section 2.2, "Installing Red Hat Software Collections" . For more information on how to register your system using Red Hat Subscription Management and associate it with subscriptions, see Using and Configuring Red Hat Subscription Manager . 
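Put together, the Red Hat Subscription Management steps above look like the following shell session. The pool ID and the repository name are placeholders; take the real values from the output of the list commands on your system, and run everything as root.

subscription-manager list --available          # find a pool that provides RHSCL; note its Pool Id
subscription-manager attach --pool=POOL_ID     # attach that subscription
subscription-manager list --consumed           # confirm it is attached

# Locate and enable the RHSCL repository, here for a RHEL 7 server variant.
subscription-manager repos --list | grep rhscl
subscription-manager repos --enable rhel-server-rhscl-7-rpms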
Note Subscription through RHN is no longer available. 2.1.2. Packages from the Optional Channel Some of the Red Hat Software Collections 3.1 packages require the Optional channel to be enabled in order to complete the full installation of these packages. For detailed instructions on how to subscribe your system to this channel, see the relevant Knowledgebase articles at https://access.redhat.com/solutions/392003 for Red Hat Subscription Management or at https://access.redhat.com/solutions/70019 if your system is registered with RHN Classic. Packages from Software Collections for Red Hat Enterprise Linux 6 that require the Optional channel to be enabled are listed in the following table. Table 2.1. Packages That Require Enabling of the Optional Channel in Red Hat Enterprise Linux 6 Package from a Software Collection Required Package from the Optional Channel devtoolset-6-dyninst-testsuite glibc-static devtoolset-7-dyninst-testsuite glibc-static rh-git29-git-all cvsps, perl-Net-SMTP-SSL rh-git29-git-cvs cvsps rh-git29-git-email perl-Net-SMTP-SSL rh-git29-perl-Git-SVN perl-YAML, subversion-perl rh-mariadb101-boost-devel libicu-devel rh-mariadb101-boost-examples libicu-devel rh-mariadb101-boost-static libicu-devel rh-mongodb30upg-boost-devel libicu-devel rh-mongodb30upg-boost-examples libicu-devel rh-mongodb30upg-boost-static libicu-devel rh-mongodb30upg-yaml-cpp-devel libicu-devel rh-mongodb32-boost-devel libicu-devel rh-mongodb32-boost-examples libicu-devel rh-mongodb32-boost-static libicu-devel rh-mongodb32-yaml-cpp-devel libicu-devel rh-mongodb34-boost-devel libicu-devel rh-mongodb34-boost-examples libicu-devel rh-mongodb34-boost-static libicu-devel rh-mongodb34-yaml-cpp-devel libicu-devel rh-php56-php-imap libc-client rh-php56-php-recode recode rh-php70-php-imap libc-client rh-php70-php-recode recode Software Collections packages that require the Optional channel in Red Hat Enterprise Linux 7 are listed in the table below. Table 2.2. Packages That Require Enabling of the Optional Channel in Red Hat Enterprise Linux 7 Package from a Software Collection Required Package from the Optional Channel devtoolset-7-dyninst-testsuite glibc-static devtoolset-7-gcc-plugin-devel libmpc-devel httpd24-mod_ldap apr-util-ldap rh-eclipse46 ruby-doc rh-eclipse46-eclipse-dltk-ruby ruby-doc rh-eclipse46-eclipse-dltk-sdk ruby-doc rh-eclipse46-eclipse-dltk-tests ruby-doc rh-git29-git-all cvsps rh-git29-git-cvs cvsps rh-git29-perl-Git-SVN subversion-perl rh-perl520-perl-Pod-Perldoc groff Note that packages from the Optional channel are not supported. For details, see the Knowledgebase article at https://access.redhat.com/articles/1150793 . 2.2. Installing Red Hat Software Collections Red Hat Software Collections is distributed as a collection of RPM packages that can be installed, updated, and uninstalled by using the standard package management tools included in Red Hat Enterprise Linux. Note that a valid subscription is required to install Red Hat Software Collections on your system. For detailed instructions on how to associate your system with an appropriate subscription and get access to Red Hat Software Collections, see Section 2.1, "Getting Access to Red Hat Software Collections" . Use of Red Hat Software Collections 3.1 requires the removal of any earlier pre-release versions, including Beta releases. 
If you have installed any version of Red Hat Software Collections 3.1, uninstall it from your system and install the new version as described in the Section 2.3, "Uninstalling Red Hat Software Collections" and Section 2.2.1, "Installing Individual Software Collections" sections. The in-place upgrade from Red Hat Enterprise Linux 6 to Red Hat Enterprise Linux 7 is not supported by Red Hat Software Collections. As a consequence, the installed Software Collections might not work correctly after the upgrade. If you want to upgrade from Red Hat Enterprise Linux 6 to Red Hat Enterprise Linux 7, it is strongly recommended to remove all Red Hat Software Collections packages, perform the in-place upgrade, update the Red Hat Software Collections repository, and install the Software Collections packages again. It is advisable to back up all data before upgrading. 2.2.1. Installing Individual Software Collections To install any of the Software Collections that are listed in Table 1.1, "Red Hat Software Collections 3.1 Components" , install the corresponding meta package by typing the following at a shell prompt as root : yum install software_collection ... Replace software_collection with a space-separated list of Software Collections you want to install. For example, to install php54 and rh-mariadb100 , type as root : This installs the main meta package for the selected Software Collection and a set of required packages as its dependencies. For information on how to install additional packages such as additional modules, see Section 2.2.2, "Installing Optional Packages" . 2.2.2. Installing Optional Packages Each component of Red Hat Software Collections is distributed with a number of optional packages that are not installed by default. To list all packages that are part of a certain Software Collection but are not installed on your system, type the following at a shell prompt: yum list available software_collection -\* To install any of these optional packages, type as root : yum install package_name ... Replace package_name with a space-separated list of packages that you want to install. For example, to install the rh-perl520-perl-CPAN and rh-perl520-perl-Archive-Tar , type: 2.2.3. Installing Debugging Information To install debugging information for any of the Red Hat Software Collections packages, make sure that the yum-utils package is installed and type the following command as root : debuginfo-install package_name For example, to install debugging information for the rh-ruby22-ruby package, type: Note that you need to have access to the repository with these packages. If your system is registered with Red Hat Subscription Management, enable the rhel- variant -rhscl-6-debug-rpms or rhel- variant -rhscl-7-debug-rpms repository as described in Section 2.1.1, "Using Red Hat Subscription Management" . For more information on how to get access to debuginfo packages, see https://access.redhat.com/solutions/9907 . 2.3. Uninstalling Red Hat Software Collections To uninstall any of the Software Collections components, type the following at a shell prompt as root : yum remove software_collection \* Replace software_collection with the Software Collection component you want to uninstall. Note that uninstallation of the packages provided by Red Hat Software Collections does not affect the Red Hat Enterprise Linux system versions of these tools. 2.4. Rebuilding Red Hat Software Collections <collection>-build packages are not provided by default. 
If you wish to rebuild a collection and do not want or cannot use the rpmbuild --define 'scl foo' command, you first need to rebuild the metapackage, which provides the <collection>-build package. Note that existing collections should not be rebuilt with different content. To add new packages into an existing collection, you need to create a new collection containing the new packages and make it dependent on packages from the original collection. The original collection has to be used without changes. For detailed information on building Software Collections, refer to the Red Hat Software Collections Packaging Guide . | [
"rhel- variant -rhscl-6-rpms rhel- variant -rhscl-6-debug-rpms rhel- variant -rhscl-6-source-rpms rhel-server-rhscl-6-eus-rpms rhel-server-rhscl-6-eus-source-rpms rhel-server-rhscl-6-eus-debug-rpms rhel- variant -rhscl-7-rpms rhel- variant -rhscl-7-debug-rpms rhel- variant -rhscl-7-source-rpms rhel-server-rhscl-7-eus-rpms rhel-server-rhscl-7-eus-source-rpms rhel-server-rhscl-7-eus-debug-rpms",
"~]# yum install rh-php56 rh-mariadb100",
"~]# yum install rh-perl524-perl-CPAN rh-perl524-perl-Archive-Tar",
"~]# debuginfo-install rh-ruby22-ruby"
] | https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.1_release_notes/chap-Installation |
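A typical installation session for a single collection, following the yum commands described above, might look like this. rh-mariadb102 stands in for any component from Table 1.1, and the optional and debuginfo package names are illustrative only; run the commands as root.

yum install rh-mariadb102                 # main meta package plus required dependencies

yum list available rh-mariadb102-\*       # list the optional packages that are not installed
yum install rh-mariadb102-mariadb-devel   # example optional package; the name may differ

# Debugging information (requires yum-utils and the matching -debug-rpms repository).
yum install yum-utils
debuginfo-install rh-mariadb102-mariadb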
Appendix B. GFS2 Tracepoints and the debugfs glocks File | Appendix B. GFS2 Tracepoints and the debugfs glocks File This appendix describes both the glock debugfs interface and the GFS2 tracepoints. It is intended for advanced users who are familiar with file system internals who would like to learn more about the design of GFS2 and how to debug GFS2-specific issues. B.1. GFS2 Tracepoint Types There are currently three types of GFS2 tracepoints: glock (pronounced "gee-lock") tracepoints, bmap tracepoints and log tracepoints. These can be used to monitor a running GFS2 file system and give additional information to that which can be obtained with the debugging options supported in releases of Red Hat Enterprise Linux. Tracepoints are particularly useful when a problem, such as a hang or performance issue, is reproducible and thus the tracepoint output can be obtained during the problematic operation. In GFS2, glocks are the primary cache control mechanism and they are the key to understanding the performance of the core of GFS2. The bmap (block map) tracepoints can be used to monitor block allocations and block mapping (lookup of already allocated blocks in the on-disk metadata tree) as they happen and check for any issues relating to locality of access. The log tracepoints keep track of the data being written to and released from the journal and can provide useful information on that part of GFS2. The tracepoints are designed to be as generic as possible. This should mean that it will not be necessary to change the API during the course of Red Hat Enterprise Linux 7. On the other hand, users of this interface should be aware that this is a debugging interface and not part of the normal Red Hat Enterprise Linux 7 API set, and as such Red Hat makes no guarantees that changes in the GFS2 tracepoints interface will not occur. Tracepoints are a generic feature of Red Hat Enterprise Linux 7 and their scope goes well beyond GFS2. In particular they are used to implement the blktrace infrastructure and the blktrace tracepoints can be used in combination with those of GFS2 to gain a fuller picture of the system performance. Due to the level at which the tracepoints operate, they can produce large volumes of data in a very short period of time. They are designed to put a minimum load on the system when they are enabled, but it is inevitable that they will have some effect. Filtering events by a variety of means can help reduce the volume of data and help focus on obtaining just the information which is useful for understanding any particular situation. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/global_file_system_2/gfs2_tracepoints |
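A common way to use these tracepoints is to enable them through the standard tracing hierarchy while reproducing the problem. The paths below assume debugfs is mounted at /sys/kernel/debug and that a GFS2 file system is mounted (so the gfs2 event directory and glocks files exist); they are the usual locations rather than something taken from this appendix, so verify them on your system and run the sketch as root.

#!/bin/bash
# Sketch: capture GFS2 tracepoint output while a problem is reproduced.
TRACE=/sys/kernel/debug/tracing

# Make sure debugfs is mounted, then see which gfs2 tracepoints the kernel offers.
mount | grep -q debugfs || mount -t debugfs none /sys/kernel/debug
ls "$TRACE/events/gfs2/"

echo 1 > "$TRACE/events/gfs2/enable"               # enable all gfs2 tracepoints
cat "$TRACE/trace_pipe" > /tmp/gfs2-trace.txt &    # stream events to a file
CAT_PID=$!

# ... reproduce the hang or performance problem here ...

echo 0 > "$TRACE/events/gfs2/enable"               # stop tracing
kill "$CAT_PID"

# The glock state can be inspected alongside the trace data.
cat /sys/kernel/debug/gfs2/*/glocks | head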
Chapter 6. Installing a cluster on AWS in a restricted network | Chapter 6. Installing a cluster on AWS in a restricted network In OpenShift Container Platform version 4.15, you can install a cluster on Amazon Web Services (AWS) in a restricted network by creating an internal mirror of the installation release content on an existing Amazon Virtual Private Cloud (VPC). 6.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You mirrored the images for a disconnected installation to your registry and obtained the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. You have an existing VPC in AWS. When installing to a restricted network using installer-provisioned infrastructure, you cannot use the installer-provisioned VPC. You must use a user-provisioned VPC that satisfies one of the following requirements: Contains the mirror registry Has firewall rules or a peering connection to access the mirror registry hosted elsewhere You configured an AWS account to host the cluster. Important If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or Unix) in the AWS documentation. If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. Note If you are configuring a proxy, be sure to also review this site list. 6.2. About installations in restricted networks In OpenShift Container Platform 4.15, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. 6.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. 
By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 6.3. About using a custom VPC In OpenShift Container Platform 4.15, you can deploy a cluster into existing subnets in an existing Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS). By deploying OpenShift Container Platform into an existing AWS VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option. Because the installation program cannot know what other components are also in your existing subnets, it cannot choose subnet CIDRs and so forth on your behalf. You must configure networking for the subnets that you install your cluster to yourself. 6.3.1. Requirements for using your VPC The installation program no longer creates the following components: Internet gateways NAT gateways Subnets Route tables VPCs VPC DHCP options VPC endpoints Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. See Create a VPC in the Amazon Web Services documentation for more information about AWS VPC console wizard configurations and creating and managing an AWS VPC. The installation program cannot: Subdivide network ranges for the cluster to use. Set route tables for the subnets. Set VPC options like DHCP. You must complete these tasks before you install the cluster. See VPC networking components and Route tables for your VPC for more information on configuring networking in an AWS VPC. Your VPC must meet the following characteristics: The VPC must not use the kubernetes.io/cluster/.*: owned , Name , and openshift.io/cluster tags. The installation program modifies your subnets to add the kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for it. See Tag Restrictions in the AWS documentation to confirm that the installation program can add a tag to each subnet that you specify. You cannot use a Name tag, because it overlaps with the EC2 Name field and the installation fails. If you want to extend your OpenShift Container Platform cluster into an AWS Outpost and have an existing Outpost subnet, the existing subnet must use the kubernetes.io/cluster/unmanaged: true tag. If you do not apply this tag, the installation might fail due to the Cloud Controller Manager creating a service load balancer in the Outpost subnet, which is an unsupported configuration. You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC, so that the cluster can use the Route 53 zones that are attached to the VPC to resolve cluster's internal DNS records. See DNS Support in Your VPC in the AWS documentation. If you prefer to use your own Route 53 hosted private zone, you must associate the existing hosted zone with your VPC prior to installing a cluster. You can define your hosted zone using the platform.aws.hostedZone and platform.aws.hostedZoneRole fields in the install-config.yaml file. You can use a private hosted zone from another account by sharing it with the account where you install the cluster. 
If you use a private hosted zone from another account, you must use the Passthrough or Manual credentials mode. If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2, ELB, and S3 endpoints. Depending on the level to which you want to restrict internet traffic during the installation, the following configuration options are available: Option 1: Create VPC endpoints Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com With this option, network traffic remains private between your VPC and the required AWS services. Option 2: Create a proxy without VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy. With this option, internet traffic goes through the proxy to reach the required AWS services. Option 3: Create a proxy with VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy with VPC endpoints. Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com When configuring the proxy in the install-config.yaml file, add these endpoints to the noProxy field. With this option, the proxy prevents the cluster from accessing the internet directly. However, network traffic remains private between your VPC and the required AWS services. Required VPC components You must provide a suitable VPC and subnets that allow communication to your machines. Component AWS type Description VPC AWS::EC2::VPC AWS::EC2::VPCEndpoint You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3. Public subnets AWS::EC2::Subnet AWS::EC2::SubnetNetworkAclAssociation Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules. Internet gateway AWS::EC2::InternetGateway AWS::EC2::VPCGatewayAttachment AWS::EC2::RouteTable AWS::EC2::Route AWS::EC2::SubnetRouteTableAssociation AWS::EC2::NatGateway AWS::EC2::EIP You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios. Network access control AWS::EC2::NetworkAcl AWS::EC2::NetworkAclEntry You must allow the VPC to access the following ports: Port Reason 80 Inbound HTTP traffic 443 Inbound HTTPS traffic 22 Inbound SSH traffic 1024 - 65535 Inbound ephemeral traffic 0 - 65535 Outbound ephemeral traffic Private subnets AWS::EC2::Subnet AWS::EC2::RouteTable AWS::EC2::SubnetRouteTableAssociation Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them. 6.3.2. VPC validation To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the subnets that you specify exist. You provide private subnets. The subnet CIDRs belong to the machine CIDR that you specified. 
You provide subnets for each availability zone. Each availability zone contains no more than one public and one private subnet. If you use a private cluster, provide only a private subnet for each availability zone. Otherwise, provide exactly one public and private subnet for each availability zone. You provide a public subnet for each private subnet availability zone. Machines are not provisioned in availability zones that you do not provide private subnets for. If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the OpenShift Container Platform cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed from the subnets that it used. 6.3.3. Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resource in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components such as VPCs, subnets, or ingress rules. The AWS credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and nodes. 6.3.4. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: You can install multiple OpenShift Container Platform clusters in the same VPC. ICMP ingress is allowed from the entire network. TCP 22 ingress (SSH) is allowed to the entire network. Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 ingress (MCS) is allowed to the entire network. 6.4. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.15, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. 6.5. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. 
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 6.6. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Amazon Web Services (AWS). Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. You have the imageContentSources values that were generated during mirror registry creation. You have obtained the contents of the certificate for your mirror registry. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. 
Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select AWS as the platform to target. If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program. Select the AWS region to deploy the cluster to. Select the base domain for the Route 53 service that you configured for your cluster. Enter a descriptive name for your cluster. Edit the install-config.yaml file to give the additional information that is required for an installation in a restricted network. Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}' For <mirror_host_name> , specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value. additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry. Define the subnets for the VPC to install the cluster in: subnets: - subnet-1 - subnet-2 - subnet-3 Add the image content resources, which resemble the following YAML excerpt: imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release For these values, use the imageContentSources that you recorded during mirror registry creation. Optional: Set the publishing strategy to Internal : publish: Internal By setting this option, you create an internal Ingress Controller and a private load balancer. Make any other modifications to the install-config.yaml file that you require. For more information about the parameters, see "Installation configuration parameters". Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for AWS 6.6.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 6.1. 
Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 6.6.2. Sample customized install-config.yaml file for AWS You can customize the installation configuration file ( install-config.yaml ) to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-west-2a - us-west-2b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-west-2c replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-0c5d3e03c0ab9b19a 17 serviceEndpoints: 18 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 
21 pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 22 additionalTrustBundle: | 23 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- imageContentSources: 24 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 12 14 Required. The installation program prompts you for this value. 2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode. By default, the CCO uses the root credentials in the kube-system namespace to dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the "About the Cloud Credential Operator" section in the Authentication and authorization guide. 3 8 15 If you do not provide these parameters and values, the installation program provides the default value. 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 5 9 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge , for your machines if you disable simultaneous multithreading. 6 10 To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000 . 7 11 Whether to require the Amazon EC2 Instance Metadata Service v2 (IMDSv2). To require IMDSv2, set the parameter value to Required . To allow the use of both IMDSv1 and IMDSv2, set the parameter value to Optional . If no value is specified, both IMDSv1 and IMDSv2 are allowed. Note The IMDS configuration for control plane machines that is set during cluster installation can only be changed by using the AWS CLI. The IMDS configuration for compute machines can be changed by using compute machine sets. 13 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 16 If you provide your own VPC, specify subnets for each availability zone that your cluster uses. 17 The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster. 18 The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate. 19 The ID of your existing Route 53 private hosted zone. Providing an existing hosted zone requires that you supply your own VPC and the hosted zone is already associated with the VPC prior to installing your cluster. If undefined, the installation program creates a new hosted zone. 20 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. 
If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 21 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 22 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. 23 Provide the contents of the certificate file that you used for your mirror registry. 24 Provide the imageContentSources section from the output of the command to mirror the repository. 6.6.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 
3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. If you have added the Amazon EC2 , Elastic Load Balancing , and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 6.7. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.15. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. 
Click Download Now to the OpenShift v4.15 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.15 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 6.8. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring an AWS cluster to use short-term credentials . 6.8.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. 
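Optionally, you can check what the extract command produced before you continue. The following commands are only a convenience sketch, and the directory path is a placeholder for the value that you passed to the --to option: USD ls -1 <path_to_directory_for_credentials_requests>/*.yaml | wc -l USD grep -l "kind: CredentialsRequest" <path_to_directory_for_credentials_requests>/*.yaml The first command counts the generated manifests and the second command lists the files that contain a CredentialsRequest object. If no files are listed, re-run the extract command and verify the paths that you specified.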
Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: "*" ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: "*" ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 6.8.2. Configuring an AWS cluster to use short-term credentials To install a cluster that is configured to use the AWS Security Token Service (STS), you must configure the CCO utility and create the required AWS resources for your cluster. 6.8.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have created an AWS account for the ccoctl utility to use with the following permissions: Example 6.1. Required AWS permissions Required iam permissions iam:CreateOpenIDConnectProvider iam:CreateRole iam:DeleteOpenIDConnectProvider iam:DeleteRole iam:DeleteRolePolicy iam:GetOpenIDConnectProvider iam:GetRole iam:GetUser iam:ListOpenIDConnectProviders iam:ListRolePolicies iam:ListRoles iam:PutRolePolicy iam:TagOpenIDConnectProvider iam:TagRole Required s3 permissions s3:CreateBucket s3:DeleteBucket s3:DeleteObject s3:GetBucketAcl s3:GetBucketTagging s3:GetObject s3:GetObjectAcl s3:GetObjectTagging s3:ListBucket s3:PutBucketAcl s3:PutBucketPolicy s3:PutBucketPublicAccessBlock s3:PutBucketTagging s3:PutObject s3:PutObjectAcl s3:PutObjectTagging Required cloudfront permissions cloudfront:ListCloudFrontOriginAccessIdentities cloudfront:ListDistributions cloudfront:ListTagsForResource If you plan to store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL, the AWS account that runs the ccoctl utility requires the following additional permissions: Example 6.2. 
Additional permissions for a private S3 bucket with CloudFront cloudfront:CreateCloudFrontOriginAccessIdentity cloudfront:CreateDistribution cloudfront:DeleteCloudFrontOriginAccessIdentity cloudfront:DeleteDistribution cloudfront:GetCloudFrontOriginAccessIdentity cloudfront:GetCloudFrontOriginAccessIdentityConfig cloudfront:GetDistribution cloudfront:TagResource cloudfront:UpdateDistribution Note These additional permissions support the use of the --create-private-s3-bucket option when processing credentials requests with the ccoctl aws create-all command. Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 6.8.2.2. Creating AWS resources with the Cloud Credential Operator utility You have the following options when creating AWS resources: You can use the ccoctl aws create-all command to create the AWS resources automatically. This is the quickest way to create the resources. See Creating AWS resources with a single command . If you need to review the JSON files that the ccoctl tool creates before modifying AWS resources, or if the process the ccoctl tool uses to create AWS resources automatically does not meet the requirements of your organization, you can create the AWS resources individually. See Creating AWS resources individually . 6.8.2.2.1. Creating AWS resources with a single command If the process the ccoctl tool uses to create AWS resources automatically meets the requirements of your organization, you can use the ccoctl aws create-all command to automate the creation of AWS resources. Otherwise, you can create the AWS resources individually. For more information, see "Creating AWS resources individually". Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. 
Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl aws create-all \ --name=<name> \ 1 --region=<aws_region> \ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \ 3 --output-dir=<path_to_ccoctl_output_dir> \ 4 --create-private-s3-bucket 5 1 Specify the name used to tag any cloud resources that are created for tracking. 2 Specify the AWS region in which cloud resources will be created. 3 Specify the directory containing the files for the component CredentialsRequest objects. 4 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 5 Optional: By default, the ccoctl utility stores the OpenID Connect (OIDC) configuration files in a public S3 bucket and uses the S3 URL as the public OIDC endpoint. To store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL instead, use the --create-private-s3-bucket parameter. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles. 6.8.2.2.2. Creating AWS resources individually You can use the ccoctl tool to create AWS resources individually. This option might be useful for an organization that shares the responsibility for creating these resources among different users or departments. Otherwise, you can use the ccoctl aws create-all command to create the AWS resources automatically. For more information, see "Creating AWS resources with a single command". 
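At a high level, the individual workflow runs three ccoctl subcommands in sequence: create the signing key pair, create the OIDC identity provider, and create the IAM roles. The following outline is only a rough sketch of that order with placeholder values; the procedure later in this section explains each command and its parameters in detail: USD ccoctl aws create-key-pair USD ccoctl aws create-identity-provider --name=<name> --region=<aws_region> --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public USD ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_credentials_requests_directory> --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com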
Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Some ccoctl commands make AWS API calls to create or modify AWS resources. You can use the --dry-run flag to avoid making API calls. Using this flag creates JSON files on the local file system instead. You can review and modify the JSON files and then apply them with the AWS CLI tool using the --cli-input-json parameters. Prerequisites Extract and prepare the ccoctl binary. Procedure Generate the public and private RSA key files that are used to set up the OpenID Connect provider for the cluster by running the following command: USD ccoctl aws create-key-pair Example output 2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer where serviceaccount-signer.private and serviceaccount-signer.public are the generated key files. This command also creates a private key that the cluster requires during installation in /<path_to_ccoctl_output_dir>/tls/bound-service-account-signing-key.key . Create an OpenID Connect identity provider and S3 bucket on AWS by running the following command: USD ccoctl aws create-identity-provider \ --name=<name> \ 1 --region=<aws_region> \ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3 1 <name> is the name used to tag any cloud resources that are created for tracking. 2 <aws-region> is the AWS region in which cloud resources will be created. 3 <path_to_ccoctl_output_dir> is the path to the public key file that the ccoctl aws create-key-pair command generated. Example output 2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com where openid-configuration is a discovery document and keys.json is a JSON web key set file. This command also creates a YAML configuration file in /<path_to_ccoctl_output_dir>/manifests/cluster-authentication-02-config.yaml . This file sets the issuer URL field for the service account tokens that the cluster generates, so that the AWS IAM identity provider trusts the tokens. Create IAM roles for each component in the cluster: Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 
3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl aws create-iam-roles \ --name=<name> \ --region=<aws_region> \ --credentials-requests-dir=<path_to_credentials_requests_directory> \ --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com Note For AWS environments that use alternative IAM API endpoints, such as GovCloud, you must also specify your region with the --region parameter. If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. For each CredentialsRequest object, ccoctl creates an IAM role with a trust policy that is tied to the specified OIDC identity provider, and a permissions policy as defined in each CredentialsRequest object from the OpenShift Container Platform release image. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles. 6.8.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 6.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. 
Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster. Note The elevated permissions provided by the AdministratorAccess policy are required only during installation. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 6.10. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. 
Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 6.11. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 6.12. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.15, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 6.13. steps Validate an installation . Customize your cluster . Configure image streams for the Cluster Samples Operator and the must-gather tool. Learn how to use Operator Lifecycle Manager (OLM) on restricted networks . If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores . If necessary, you can opt out of remote health reporting . | [
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"./openshift-install create install-config --dir <installation_directory> 1",
"pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'",
"additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----",
"subnets: - subnet-1 - subnet-2 - subnet-3",
"imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release",
"publish: Internal",
"apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-west-2a - us-west-2b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-west-2c replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-0c5d3e03c0ab9b19a 17 serviceEndpoints: 18 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 22 additionalTrustBundle: | 23 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- imageContentSources: 24 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: \"*\"",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: \"*\" secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret",
"chmod 775 ccoctl",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl aws create-all --name=<name> \\ 1 --region=<aws_region> \\ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 3 --output-dir=<path_to_ccoctl_output_dir> \\ 4 --create-private-s3-bucket 5",
"ls <path_to_ccoctl_output_dir>/manifests",
"cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml",
"ccoctl aws create-key-pair",
"2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer",
"ccoctl aws create-identity-provider --name=<name> \\ 1 --region=<aws_region> \\ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3",
"2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_credentials_requests_directory> --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com",
"ls <path_to_ccoctl_output_dir>/manifests",
"cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/",
"cp -a /<path_to_ccoctl_output_dir>/tls .",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_aws/installing-restricted-networks-aws-installer-provisioned |
Chapter 8. Upgrading the Migration Toolkit for Containers | Chapter 8. Upgrading the Migration Toolkit for Containers You can upgrade the Migration Toolkit for Containers (MTC) on OpenShift Container Platform 4.13 by using Operator Lifecycle Manager. You can upgrade MTC on OpenShift Container Platform 3 by reinstalling the legacy Migration Toolkit for Containers Operator. Important If you are upgrading from MTC version 1.3, you must perform an additional procedure to update the MigPlan custom resource (CR). 8.1. Upgrading the Migration Toolkit for Containers on OpenShift Container Platform 4.13 You can upgrade the Migration Toolkit for Containers (MTC) on OpenShift Container Platform 4.13 by using the Operator Lifecycle Manager. Important When upgrading the MTC by using the Operator Lifecycle Manager, you must use a supported migration path. Migration paths Migrating from OpenShift Container Platform 3 to OpenShift Container Platform 4 requires a legacy MTC Operator and MTC 1.7.x. Migrating from MTC 1.7.x to MTC 1.8.x is not supported. You must use MTC 1.7.x to migrate anything with a source of OpenShift Container Platform 4.9 or earlier. MTC 1.7.x must be used on both source and destination. MTC 1.8.x only supports migrations from OpenShift Container Platform 4.10 or later to OpenShift Container Platform 4.10 or later. For migrations only involving cluster versions 4.10 and later, either 1.7.x or 1.8.x may be used. However, it must be the same MTC version on both source & destination. Migration from source MTC 1.7.x to destination MTC 1.8.x is unsupported. Migration from source MTC 1.8.x to destination MTC 1.7.x is unsupported. Migration from source MTC 1.7.x to destination MTC 1.7.x is supported. Migration from source MTC 1.8.x to destination MTC 1.8.x is supported Prerequisites You must be logged in as a user with cluster-admin privileges. Procedure In the OpenShift Container Platform console, navigate to Operators Installed Operators . Operators that have a pending upgrade display an Upgrade available status. Click Migration Toolkit for Containers Operator . Click the Subscription tab. Any upgrades requiring approval are displayed to Upgrade Status . For example, it might display 1 requires approval . Click 1 requires approval , then click Preview Install Plan . Review the resources that are listed as available for upgrade and click Approve . Navigate back to the Operators Installed Operators page to monitor the progress of the upgrade. When complete, the status changes to Succeeded and Up to date . Click Workloads Pods to verify that the MTC pods are running. 8.2. Upgrading the Migration Toolkit for Containers on OpenShift Container Platform 3 You can upgrade Migration Toolkit for Containers (MTC) on OpenShift Container Platform 3 by manually installing the legacy Migration Toolkit for Containers Operator. Prerequisites You must be logged in as a user with cluster-admin privileges. You must have access to registry.redhat.io . You must have podman installed. 
Procedure Log in to registry.redhat.io with your Red Hat Customer Portal credentials by entering the following command: USD podman login registry.redhat.io Download the operator.yml file by entering the following command: USD podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./ Replace the Migration Toolkit for Containers Operator by entering the following command: USD oc replace --force -f operator.yml Scale the migration-operator deployment to 0 to stop the deployment by entering the following command: USD oc scale -n openshift-migration --replicas=0 deployment/migration-operator Scale the migration-operator deployment to 1 to start the deployment and apply the changes by entering the following command: USD oc scale -n openshift-migration --replicas=1 deployment/migration-operator Verify that the migration-operator was upgraded by entering the following command: USD oc -o yaml -n openshift-migration get deployment/migration-operator | grep image: | awk -F ":" '{ print USDNF }' Download the controller.yml file by entering the following command: USD podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./ Create the migration-controller object by entering the following command: USD oc create -f controller.yml If you have previously added the OpenShift Container Platform 3 cluster to the MTC web console, you must update the service account token in the web console because the upgrade process deletes and restores the openshift-migration namespace: Obtain the service account token by entering the following command: USD oc sa get-token migration-controller -n openshift-migration In the MTC web console, click Clusters . Click the Options menu next to the cluster and select Edit . Enter the new service account token in the Service account token field. Click Update cluster and then click Close . Verify that the MTC pods are running by entering the following command: USD oc get pods -n openshift-migration 8.3. Upgrading MTC 1.3 to 1.8 If you are upgrading Migration Toolkit for Containers (MTC) version 1.3.x to 1.8, you must update the MigPlan custom resource (CR) manifest on the cluster on which the MigrationController pod is running. Because the indirectImageMigration and indirectVolumeMigration parameters do not exist in MTC 1.3, their default value in version 1.4 is false , which means that direct image migration and direct volume migration are enabled. If the direct migration requirements are not fulfilled, the migration plan cannot reach a Ready state unless these parameter values are changed to true . Important Migrating from OpenShift Container Platform 3 to OpenShift Container Platform 4 requires a legacy MTC Operator and MTC 1.7.x. Upgrading MTC 1.7.x to 1.8.x requires manually updating the OADP channel from stable-1.0 to stable-1.2 in order to complete the upgrade successfully. Prerequisites You must be logged in as a user with cluster-admin privileges. Procedure Log in to the cluster on which the MigrationController pod is running. Get the MigPlan CR manifest: USD oc get migplan <migplan> -o yaml -n openshift-migration Update the following parameter values and save the file as migplan.yaml : ...
spec: indirectImageMigration: true indirectVolumeMigration: true Replace the MigPlan CR manifest to apply the changes: USD oc replace -f migplan.yaml -n openshift-migration Get the updated MigPlan CR manifest to verify the changes: USD oc get migplan <migplan> -o yaml -n openshift-migration | [
"podman login registry.redhat.io",
"podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7:/operator.yml ./",
"oc replace --force -f operator.yml",
"oc scale -n openshift-migration --replicas=0 deployment/migration-operator",
"oc scale -n openshift-migration --replicas=1 deployment/migration-operator",
"oc -o yaml -n openshift-migration get deployment/migration-operator | grep image: | awk -F \":\" '{ print USDNF }'",
"podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./",
"oc create -f controller.yml",
"oc sa get-token migration-controller -n openshift-migration",
"oc get pods -n openshift-migration",
"oc get migplan <migplan> -o yaml -n openshift-migration",
"spec: indirectImageMigration: true indirectVolumeMigration: true",
"oc replace -f migplan.yaml -n openshift-migration",
"oc get migplan <migplan> -o yaml -n openshift-migration"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/migrating_from_version_3_to_4/upgrading-3-4 |
Chapter 20. Solving common replication problems | Chapter 20. Solving common replication problems Multi-supplier replication uses an eventually-consistency replication model. This means that the same entries can be changed on different servers. When replication occurs between these two servers, Directory Server needs to resolve the conflicting changes. Mostly, resolution occurs automatically, based on the timestamp associated with the change on each server. The most recent change has priority. However, there are some cases where conflicts require manual intervention in order to reach a resolution. 20.1. Identifying and solving naming conflicts When several supplier servers receive a request to create an entry with the same distinguished name (DN), each server creates the entry with this DN and a different entry unique identifier (entry ID). The entry ID is stored in the nsuniqueid operational attribute. For example, Server A and Server B receive a request to create uid= user_name ,ou=people,dc=example,dc=com user entry. As a result, each server has its own entry: On Server A, the entry has: uid= user_name ,ou=people,dc=example,dc=com nsuniqueid=a7f1758b-512211ec-b115e2e9-7dc2d46b On Server B, the entry has: uid= user_name ,ou=people,dc=example,dc=com nsuniqueid=643a461e-b61311e1-b23be826-4afeed5f During replication, Server A replicates newly created entry uid= user_name ,ou=people,dc=example,dc=com to Server B , and Server B replicates newly created entry to Server A , and a naming conflict occurs on each server. By comparing change sequence numbers (CSN), each server determines which entry was created earlier. For example, the entry on Server B was created earlier. The automatic conflict resolution procedure changes the last entry created (the entry on Server A ) the following way: Adds the nsuniqueid value to the non-unique DN. Adds the nsds5replconflict attribute with the description which operation caused the conflict. Adds the ldapsubentry objectclass. Now the following entries exist on both servers: The valid entry with: uid=user_name,ou=people,dc=example,dc=com nsuniqueid=643a461e-b61311e1-b23be826-4afeed5f The conflict entry with: nsuniqueid=a7f1758b-512211ec-b115e2e9-7dc2d46b+uid=user_name,ou=people,dc=example,dc=com nsuniqueid=a7f1758b-512211ec-b115e2e9-7dc2d46b To solve the naming conflict manually, use the following procedure on each server. Procedure List the conflict entries: If conflict entries exist, decide how to proceed: To keep only the valid entry ( uid=user_name,ou=people,dc=example,dc=com ) and delete the conflict entry, enter: To keep only the conflict entry ( nsuniqueid=a7f1758b-512211ec-b115e2e9-7dc2d46b+uid=user_name,ou=People,dc=example,dc=com ) and delete the valid entry, enter: To keep both entries, specify a new relative distinguished name (RDN) to rename the conflict entry: This command renames the conflict entry to uid=user_name_NEW,ou=people,dc=example,dc=com . Warning Directory Server replicates LDAP operations performed on a conflict entry. Usually replicated operations target the entry by using the nsuniqueid of the original operation entry rather than by using the operation dn . However, in cases with conflict entries, the behavior might differ. 20.2. Identifying and solving orphan entry conflicts When Directory Server replicates a delete operation and the consumer server finds that the entry to be deleted has child entries, the conflict resolution procedure creates a glue entry to avoid having orphaned entries in the directory. 
In the same way, when Directory Server replicates an add operation and the consumer server cannot find the parent entry, the conflict resolution procedure creates a glue entry for the parent. Glue entries are temporary entries that include the object classes glue and extensibleObject . Glue entries can be created in several ways: If the conflict resolution procedure finds a deleted entry with a matching unique identifier, the glue entry has the same attributes as the deleted entry, but with the added glue object class and the nsds5ReplConflict attribute. In such cases, either modify the glue entry to remove the glue object class and the nsds5ReplConflict attribute to keep the entry as a normal entry or delete the glue entry and its child entries. The server creates an entry with the glue and extensibleObject object classes. Procedure List the orphan entry conflicts: # dsconf -D " cn=Directory Manager " ldap://server.example.com repl-conflict list-glue suffix dn: ou=parent,dc=example,dc=com objectClass: top objectClass: organizationalunit objectClass: glue objectClass: extensibleobject ou: parent If orphan entry conflicts exist, decide how to proceed: To delete a glue entry and its child entries, enter: # dsconf -D " cn=Directory Manager " ldap://server.example.com repl-conflict delete-glue " ou=parent,dc=example,dc=com " dn: ou=parent,dc=example,dc=com objectClass: top objectClass: organizationalunit objectClass: extensibleobject ou: parent To convert a glue entry into a regular entry, enter: # dsconf -D " cn=Directory Manager " ldap://server.example.com repl-conflict convert-glue " ou=parent,dc=example,dc=com " 20.3. Identifying and solving errors about obsolete or missing suppliers Directory Server stores information about the replication topology, such as all suppliers that send updates to other replicas, in a set of metadata called replica update vector (RUV). An RUV contains information about the supplier, such as its ID and URL, the last change state number (CSN) on the local server, and the CSN of the first change. Both suppliers and consumers store RUV information, and they use it to control replication updates. When you remove a supplier from the replication topology, information about it can remain in another replica's RUV. You can use a cleanallruv task to remove the RUV entry form all suppliers in the topology. Prerequisites Replication is enabled on. Procedure Monitor the /var/log/dirsrv/slapd- instance_name /errors log file and search for entries similar to the following: [22/Jan/2021:17:16:01 -0500] NSMMReplicationPlugin - ruv_compare_ruv: RUV [changelog max RUV] does not contain element [{replica 8 ldap://server2.example.com:389} 4aac3e59000000080000 4c6f2a02000000080000] which is present in RUV [database RUV] ... [22/Jan/2021:17:16:01 -0500] NSMMReplicationPlugin - replica_check_for_data_reload: Warning: for replica dc=example,dc=com there were some differences between the changelog max RUV and the database RUV. If there are obsolete elements in the database RUV, you should remove them using the CLEANALLRUV task. If they are not obsolete, you should check their status to see why there are no changes from those servers in the changelog. In this case, the replica ID 8 causes this error. 
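To find these messages without reading the entire log, you can filter for the relevant keywords. The following command is only a convenience sketch; replace instance_name with the name of your Directory Server instance: # grep -E "changelog max RUV|CLEANALLRUV" /var/log/dirsrv/slapd-instance_name/errors The replica ID shown in the matching lines, 8 in this example, is the one to clean up in the next steps.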
Display all RUV records and replica IDs, both valid and invalid: # dsconf -D " cn=Directory Manager " ldap://server1.example.com replication get-ruv --suffix " dc=example,dc=com " RUV: {replica 1 ldap://server1.example.com} 61a4d8f8000100010000 61a4f5b8000000010000 Replica ID: 1 LDAP URL: ldap://server1.example.com Min CSN: 2021-11-29 13:43:20 1 0 (61a4d8f8000100010000) Max CSN: 2021-11-29 15:46:00 (61a4f5b8000000010000) RUV: {replica 2 ldap://server2.example.com} 61a4d8fb000100020000 61a4f550000000020000 Replica ID: 2 LDAP URL: ldap://server2.example.com Min CSN: 2021-11-29 13:43:23 1 0 (61a4d8fb000100020000) Max CSN: 2021-11-29 15:44:16 (61a4f550000000020000) RUV: {replica 8 ldap://server3.example.com} 61a4d903000100080000 61a4d908000000080000 Replica ID: 8 LDAP URL: ldap://server3.example.com Min CSN: 2021-11-29 13:43:31 1 0 (61a4d903000100080000) Max CSN: 2021-11-29 13:43:36 (61a4d908000000080000) Note the list of returned replica IDs: 1 , 2 , and 8 . Run a cleanup task for replica ID 8 : # dsconf -D " cn=Directory Manager " ldap://server1.example.com repl-tasks cleanallruv --suffix=" dc=example,dc=com " --replica-id= 8 Note that Directory Server replicates RUV cleanup tasks. Therefore, you need to start the task on only one supplier. If one of the replicas cannot be joined, for example if it is down, you can use the --force-cleaning option to perform an immediate cleanup of the RUV. Verification Display the RUV records and replica IDs: # dsconf -D " cn=Directory Manager " ldap://server1.example.com replication get-ruv --suffix " dc=example,dc=com " RUV: {replica 1 ldap://server1.example.com} 61a4d8f8000100010000 61a4f5b8000000010000 Replica ID: 1 LDAP URL: ldap://server1.example.com Min CSN: 2021-11-29 14:02:10 1 0 (61a4d8f8000100010000) Max CSN: 2021-11-29 16:00:00 (61a4f5b8000000010000) RUV: {replica 2 ldap://server2.example.com} 61a4d8fb000100020000 61a4f550000000020000 Replica ID: 2 LDAP URL: ldap://server2.example.com Min CSN: 2021-11-29 14:02:10 1 0 (61a4d8fb000100020000) Max CSN: 2021-11-29 15:58:22 (61a4f550000000020000) The command no longer returns an RUV entry for replica ID 8 . | [
"dsconf -D \" cn=Directory Manager \" ldap://server.example.com repl-conflict list dc=example,dc=com dn: nsuniqueid=a7f1758b-512211ec-b115e2e9-7dc2d46b+uid=user_name,ou=people,dc=example,dc=com cn: user_name displayName: user gidNumber: 99998 homeDirectory: /var/empty legalName: user name loginShell: /bin/false nsds5replconflict: namingConflict (ADD) uid=user_name,ou=people,dc=example,dc=com objectClass: top objectClass: nsPerson objectClass: nsAccount objectClass: nsOrgPerson objectClass: posixAccount objectClass: ldapsubentry uid: user_name uidNumber: 99998",
"dsconf -D \" cn=Directory Manager \" ldap://server.example.com repl-conflict delete nsuniqueid=a7f1758b-512211ec-b115e2e9-7dc2d46b+uid=user_name,ou=People,dc=example,dc=com",
"dsconf -D \" cn=Directory Manager \" ldap:// server.example.com repl-conflict swap nsuniqueid=a7f1758b-512211ec-b115e2e9-7dc2d46b+uid=user_name,ou=People,dc=example,dc=com",
"dsconf -D \" cn=Directory Manager \" ldap://server.example.com repl-conflict convert --new-rdn= uid=user_name_NEW nsuniqueid=a7f1758b-512211ec-b115e2e9-7dc2d46b+uid=user_name,ou=people,dc=example,dc=com",
"dsconf -D \" cn=Directory Manager \" ldap://server.example.com repl-conflict list-glue suffix dn: ou=parent,dc=example,dc=com objectClass: top objectClass: organizationalunit objectClass: glue objectClass: extensibleobject ou: parent",
"dsconf -D \" cn=Directory Manager \" ldap://server.example.com repl-conflict delete-glue \" ou=parent,dc=example,dc=com \" dn: ou=parent,dc=example,dc=com objectClass: top objectClass: organizationalunit objectClass: extensibleobject ou: parent",
"dsconf -D \" cn=Directory Manager \" ldap://server.example.com repl-conflict convert-glue \" ou=parent,dc=example,dc=com \"",
"[22/Jan/2021:17:16:01 -0500] NSMMReplicationPlugin - ruv_compare_ruv: RUV [changelog max RUV] does not contain element [{replica 8 ldap://server2.example.com:389} 4aac3e59000000080000 4c6f2a02000000080000] which is present in RUV [database RUV] [22/Jan/2021:17:16:01 -0500] NSMMReplicationPlugin - replica_check_for_data_reload: Warning: for replica dc=example,dc=com there were some differences between the changelog max RUV and the database RUV. If there are obsolete elements in the database RUV, you should remove them using the CLEANALLRUV task. If they are not obsolete, you should check their status to see why there are no changes from those servers in the changelog.",
"dsconf -D \" cn=Directory Manager \" ldap://server1.example.com replication get-ruv --suffix \" dc=example,dc=com \" RUV: {replica 1 ldap://server1.example.com} 61a4d8f8000100010000 61a4f5b8000000010000 Replica ID: 1 LDAP URL: ldap://server1.example.com Min CSN: 2021-11-29 13:43:20 1 0 (61a4d8f8000100010000) Max CSN: 2021-11-29 15:46:00 (61a4f5b8000000010000) RUV: {replica 2 ldap://server2.example.com} 61a4d8fb000100020000 61a4f550000000020000 Replica ID: 2 LDAP URL: ldap://server2.example.com Min CSN: 2021-11-29 13:43:23 1 0 (61a4d8fb000100020000) Max CSN: 2021-11-29 15:44:16 (61a4f550000000020000) RUV: {replica 8 ldap://server3.example.com} 61a4d903000100080000 61a4d908000000080000 Replica ID: 8 LDAP URL: ldap://server3.example.com Min CSN: 2021-11-29 13:43:31 1 0 (61a4d903000100080000) Max CSN: 2021-11-29 13:43:36 (61a4d908000000080000)",
"dsconf -D \" cn=Directory Manager \" ldap://server1.example.com repl-tasks cleanallruv --suffix=\" dc=example,dc=com \" --replica-id= 8",
"dsconf -D \" cn=Directory Manager \" ldap://server1.example.com replication get-ruv --suffix \" dc=example,dc=com \" RUV: {replica 1 ldap://server1.example.com} 61a4d8f8000100010000 61a4f5b8000000010000 Replica ID: 1 LDAP URL: ldap://server1.example.com Min CSN: 2021-11-29 14:02:10 1 0 (61a4d8f8000100010000) Max CSN: 2021-11-29 16:00:00 (61a4f5b8000000010000) RUV: {replica 2 ldap://server2.example.com} 61a4d8fb000100020000 61a4f550000000020000 Replica ID: 2 LDAP URL: ldap://server2.example.com Min CSN: 2021-11-29 14:02:10 1 0 (61a4d8fb000100020000) Max CSN: 2021-11-29 15:58:22 (61a4f550000000020000)"
] | https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/configuring_and_managing_replication/assembly_solving-common-replication-problems_configuring-and-managing-replication |
Part III. Appendices | Part III. Appendices This part describes common problems and solutions for virtualization issues, provides instructions on how to use KVM virtualization on multiple architectures, and explains how to work with IOMMU Groups. This part also includes information on additional support and product restrictions of the virtualization packages. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/part-appendices
Chapter 19. OAuth [config.openshift.io/v1] | Chapter 19. OAuth [config.openshift.io/v1] Description OAuth holds cluster-wide information about OAuth. The canonical name is cluster . It is used to configure the integrated OAuth server. This configuration is only honored when the top level Authentication config has type set to IntegratedOAuth. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 19.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object status holds observed values from the cluster. They may not be overridden. 19.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description identityProviders array identityProviders is an ordered list of ways for a user to identify themselves. When this list is empty, no identities are provisioned for users. identityProviders[] object IdentityProvider provides identities for users authenticating using credentials templates object templates allow you to customize pages like the login page. tokenConfig object tokenConfig contains options for authorization and access tokens 19.1.2. .spec.identityProviders Description identityProviders is an ordered list of ways for a user to identify themselves. When this list is empty, no identities are provisioned for users. Type array 19.1.3. .spec.identityProviders[] Description IdentityProvider provides identities for users authenticating using credentials Type object Property Type Description basicAuth object basicAuth contains configuration options for the BasicAuth IdP github object github enables user authentication using GitHub credentials gitlab object gitlab enables user authentication using GitLab credentials google object google enables user authentication using Google credentials htpasswd object htpasswd enables user authentication using an HTPasswd file to validate credentials keystone object keystone enables user authentication using keystone password credentials ldap object ldap enables user authentication using LDAP credentials mappingMethod string mappingMethod determines how identities from this provider are mapped to users Defaults to "claim" name string name is used to qualify the identities returned by this provider. - It MUST be unique and not shared by any other identity provider used - It MUST be a valid path segment: name cannot equal "." or ".." 
or contain "/" or "%" or ":" Ref: https://godoc.org/github.com/openshift/origin/pkg/user/apis/user/validation#ValidateIdentityProviderName openID object openID enables user authentication using OpenID credentials requestHeader object requestHeader enables user authentication using request header credentials type string type identifies the identity provider type for this entry. 19.1.4. .spec.identityProviders[].basicAuth Description basicAuth contains configuration options for the BasicAuth IdP Type object Property Type Description ca object ca is an optional reference to a config map by name containing the PEM-encoded CA bundle. It is used as a trust anchor to validate the TLS certificate presented by the remote server. The key "ca.crt" is used to locate the data. If specified and the config map or expected key is not found, the identity provider is not honored. If the specified ca data is not valid, the identity provider is not honored. If empty, the default system roots are used. The namespace for this config map is openshift-config. tlsClientCert object tlsClientCert is an optional reference to a secret by name that contains the PEM-encoded TLS client certificate to present when connecting to the server. The key "tls.crt" is used to locate the data. If specified and the secret or expected key is not found, the identity provider is not honored. If the specified certificate data is not valid, the identity provider is not honored. The namespace for this secret is openshift-config. tlsClientKey object tlsClientKey is an optional reference to a secret by name that contains the PEM-encoded TLS private key for the client certificate referenced in tlsClientCert. The key "tls.key" is used to locate the data. If specified and the secret or expected key is not found, the identity provider is not honored. If the specified certificate data is not valid, the identity provider is not honored. The namespace for this secret is openshift-config. url string url is the remote URL to connect to 19.1.5. .spec.identityProviders[].basicAuth.ca Description ca is an optional reference to a config map by name containing the PEM-encoded CA bundle. It is used as a trust anchor to validate the TLS certificate presented by the remote server. The key "ca.crt" is used to locate the data. If specified and the config map or expected key is not found, the identity provider is not honored. If the specified ca data is not valid, the identity provider is not honored. If empty, the default system roots are used. The namespace for this config map is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 19.1.6. .spec.identityProviders[].basicAuth.tlsClientCert Description tlsClientCert is an optional reference to a secret by name that contains the PEM-encoded TLS client certificate to present when connecting to the server. The key "tls.crt" is used to locate the data. If specified and the secret or expected key is not found, the identity provider is not honored. If the specified certificate data is not valid, the identity provider is not honored. The namespace for this secret is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 19.1.7. .spec.identityProviders[].basicAuth.tlsClientKey Description tlsClientKey is an optional reference to a secret by name that contains the PEM-encoded TLS private key for the client certificate referenced in tlsClientCert. 
The key "tls.key" is used to locate the data. If specified and the secret or expected key is not found, the identity provider is not honored. If the specified certificate data is not valid, the identity provider is not honored. The namespace for this secret is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 19.1.8. .spec.identityProviders[].github Description github enables user authentication using GitHub credentials Type object Property Type Description ca object ca is an optional reference to a config map by name containing the PEM-encoded CA bundle. It is used as a trust anchor to validate the TLS certificate presented by the remote server. The key "ca.crt" is used to locate the data. If specified and the config map or expected key is not found, the identity provider is not honored. If the specified ca data is not valid, the identity provider is not honored. If empty, the default system roots are used. This can only be configured when hostname is set to a non-empty value. The namespace for this config map is openshift-config. clientID string clientID is the oauth client ID clientSecret object clientSecret is a required reference to the secret by name containing the oauth client secret. The key "clientSecret" is used to locate the data. If the secret or expected key is not found, the identity provider is not honored. The namespace for this secret is openshift-config. hostname string hostname is the optional domain (e.g. "mycompany.com") for use with a hosted instance of GitHub Enterprise. It must match the GitHub Enterprise settings value configured at /setup/settings#hostname. organizations array (string) organizations optionally restricts which organizations are allowed to log in teams array (string) teams optionally restricts which teams are allowed to log in. Format is <org>/<team>. 19.1.9. .spec.identityProviders[].github.ca Description ca is an optional reference to a config map by name containing the PEM-encoded CA bundle. It is used as a trust anchor to validate the TLS certificate presented by the remote server. The key "ca.crt" is used to locate the data. If specified and the config map or expected key is not found, the identity provider is not honored. If the specified ca data is not valid, the identity provider is not honored. If empty, the default system roots are used. This can only be configured when hostname is set to a non-empty value. The namespace for this config map is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 19.1.10. .spec.identityProviders[].github.clientSecret Description clientSecret is a required reference to the secret by name containing the oauth client secret. The key "clientSecret" is used to locate the data. If the secret or expected key is not found, the identity provider is not honored. The namespace for this secret is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 19.1.11. .spec.identityProviders[].gitlab Description gitlab enables user authentication using GitLab credentials Type object Property Type Description ca object ca is an optional reference to a config map by name containing the PEM-encoded CA bundle. It is used as a trust anchor to validate the TLS certificate presented by the remote server. The key "ca.crt" is used to locate the data. 
If specified and the config map or expected key is not found, the identity provider is not honored. If the specified ca data is not valid, the identity provider is not honored. If empty, the default system roots are used. The namespace for this config map is openshift-config. clientID string clientID is the oauth client ID clientSecret object clientSecret is a required reference to the secret by name containing the oauth client secret. The key "clientSecret" is used to locate the data. If the secret or expected key is not found, the identity provider is not honored. The namespace for this secret is openshift-config. url string url is the oauth server base URL 19.1.12. .spec.identityProviders[].gitlab.ca Description ca is an optional reference to a config map by name containing the PEM-encoded CA bundle. It is used as a trust anchor to validate the TLS certificate presented by the remote server. The key "ca.crt" is used to locate the data. If specified and the config map or expected key is not found, the identity provider is not honored. If the specified ca data is not valid, the identity provider is not honored. If empty, the default system roots are used. The namespace for this config map is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 19.1.13. .spec.identityProviders[].gitlab.clientSecret Description clientSecret is a required reference to the secret by name containing the oauth client secret. The key "clientSecret" is used to locate the data. If the secret or expected key is not found, the identity provider is not honored. The namespace for this secret is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 19.1.14. .spec.identityProviders[].google Description google enables user authentication using Google credentials Type object Property Type Description clientID string clientID is the oauth client ID clientSecret object clientSecret is a required reference to the secret by name containing the oauth client secret. The key "clientSecret" is used to locate the data. If the secret or expected key is not found, the identity provider is not honored. The namespace for this secret is openshift-config. hostedDomain string hostedDomain is the optional Google App domain (e.g. "mycompany.com") to restrict logins to 19.1.15. .spec.identityProviders[].google.clientSecret Description clientSecret is a required reference to the secret by name containing the oauth client secret. The key "clientSecret" is used to locate the data. If the secret or expected key is not found, the identity provider is not honored. The namespace for this secret is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 19.1.16. .spec.identityProviders[].htpasswd Description htpasswd enables user authentication using an HTPasswd file to validate credentials Type object Property Type Description fileData object fileData is a required reference to a secret by name containing the data to use as the htpasswd file. The key "htpasswd" is used to locate the data. If the secret or expected key is not found, the identity provider is not honored. If the specified htpasswd data is not valid, the identity provider is not honored. The namespace for this secret is openshift-config. 19.1.17. 
.spec.identityProviders[].htpasswd.fileData Description fileData is a required reference to a secret by name containing the data to use as the htpasswd file. The key "htpasswd" is used to locate the data. If the secret or expected key is not found, the identity provider is not honored. If the specified htpasswd data is not valid, the identity provider is not honored. The namespace for this secret is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 19.1.18. .spec.identityProviders[].keystone Description keystone enables user authentication using keystone password credentials Type object Property Type Description ca object ca is an optional reference to a config map by name containing the PEM-encoded CA bundle. It is used as a trust anchor to validate the TLS certificate presented by the remote server. The key "ca.crt" is used to locate the data. If specified and the config map or expected key is not found, the identity provider is not honored. If the specified ca data is not valid, the identity provider is not honored. If empty, the default system roots are used. The namespace for this config map is openshift-config. domainName string domainName is required for keystone v3 tlsClientCert object tlsClientCert is an optional reference to a secret by name that contains the PEM-encoded TLS client certificate to present when connecting to the server. The key "tls.crt" is used to locate the data. If specified and the secret or expected key is not found, the identity provider is not honored. If the specified certificate data is not valid, the identity provider is not honored. The namespace for this secret is openshift-config. tlsClientKey object tlsClientKey is an optional reference to a secret by name that contains the PEM-encoded TLS private key for the client certificate referenced in tlsClientCert. The key "tls.key" is used to locate the data. If specified and the secret or expected key is not found, the identity provider is not honored. If the specified certificate data is not valid, the identity provider is not honored. The namespace for this secret is openshift-config. url string url is the remote URL to connect to 19.1.19. .spec.identityProviders[].keystone.ca Description ca is an optional reference to a config map by name containing the PEM-encoded CA bundle. It is used as a trust anchor to validate the TLS certificate presented by the remote server. The key "ca.crt" is used to locate the data. If specified and the config map or expected key is not found, the identity provider is not honored. If the specified ca data is not valid, the identity provider is not honored. If empty, the default system roots are used. The namespace for this config map is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 19.1.20. .spec.identityProviders[].keystone.tlsClientCert Description tlsClientCert is an optional reference to a secret by name that contains the PEM-encoded TLS client certificate to present when connecting to the server. The key "tls.crt" is used to locate the data. If specified and the secret or expected key is not found, the identity provider is not honored. If the specified certificate data is not valid, the identity provider is not honored. The namespace for this secret is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 19.1.21. 
.spec.identityProviders[].keystone.tlsClientKey Description tlsClientKey is an optional reference to a secret by name that contains the PEM-encoded TLS private key for the client certificate referenced in tlsClientCert. The key "tls.key" is used to locate the data. If specified and the secret or expected key is not found, the identity provider is not honored. If the specified certificate data is not valid, the identity provider is not honored. The namespace for this secret is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 19.1.22. .spec.identityProviders[].ldap Description ldap enables user authentication using LDAP credentials Type object Property Type Description attributes object attributes maps LDAP attributes to identities bindDN string bindDN is an optional DN to bind with during the search phase. bindPassword object bindPassword is an optional reference to a secret by name containing a password to bind with during the search phase. The key "bindPassword" is used to locate the data. If specified and the secret or expected key is not found, the identity provider is not honored. The namespace for this secret is openshift-config. ca object ca is an optional reference to a config map by name containing the PEM-encoded CA bundle. It is used as a trust anchor to validate the TLS certificate presented by the remote server. The key "ca.crt" is used to locate the data. If specified and the config map or expected key is not found, the identity provider is not honored. If the specified ca data is not valid, the identity provider is not honored. If empty, the default system roots are used. The namespace for this config map is openshift-config. insecure boolean insecure, if true, indicates the connection should not use TLS WARNING: Should not be set to true with the URL scheme "ldaps://" as "ldaps://" URLs always attempt to connect using TLS, even when insecure is set to true When true , "ldap://" URLS connect insecurely. When false , "ldap://" URLs are upgraded to a TLS connection using StartTLS as specified in https://tools.ietf.org/html/rfc2830 . url string url is an RFC 2255 URL which specifies the LDAP search parameters to use. The syntax of the URL is: ldap://host:port/basedn?attribute?scope?filter 19.1.23. .spec.identityProviders[].ldap.attributes Description attributes maps LDAP attributes to identities Type object Property Type Description email array (string) email is the list of attributes whose values should be used as the email address. Optional. If unspecified, no email is set for the identity id array (string) id is the list of attributes whose values should be used as the user ID. Required. First non-empty attribute is used. At least one attribute is required. If none of the listed attribute have a value, authentication fails. LDAP standard identity attribute is "dn" name array (string) name is the list of attributes whose values should be used as the display name. Optional. If unspecified, no display name is set for the identity LDAP standard display name attribute is "cn" preferredUsername array (string) preferredUsername is the list of attributes whose values should be used as the preferred username. LDAP standard login attribute is "uid" 19.1.24. .spec.identityProviders[].ldap.bindPassword Description bindPassword is an optional reference to a secret by name containing a password to bind with during the search phase. The key "bindPassword" is used to locate the data. 
If specified and the secret or expected key is not found, the identity provider is not honored. The namespace for this secret is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 19.1.25. .spec.identityProviders[].ldap.ca Description ca is an optional reference to a config map by name containing the PEM-encoded CA bundle. It is used as a trust anchor to validate the TLS certificate presented by the remote server. The key "ca.crt" is used to locate the data. If specified and the config map or expected key is not found, the identity provider is not honored. If the specified ca data is not valid, the identity provider is not honored. If empty, the default system roots are used. The namespace for this config map is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 19.1.26. .spec.identityProviders[].openID Description openID enables user authentication using OpenID credentials Type object Property Type Description ca object ca is an optional reference to a config map by name containing the PEM-encoded CA bundle. It is used as a trust anchor to validate the TLS certificate presented by the remote server. The key "ca.crt" is used to locate the data. If specified and the config map or expected key is not found, the identity provider is not honored. If the specified ca data is not valid, the identity provider is not honored. If empty, the default system roots are used. The namespace for this config map is openshift-config. claims object claims mappings clientID string clientID is the oauth client ID clientSecret object clientSecret is a required reference to the secret by name containing the oauth client secret. The key "clientSecret" is used to locate the data. If the secret or expected key is not found, the identity provider is not honored. The namespace for this secret is openshift-config. extraAuthorizeParameters object (string) extraAuthorizeParameters are any custom parameters to add to the authorize request. extraScopes array (string) extraScopes are any scopes to request in addition to the standard "openid" scope. issuer string issuer is the URL that the OpenID Provider asserts as its Issuer Identifier. It must use the https scheme with no query or fragment component. 19.1.27. .spec.identityProviders[].openID.ca Description ca is an optional reference to a config map by name containing the PEM-encoded CA bundle. It is used as a trust anchor to validate the TLS certificate presented by the remote server. The key "ca.crt" is used to locate the data. If specified and the config map or expected key is not found, the identity provider is not honored. If the specified ca data is not valid, the identity provider is not honored. If empty, the default system roots are used. The namespace for this config map is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 19.1.28. .spec.identityProviders[].openID.claims Description claims mappings Type object Property Type Description email array (string) email is the list of claims whose values should be used as the email address. Optional. If unspecified, no email is set for the identity groups array (string) groups is the list of claims value of which should be used to synchronize groups from the OIDC provider to OpenShift for the user. 
If multiple claims are specified, the first one with a non-empty value is used. name array (string) name is the list of claims whose values should be used as the display name. Optional. If unspecified, no display name is set for the identity preferredUsername array (string) preferredUsername is the list of claims whose values should be used as the preferred username. If unspecified, the preferred username is determined from the value of the sub claim 19.1.29. .spec.identityProviders[].openID.clientSecret Description clientSecret is a required reference to the secret by name containing the oauth client secret. The key "clientSecret" is used to locate the data. If the secret or expected key is not found, the identity provider is not honored. The namespace for this secret is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 19.1.30. .spec.identityProviders[].requestHeader Description requestHeader enables user authentication using request header credentials Type object Property Type Description ca object ca is a required reference to a config map by name containing the PEM-encoded CA bundle. It is used as a trust anchor to validate the TLS certificate presented by the remote server. Specifically, it allows verification of incoming requests to prevent header spoofing. The key "ca.crt" is used to locate the data. If the config map or expected key is not found, the identity provider is not honored. If the specified ca data is not valid, the identity provider is not honored. The namespace for this config map is openshift-config. challengeURL string challengeURL is a URL to redirect unauthenticated /authorize requests to Unauthenticated requests from OAuth clients which expect WWW-Authenticate challenges will be redirected here. USD{url} is replaced with the current URL, escaped to be safe in a query parameter https://www.example.com/sso-login?then=USD{url} USD{query} is replaced with the current query string https://www.example.com/auth-proxy/oauth/authorize?USD{query} Required when challenge is set to true. clientCommonNames array (string) clientCommonNames is an optional list of common names to require a match from. If empty, any client certificate validated against the clientCA bundle is considered authoritative. emailHeaders array (string) emailHeaders is the set of headers to check for the email address headers array (string) headers is the set of headers to check for identity information loginURL string loginURL is a URL to redirect unauthenticated /authorize requests to Unauthenticated requests from OAuth clients which expect interactive logins will be redirected here USD{url} is replaced with the current URL, escaped to be safe in a query parameter https://www.example.com/sso-login?then=USD{url} USD{query} is replaced with the current query string https://www.example.com/auth-proxy/oauth/authorize?USD{query} Required when login is set to true. nameHeaders array (string) nameHeaders is the set of headers to check for the display name preferredUsernameHeaders array (string) preferredUsernameHeaders is the set of headers to check for the preferred username 19.1.31. .spec.identityProviders[].requestHeader.ca Description ca is a required reference to a config map by name containing the PEM-encoded CA bundle. It is used as a trust anchor to validate the TLS certificate presented by the remote server. Specifically, it allows verification of incoming requests to prevent header spoofing. 
The key "ca.crt" is used to locate the data. If the config map or expected key is not found, the identity provider is not honored. If the specified ca data is not valid, the identity provider is not honored. The namespace for this config map is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 19.1.32. .spec.templates Description templates allow you to customize pages like the login page. Type object Property Type Description error object error is the name of a secret that specifies a go template to use to render error pages during the authentication or grant flow. The key "errors.html" is used to locate the template data. If specified and the secret or expected key is not found, the default error page is used. If the specified template is not valid, the default error page is used. If unspecified, the default error page is used. The namespace for this secret is openshift-config. login object login is the name of a secret that specifies a go template to use to render the login page. The key "login.html" is used to locate the template data. If specified and the secret or expected key is not found, the default login page is used. If the specified template is not valid, the default login page is used. If unspecified, the default login page is used. The namespace for this secret is openshift-config. providerSelection object providerSelection is the name of a secret that specifies a go template to use to render the provider selection page. The key "providers.html" is used to locate the template data. If specified and the secret or expected key is not found, the default provider selection page is used. If the specified template is not valid, the default provider selection page is used. If unspecified, the default provider selection page is used. The namespace for this secret is openshift-config. 19.1.33. .spec.templates.error Description error is the name of a secret that specifies a go template to use to render error pages during the authentication or grant flow. The key "errors.html" is used to locate the template data. If specified and the secret or expected key is not found, the default error page is used. If the specified template is not valid, the default error page is used. If unspecified, the default error page is used. The namespace for this secret is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 19.1.34. .spec.templates.login Description login is the name of a secret that specifies a go template to use to render the login page. The key "login.html" is used to locate the template data. If specified and the secret or expected key is not found, the default login page is used. If the specified template is not valid, the default login page is used. If unspecified, the default login page is used. The namespace for this secret is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 19.1.35. .spec.templates.providerSelection Description providerSelection is the name of a secret that specifies a go template to use to render the provider selection page. The key "providers.html" is used to locate the template data. If specified and the secret or expected key is not found, the default provider selection page is used. If the specified template is not valid, the default provider selection page is used. 
If unspecified, the default provider selection page is used. The namespace for this secret is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 19.1.36. .spec.tokenConfig Description tokenConfig contains options for authorization and access tokens Type object Property Type Description accessTokenInactivityTimeout string accessTokenInactivityTimeout defines the token inactivity timeout for tokens granted by any client. The value represents the maximum amount of time that can occur between consecutive uses of the token. Tokens become invalid if they are not used within this temporal window. The user will need to acquire a new token to regain access once a token times out. Takes valid time duration string such as "5m", "1.5h" or "2h45m". The minimum allowed value for duration is 300s (5 minutes). If the timeout is configured per client, then that value takes precedence. If the timeout value is not specified and the client does not override the value, then tokens are valid until their lifetime. WARNING: existing tokens' timeout will not be affected (lowered) by changing this value accessTokenInactivityTimeoutSeconds integer accessTokenInactivityTimeoutSeconds - DEPRECATED: setting this field has no effect. accessTokenMaxAgeSeconds integer accessTokenMaxAgeSeconds defines the maximum age of access tokens 19.1.37. .status Description status holds observed values from the cluster. They may not be overridden. Type object 19.2. API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/oauths DELETE : delete collection of OAuth GET : list objects of kind OAuth POST : create an OAuth /apis/config.openshift.io/v1/oauths/{name} DELETE : delete an OAuth GET : read the specified OAuth PATCH : partially update the specified OAuth PUT : replace the specified OAuth /apis/config.openshift.io/v1/oauths/{name}/status GET : read status of the specified OAuth PATCH : partially update status of the specified OAuth PUT : replace status of the specified OAuth 19.2.1. /apis/config.openshift.io/v1/oauths HTTP method DELETE Description delete collection of OAuth Table 19.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind OAuth Table 19.2. HTTP responses HTTP code Reponse body 200 - OK OAuthList schema 401 - Unauthorized Empty HTTP method POST Description create an OAuth Table 19.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 19.4. Body parameters Parameter Type Description body OAuth schema Table 19.5. HTTP responses HTTP code Reponse body 200 - OK OAuth schema 201 - Created OAuth schema 202 - Accepted OAuth schema 401 - Unauthorized Empty 19.2.2. /apis/config.openshift.io/v1/oauths/{name} Table 19.6. Global path parameters Parameter Type Description name string name of the OAuth HTTP method DELETE Description delete an OAuth Table 19.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 19.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified OAuth Table 19.9. HTTP responses HTTP code Reponse body 200 - OK OAuth schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified OAuth Table 19.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 19.11. HTTP responses HTTP code Reponse body 200 - OK OAuth schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified OAuth Table 19.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 19.13. Body parameters Parameter Type Description body OAuth schema Table 19.14. HTTP responses HTTP code Reponse body 200 - OK OAuth schema 201 - Created OAuth schema 401 - Unauthorized Empty 19.2.3. /apis/config.openshift.io/v1/oauths/{name}/status Table 19.15. Global path parameters Parameter Type Description name string name of the OAuth HTTP method GET Description read status of the specified OAuth Table 19.16. HTTP responses HTTP code Reponse body 200 - OK OAuth schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified OAuth Table 19.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 19.18. HTTP responses HTTP code Reponse body 200 - OK OAuth schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified OAuth Table 19.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 19.20. Body parameters Parameter Type Description body OAuth schema Table 19.21. HTTP responses HTTP code Reponse body 200 - OK OAuth schema 201 - Created OAuth schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/config_apis/oauth-config-openshift-io-v1 |
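The schema above is easier to follow next to a concrete object. The following sketch configures the cluster OAuth resource with a single HTPasswd identity provider; the provider name my_htpasswd_provider and the secret name htpass-secret are assumptions for illustration, and the secret must exist in the openshift-config namespace with an htpasswd key:

oc apply -f - <<EOF
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: my_htpasswd_provider
    mappingMethod: claim
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpass-secret
  tokenConfig:
    accessTokenMaxAgeSeconds: 86400
EOF

The identityProviders , mappingMethod , htpasswd.fileData.name , and tokenConfig.accessTokenMaxAgeSeconds fields map directly to the properties described in the .spec sections above.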
8.23. crash | 8.23. crash 8.23.1. RHEA-2013:1565 - crash enhancement update Updated crash packages that add various enhancements are now available for Red Hat Enterprise Linux 6. The crash packages provide a self-contained tool that can be used to investigate live systems and kernel core dumps created from the netdump, diskdump, kdump, and Xen/KVM "virsh dump" facilities from Red Hat Enterprise Linux. Enhancements BZ# 902141 Dump files created by the makedumpfile utility using the snappy compression format are now readable by the crash utility. The snappy format is suitable for crash dump mechanisms that require stable performance in any situation, including enterprise application use. BZ# 902144 With this update, dump files created by the makedumpfile utility using the LZO compression format are now readable by the crash utility. The LZO compression format is fast and stable for randomized data. BZ# 1006622 This update adds support for compressed dump files created by the makedumpfile utility that were generated on systems with physical memory requiring more than 44 bits of address space. BZ# 1017930 This update fixes faulty panic-task backtraces generated by the bt command in KVM guest dump files. The bt command now shows a trace when the guest operating system is panicking. BZ# 1019483 This update fixes the CPU number display on systems with 255 or more CPUs during initialization, in the set command, the ps command, and all commands that display the per-task header consisting of the task address, PID, CPU, and command name. Without the patch, for CPU 255, the sys command displays "NO_PROC_ID", and the other commands show a "-" for the CPU number; for CPU numbers greater than 255, garbage values would be displayed in the CPU number field. Users of crash are advised to upgrade to these updated packages, which add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/crash
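For context, the bt , ps , set , and sys commands named in these enhancements run inside an interactive crash session against a kernel core dump. A minimal sketch, assuming a matching kernel debuginfo package is installed and a vmcore was collected by kdump (both paths below are illustrative examples, not fixed locations):

# crash /usr/lib/debug/lib/modules/2.6.32-431.el6.x86_64/vmlinux \
    /var/crash/127.0.0.1-2013-11-21-10:00:00/vmcore
crash> sys
crash> bt
crash> set

Here sys prints a system summary including the panic message, bt prints the backtrace of the panic task, and set with no arguments shows the per-task header (task address, PID, CPU, and command name) of the current context. The kernel version in the vmlinux path must match the kernel that produced the vmcore.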
Chapter 7. OperatorCondition [operators.coreos.com/v2] | Chapter 7. OperatorCondition [operators.coreos.com/v2] Description OperatorCondition is a Custom Resource of type OperatorCondition which is used to convey information to OLM about the state of an operator. Type object Required metadata 7.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object OperatorConditionSpec allows an operator to report state to OLM and provides cluster admin with the ability to manually override state reported by the operator. status object OperatorConditionStatus allows OLM to convey which conditions have been observed. 7.1.1. .spec Description OperatorConditionSpec allows an operator to report state to OLM and provides cluster admin with the ability to manually override state reported by the operator. Type object Property Type Description conditions array conditions[] object Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } deployments array (string) overrides array overrides[] object Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } serviceAccounts array (string) 7.1.2. .spec.conditions Description Type array 7.1.3. .spec.conditions[] Description Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. 
// Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) 7.1.4. .spec.overrides Description Type array 7.1.5. .spec.overrides[] Description Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } Type object Required message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. 
The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) 7.1.6. .status Description OperatorConditionStatus allows OLM to convey which conditions have been observed. Type object Property Type Description conditions array conditions[] object Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } 7.1.7. .status.conditions Description Type array 7.1.8. .status.conditions[] Description Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) 7.2. 
API endpoints The following API endpoints are available: /apis/operators.coreos.com/v2/operatorconditions GET : list objects of kind OperatorCondition /apis/operators.coreos.com/v2/namespaces/{namespace}/operatorconditions DELETE : delete collection of OperatorCondition GET : list objects of kind OperatorCondition POST : create an OperatorCondition /apis/operators.coreos.com/v2/namespaces/{namespace}/operatorconditions/{name} DELETE : delete an OperatorCondition GET : read the specified OperatorCondition PATCH : partially update the specified OperatorCondition PUT : replace the specified OperatorCondition /apis/operators.coreos.com/v2/namespaces/{namespace}/operatorconditions/{name}/status GET : read status of the specified OperatorCondition PATCH : partially update status of the specified OperatorCondition PUT : replace status of the specified OperatorCondition 7.2.1. /apis/operators.coreos.com/v2/operatorconditions Table 7.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. 
This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind OperatorCondition Table 7.2. HTTP responses HTTP code Reponse body 200 - OK OperatorConditionList schema 401 - Unauthorized Empty 7.2.2. /apis/operators.coreos.com/v2/namespaces/{namespace}/operatorconditions Table 7.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 7.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of OperatorCondition Table 7.5. 
Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 7.6. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind OperatorCondition Table 7.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. 
The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 7.8. HTTP responses HTTP code Reponse body 200 - OK OperatorConditionList schema 401 - Unauthorized Empty HTTP method POST Description create an OperatorCondition Table 7.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.10. Body parameters Parameter Type Description body OperatorCondition schema Table 7.11. HTTP responses HTTP code Reponse body 200 - OK OperatorCondition schema 201 - Created OperatorCondition schema 202 - Accepted OperatorCondition schema 401 - Unauthorized Empty 7.2.3. /apis/operators.coreos.com/v2/namespaces/{namespace}/operatorconditions/{name} Table 7.12. Global path parameters Parameter Type Description name string name of the OperatorCondition namespace string object name and auth scope, such as for teams and projects Table 7.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an OperatorCondition Table 7.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 7.15. Body parameters Parameter Type Description body DeleteOptions schema Table 7.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified OperatorCondition Table 7.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 7.18. HTTP responses HTTP code Reponse body 200 - OK OperatorCondition schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified OperatorCondition Table 7.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 7.20. Body parameters Parameter Type Description body Patch schema Table 7.21. HTTP responses HTTP code Reponse body 200 - OK OperatorCondition schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified OperatorCondition Table 7.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.23. Body parameters Parameter Type Description body OperatorCondition schema Table 7.24. HTTP responses HTTP code Reponse body 200 - OK OperatorCondition schema 201 - Created OperatorCondition schema 401 - Unauthorized Empty 7.2.4. /apis/operators.coreos.com/v2/namespaces/{namespace}/operatorconditions/{name}/status Table 7.25. Global path parameters Parameter Type Description name string name of the OperatorCondition namespace string object name and auth scope, such as for teams and projects Table 7.26. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified OperatorCondition Table 7.27. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 7.28. HTTP responses HTTP code Reponse body 200 - OK OperatorCondition schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified OperatorCondition Table 7.29. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 7.30. Body parameters Parameter Type Description body Patch schema Table 7.31. HTTP responses HTTP code Reponse body 200 - OK OperatorCondition schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified OperatorCondition Table 7.32. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.33. Body parameters Parameter Type Description body OperatorCondition schema Table 7.34. HTTP responses HTTP code Response body 200 - OK OperatorCondition schema 201 - Created OperatorCondition schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/operatorhub_apis/operatorcondition-operators-coreos-com-v2
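As an illustrative sketch that is not part of the generated reference above, the following oc commands show one way to inspect an OperatorCondition and to set a cluster-admin override. The namespace, resource name, and override values are placeholders, and which condition types an override actually influences depends on the operator and on OLM.
# List OperatorCondition resources in the operator's namespace (placeholder namespace)
oc get operatorconditions -n <operator-namespace>
# Dump one resource to see the conditions reported by the operator and observed by OLM
oc get operatorcondition <name> -n <operator-namespace> -o yaml
# Example override: force the Upgradeable condition to True (values are illustrative only)
oc patch operatorcondition <name> -n <operator-namespace> --type merge \
  -p '{"spec":{"overrides":[{"type":"Upgradeable","status":"True","reason":"ApprovedByAdmin","message":"Manually approved"}]}}'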
Chapter 45. Networking | Chapter 45. Networking Cisco usNIC driver Cisco Unified Communication Manager (UCM) servers have an optional feature to provide a Cisco proprietary User Space Network Interface Controller (usNIC), which allows performing Remote Direct Memory Access (RDMA)-like operations for user-space applications. The libusnic_verbs driver, which is available as a Technology Preview, makes it possible to use usNIC devices via standard InfiniBand RDMA programming based on the Verbs API. (BZ#916384) Cisco VIC kernel driver The Cisco VIC Infiniband kernel driver, which is available as a Technology Preview, allows the use of Remote Directory Memory Access (RDMA)-like semantics on proprietary Cisco architectures. (BZ#916382) Trusted Network Connect Trusted Network Connect, available as a Technology Preview, is used with existing network access control (NAC) solutions, such as TLS, 802.1X, or IPsec to integrate endpoint posture assessment; that is, collecting an endpoint's system information (such as operating system configuration settings, installed packages, and others, termed as integrity measurements). Trusted Network Connect is used to verify these measurements against network access policies before allowing the endpoint to access the network. (BZ#755087) SR-IOV functionality in the qlcnic driver Support for Single-Root I/O virtualization (SR-IOV) has been added to the qlcnic driver as a Technology Preview. Support for this functionality will be provided directly by QLogic, and customers are encouraged to provide feedback to QLogic and Red Hat. Other functionality in the qlcnic driver remains fully supported. (BZ#1259547) The flower classifier with off-loading support flower is a Traffic Control (TC) classifier intended to allow users to configure matching on well-known packet fields for various protocols. It is intended to make it easier to configure rules over the u32 classifier for complex filtering and classification tasks. flower also supports the ability to off-load classification and action rules to underlying hardware if the hardware supports it. The flower TC classifier is now provided as a Technology Preview. (BZ#1393375) | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.6_release_notes/technology_previews_networking |
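To make the flower classifier Technology Preview more concrete, the commands below are a minimal sketch of installing a drop rule on an ingress qdisc. The interface name and port are placeholders, and whether a given rule can be off-loaded to hardware depends on the NIC, driver, and kernel in use.
# Attach an ingress qdisc so that classifier rules can be installed (eth0 is a placeholder)
tc qdisc add dev eth0 ingress
# Match IPv4 TCP traffic to destination port 80 with flower and drop it
tc filter add dev eth0 parent ffff: protocol ip flower ip_proto tcp dst_port 80 action drop
# Inspect the installed filters; off-loaded rules are reported by the driver where supported
tc filter show dev eth0 parent ffff: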
Chapter 1. Overview | Chapter 1. Overview AMQ Core Protocol JMS is a Java Message Service (JMS) 2.0 client for use in messaging applications that send and receive Artemis Core Protocol messages. AMQ Core Protocol JMS is part of AMQ Clients, a suite of messaging libraries supporting multiple languages and platforms. For an overview of the clients, see AMQ Clients Overview . For information about this release, see AMQ Clients 2.10 Release Notes . AMQ Core Protocol JMS is based on the JMS implementation from Apache ActiveMQ Artemis . For more information about the JMS API, see the JMS API reference and the JMS tutorial . 1.1. Key features JMS 1.1 and 2.0 compatible SSL/TLS for secure communication Automatic reconnect and failover Distributed transactions (XA) Pure-Java implementation 1.2. Supported standards and protocols AMQ Core Protocol JMS supports the following industry-recognized standards and network protocols: Version 2.0 of the Java Message Service API Versions 1.0, 1.1, 1.2, and 1.3 of the Transport Layer Security (TLS) protocol, the successor to SSL Modern TCP with IPv6 1.3. Supported configurations Refer to Red Hat AMQ 7 Supported Configurations on the Red Hat Customer Portal for current information regarding AMQ Core Protocol JMS supported configurations. 1.4. Terms and concepts This section introduces the core API entities and describes how they operate together. Table 1.1. API terms Entity Description ConnectionFactory An entry point for creating connections. Connection A channel for communication between two peers on a network. It contains sessions. Session A context for producing and consuming messages. It contains message producers and consumers. MessageProducer A channel for sending messages to a destination. It has a target destination. MessageConsumer A channel for receiving messages from a destination. It has a source destination. Destination A named location for messages, either a queue or a topic. Queue A stored sequence of messages. Topic A stored sequence of messages for multicast distribution. Message An application-specific piece of information. AMQ Core Protocol JMS sends and receives messages . Messages are transferred between connected peers using message producers and consumers . Producers and consumers are established over sessions . Sessions are established over connections . Connections are created by connection factories . A sending peer creates a producer to send messages. The producer has a destination that identifies a target queue or topic at the remote peer. A receiving peer creates a consumer to receive messages. Like the producer, the consumer has a destination that identifies a source queue or topic at the remote peer. A destination is either a queue or a topic . In JMS, queues and topics are client-side representations of named broker entities that hold messages. A queue implements point-to-point semantics. Each message is seen by only one consumer, and the message is removed from the queue after it is read. A topic implements publish-subscribe semantics. Each message is seen by multiple consumers, and the message remains available to other consumers after it is read. See the JMS tutorial for more information. 1.5. Document conventions The sudo command In this document, sudo is used for any command that requires root privileges. Exercise caution when using sudo because any changes can affect the entire system. For more information about sudo , see Using the sudo command . 
File paths In this document, all file paths are valid for Linux, UNIX, and similar operating systems (for example, /home/andrea ). On Microsoft Windows, you must use the equivalent Windows paths (for example, C:\Users\andrea ). Variable text This document contains code blocks with variables that you must replace with values specific to your environment. Variable text is enclosed in arrow braces and styled as italic monospace. For example, in the following command, replace <project-dir> with the value for your environment: $ cd <project-dir> | [
"cd <project-dir>"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_core_protocol_jms_client/overview |
7.19. RHEA-2014:1471 - new package: scap-security-guide | 7.19. RHEA-2014:1471 - new package: scap-security-guide A new scap-security-guide package is now available for Red Hat Enterprise Linux 6. The scap-security-guide package provides the SCAP Security Guide (SSG) project's guide for configuring the system from the final system's security point of view. The guidance is specified in the Security Content Automation Protocol (SCAP) format and constitutes a catalog of practical hardening advice, linked to government requirements where applicable. The project bridges the gap between generalized policy requirements and specific implementation guidelines. The Red Hat Enterprise Linux 6 system administrator can use the oscap command-line tool from the openscap-utils package to verify that the system conforms to the provided guideline. For further information, see the scap-security-guide(8) manual page. This enhancement update adds the scap-security-guide package to Red Hat Enterprise Linux 6. (BZ#1066390) All users who require scap-security-guide are advised to install this new package. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/rhea-2014-1471
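As a quick, illustrative example of the workflow described above (not part of the original erratum), the commands below show how the oscap tool can consume SSG content. The content path and profile IDs vary by package version, so check the files installed under /usr/share/xml/scap/ssg/ on your system before running them.
# Show the profiles available in the installed SSG content (the path is version-dependent)
oscap info /usr/share/xml/scap/ssg/content/ssg-rhel6-xccdf.xml
# Evaluate the running system against a chosen profile and write results plus an HTML report
oscap xccdf eval --profile <profile-id> --results /tmp/ssg-results.xml \
    --report /tmp/ssg-report.html /usr/share/xml/scap/ssg/content/ssg-rhel6-xccdf.xml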
Chapter 7. Security | Chapter 7. Security AMQ JMS has a range of security-related configuration options that can be leveraged according to your application's needs. Basic user credentials such as username and password should be passed directly to the ConnectionFactory when creating the Connection within the application. However, if you are using the no-argument factory method, it is also possible to supply user credentials in the connection URI. For more information, see the Section 5.1, "JMS options" section. Another common security consideration is use of SSL/TLS. The client connects to servers over an SSL/TLS transport when the amqps URI scheme is specified in the connection URI , with various options available to configure behavior. For more information, see the Section 5.3, "SSL/TLS options" section. In concert with the earlier items, it may be desirable to restrict the client to allow use of only particular SASL mechanisms from those that may be offered by a server, rather than selecting from all it supports. For more information, see the Section 5.4, "AMQP options" section. Applications calling getObject() on a received ObjectMessage may wish to restrict the types created during deserialization. Note that message bodies composed using the AMQP type system do not use the ObjectInputStream mechanism and therefore do not require this precaution. For more information, see the the section called "Deserialization policy options" section. 7.1. Enabling OpenSSL support SSL/TLS connections can be configured to use a native OpenSSL implementation for improved performance. To use OpenSSL, the transport.useOpenSSL option must be enabled, and an OpenSSL support library must be available on the classpath. To use the system-installed OpenSSL libraries on Red Hat Enterprise Linux, install the openssl and apr RPM packages and add the following dependency to your POM file: Example: Adding native OpenSSL support <dependency> <groupId>io.netty</groupId> <artifactId>netty-tcnative</artifactId> <version>2.0.34.Final-redhat-00001</version> <classifier>linux-x86_64-fedora</classifier> </dependency> A list of OpenSSL library implementations is available from the Netty project. 7.2. Authenticating using Kerberos The client can be configured to authenticate using Kerberos when used with an appropriately configured server. To enable Kerberos, use the following steps. Configure the client to use the GSSAPI mechanism for SASL authentication using the amqp.saslMechanisms URI option. Set the java.security.auth.login.config system property to the path of a JAAS login configuration file containing appropriate configuration for a Kerberos LoginModule . The login configuration file might look like the following example: The precise configuration used will depend on how you wish the credentials to be established for the connection, and the particular LoginModule in use. For details of the Oracle Krb5LoginModule , see the Oracle Krb5LoginModule class reference . For details of the IBM Java 8 Krb5LoginModule , see the IBM Krb5LoginModule class reference . It is possible to configure a LoginModule to establish the credentials to use for the Kerberos process, such as specifying a principal and whether to use an existing ticket cache or keytab. 
If, however, the LoginModule configuration does not provide the means to establish all necessary credentials, it may then request and be passed the username and password values from the client Connection object if they were either supplied when creating the Connection using the ConnectionFactory or previously configured via its URI options. Note that Kerberos is supported only for authentication purposes. Use SSL/TLS connections for encryption. The following connection URI options can be used to influence the Kerberos authentication process. sasl.options.configScope The name of the login configuration entry used to authenticate. The default is amqp-jms-client . sasl.options.protocol The protocol value used during the GSSAPI SASL process. The default is amqp . sasl.options.serverName The serverName value used during the GSSAPI SASL process. The default is the server hostname from the connection URI. Similar to the amqp. and transport. options detailed previously, these options must be specified on a per-host basis or as all-host nested options in a failover URI. | [
"<dependency> <groupId>io.netty</groupId> <artifactId>netty-tcnative</artifactId> <version>2.0.34.Final-redhat-00001</version> <classifier>linux-x86_64-fedora</classifier> </dependency>",
"amqp://myhost:5672?amqp.saslMechanisms=GSSAPI failover:(amqp://myhost:5672?amqp.saslMechanisms=GSSAPI)",
"-Djava.security.auth.login.config=<login-config-file>",
"amqp-jms-client { com.sun.security.auth.module.Krb5LoginModule required useTicketCache=true; };"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_jms_client/security |
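To tie the Kerberos options together, the following sequence is one hedged example of launching a client application with the GSSAPI mechanism enabled. The principal, file paths, and JAR name are placeholders, and credentials can equally be established through a keytab configured in the login module instead of a ticket cache obtained with kinit.
# Obtain a ticket for the client principal (alternatively, configure a keytab in the LoginModule)
kinit amq-client@EXAMPLE.COM
# Point the JVM at the JAAS login configuration and start the application;
# the connection URI should request GSSAPI, for example amqp://myhost:5672?amqp.saslMechanisms=GSSAPI
java -Djava.security.auth.login.config=/etc/amq/login.config -jar my-messaging-app.jar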
Chapter 8. Updated Packages | Chapter 8. Updated Packages 8.1. abrt 8.1.1. RHBA-2013:1586 - abrt, libreport and btparser bug fix and enhancement update Updated abrt, libreport, and btparser packages that fix several bugs and add two enhancements are now available for Red Hat Enterprise Linux 6. ABRT is a tool to help users to detect defects in applications and to create a problem report with all the information needed by a maintainer to fix it. ABRT uses a plug-in system to extend its functionality. The libreport libraries provide an API for reporting different problems in applications to different bug targets like Bugzilla, ftp, and trac. The btparser utility is a backtrace parser and analyzer library, which works with backtraces produced by the GNU Project Debugger. It can parse a text file with a backtrace to a tree of C structures, allowing you to analyze the threads and frames of the backtrace and process them. Bug Fixes BZ# 854668 If the /etc/abrt/abrt.conf file was modified so that the "DumpLocation" and "WatchCrashdumpArchiveDir" variables referred to the same directory, the ABRT utility tried to process the files in that directory as both archives and new problem directories, which led to unpredictable results. With this update, ABRT refuses to start if such misconfiguration is detected. BZ# 896090 While creating a case, the reporter-rhtsupport utility sent the operating system (OS) version value which the RHT customer center server did not accept. Consequently, a new case failed to be created and an error message was returned. With this update, suffixes such as "Beta" in the OS version value are not stripped, the RHT customer center server accepts the version value, and a case is created. BZ# 952773 Prior to this update, the abrt-watch-log and abrt-dump-oops utilities were creating too many new problem directories when a kernel error occurred periodically. As a consequence, the user was flooded with problem reports and the /var partition could overflow. To fix this bug, abrt-dump-oops has been changed to ignore all additional problems for a few minutes after it sees 5 or more of them. As a result, the user is not flooded with problem reports. Enhancements BZ# 952704 The Red Hat Support tool required an API for querying crashes caught by ABRT. With this update, a Python API for ABRT has been provided and it is now possible to use the Python API to query problems caught by ABRT. BZ# 961231 There is a high probability that users who do not use the graphical environment (headless systems) will miss the problems detected by the ABRT utility. When the user installs the abrt-console-notification packages, they now see a warning message in the console regarding new problems detected since the last login. All users of abrt, libreport and btparser are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/ch08
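For readers who want to see the detected problems directly, the following commands are an illustrative sketch using the abrt-cli tool (provided by the abrt-cli package; the subcommand form shown assumes the abrt 2.x series shipped with Red Hat Enterprise Linux 6). The problem-directory path is a placeholder; the actual dump location is governed by DumpLocation in /etc/abrt/abrt.conf.
# List the problems ABRT has detected on this system
abrt-cli list
# Show the details of a single problem directory (the path is an example)
abrt-cli info /var/spool/abrt/<problem-dir>
# Report a problem through the configured reporter plug-ins
abrt-cli report /var/spool/abrt/<problem-dir>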
23.9. NUMA Node Tuning | 23.9. NUMA Node Tuning After NUMA node tuning is done using virsh edit , the following domain XML parameters are affected: <domain> ... <numatune> <memory mode="strict" nodeset="1-4,^3"/> </numatune> ... </domain> Figure 23.11. NUMA node tuning Although all are optional, the components of this section of the domain XML are as follows: Table 23.6. NUMA node tuning elements Element Description <numatune> Provides details of how to tune the performance of a NUMA host physical machine by controlling NUMA policy for domain processes. <memory> Specifies how to allocate memory for the domain processes on a NUMA host physical machine. It contains several optional attributes. The mode attribute can be set to interleave , strict , or preferred . If no value is given it defaults to strict . The nodeset attribute specifies the NUMA nodes, using the same syntax as the cpuset attribute of the <vcpu> element. Attribute placement can be used to indicate the memory placement mode for the domain process. Its value can be either static or auto . If the <nodeset> attribute is specified it defaults to the <placement> of <vcpu> , or static . auto indicates the domain process will only allocate memory from the advisory nodeset returned from querying numad and the value of the nodeset attribute will be ignored if it is specified. If the <placement> attribute in vcpu is set to auto , and the <numatune> attribute is not specified, a default <numatune> with <placement> auto and strict mode will be added implicitly. | [
"<domain> <numatune> <memory mode=\"strict\" nodeset=\"1-4,^3\"/> </numatune> </domain>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-manipulating_the_domain_xml-numa_node_tuning |
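In practice, the same policy can be inspected or applied with virsh as well as by editing the XML. The sketch below uses a placeholder domain name and mirrors the nodeset from the XML example; note that changing the mode of a running guest may be restricted, so the example writes to the persistent configuration only.
# Show the current NUMA memory policy of a guest (domain name is a placeholder)
virsh numatune <domain>
# Apply a strict policy bound to nodes 1-4, excluding node 3, in the persistent configuration
virsh numatune <domain> --mode strict --nodeset 1-4,^3 --config
# Alternatively, edit the persistent XML and adjust the <numatune> element directly
virsh edit <domain>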
Chapter 9. SecurityContextConstraints [security.openshift.io/v1] | Chapter 9. SecurityContextConstraints [security.openshift.io/v1] Description SecurityContextConstraints governs the ability to make requests that affect the SecurityContext that will be applied to a container. For historical reasons SCC was exposed under the core Kubernetes API group. That exposure is deprecated and will be removed in a future release - users should instead use the security.openshift.io group to manage SecurityContextConstraints. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required allowHostDirVolumePlugin allowHostIPC allowHostNetwork allowHostPID allowHostPorts allowPrivilegedContainer readOnlyRootFilesystem 9.1. Specification Property Type Description allowHostDirVolumePlugin boolean AllowHostDirVolumePlugin determines if the policy allow containers to use the HostDir volume plugin allowHostIPC boolean AllowHostIPC determines if the policy allows host ipc in the containers. allowHostNetwork boolean AllowHostNetwork determines if the policy allows the use of HostNetwork in the pod spec. allowHostPID boolean AllowHostPID determines if the policy allows host pid in the containers. allowHostPorts boolean AllowHostPorts determines if the policy allows host ports in the containers. allowPrivilegeEscalation `` AllowPrivilegeEscalation determines if a pod can request to allow privilege escalation. If unspecified, defaults to true. allowPrivilegedContainer boolean AllowPrivilegedContainer determines if a container can request to be run as privileged. allowedCapabilities `` AllowedCapabilities is a list of capabilities that can be requested to add to the container. Capabilities in this field maybe added at the pod author's discretion. You must not list a capability in both AllowedCapabilities and RequiredDropCapabilities. To allow all capabilities you may use '*'. allowedFlexVolumes `` AllowedFlexVolumes is a whitelist of allowed Flexvolumes. Empty or nil indicates that all Flexvolumes may be used. This parameter is effective only when the usage of the Flexvolumes is allowed in the "Volumes" field. allowedUnsafeSysctls `` AllowedUnsafeSysctls is a list of explicitly allowed unsafe sysctls, defaults to none. Each entry is either a plain sysctl name or ends in " " in which case it is considered as a prefix of allowed sysctls. Single * means all unsafe sysctls are allowed. Kubelet has to whitelist all allowed unsafe sysctls explicitly to avoid rejection. Examples: e.g. "foo/ " allows "foo/bar", "foo/baz", etc. e.g. "foo.*" allows "foo.bar", "foo.baz", etc. apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources defaultAddCapabilities `` DefaultAddCapabilities is the default set of capabilities that will be added to the container unless the pod spec specifically drops the capability. You may not list a capabiility in both DefaultAddCapabilities and RequiredDropCapabilities. defaultAllowPrivilegeEscalation `` DefaultAllowPrivilegeEscalation controls the default setting for whether a process can gain more privileges than its parent process. forbiddenSysctls `` ForbiddenSysctls is a list of explicitly forbidden sysctls, defaults to none. 
Each entry is either a plain sysctl name or ends in " " in which case it is considered as a prefix of forbidden sysctls. Single * means all sysctls are forbidden. Examples: e.g. "foo/ " forbids "foo/bar", "foo/baz", etc. e.g. "foo.*" forbids "foo.bar", "foo.baz", etc. fsGroup `` FSGroup is the strategy that will dictate what fs group is used by the SecurityContext. groups `` The groups that have permission to use this security context constraints kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata priority `` Priority influences the sort order of SCCs when evaluating which SCCs to try first for a given pod request based on access in the Users and Groups fields. The higher the int, the higher priority. An unset value is considered a 0 priority. If scores for multiple SCCs are equal they will be sorted from most restrictive to least restrictive. If both priorities and restrictions are equal the SCCs will be sorted by name. readOnlyRootFilesystem boolean ReadOnlyRootFilesystem when set to true will force containers to run with a read only root file system. If the container specifically requests to run with a non-read only root file system the SCC should deny the pod. If set to false the container may run with a read only root file system if it wishes but it will not be forced to. requiredDropCapabilities `` RequiredDropCapabilities are the capabilities that will be dropped from the container. These are required to be dropped and cannot be added. runAsUser `` RunAsUser is the strategy that will dictate what RunAsUser is used in the SecurityContext. seLinuxContext `` SELinuxContext is the strategy that will dictate what labels will be set in the SecurityContext. seccompProfiles `` SeccompProfiles lists the allowed profiles that may be set for the pod or container's seccomp annotations. An unset (nil) or empty value means that no profiles may be specifid by the pod or container. The wildcard '*' may be used to allow all profiles. When used to generate a value for a pod the first non-wildcard profile will be used as the default. supplementalGroups `` SupplementalGroups is the strategy that will dictate what supplemental groups are used by the SecurityContext. users `` The users who have permissions to use this security context constraints volumes `` Volumes is a white list of allowed volume plugins. FSType corresponds directly with the field names of a VolumeSource (azureFile, configMap, emptyDir). To allow all volumes you may use "*". To allow no volumes, set to ["none"]. 9.2. API endpoints The following API endpoints are available: /apis/security.openshift.io/v1/securitycontextconstraints DELETE : delete collection of SecurityContextConstraints GET : list objects of kind SecurityContextConstraints POST : create SecurityContextConstraints /apis/security.openshift.io/v1/watch/securitycontextconstraints GET : watch individual changes to a list of SecurityContextConstraints. deprecated: use the 'watch' parameter with a list operation instead. 
/apis/security.openshift.io/v1/securitycontextconstraints/{name} DELETE : delete SecurityContextConstraints GET : read the specified SecurityContextConstraints PATCH : partially update the specified SecurityContextConstraints PUT : replace the specified SecurityContextConstraints /apis/security.openshift.io/v1/watch/securitycontextconstraints/{name} GET : watch changes to an object of kind SecurityContextConstraints. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 9.2.1. /apis/security.openshift.io/v1/securitycontextconstraints Table 9.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of SecurityContextConstraints Table 9.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 9.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind SecurityContextConstraints Table 9.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. 
limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 9.5. HTTP responses HTTP code Reponse body 200 - OK SecurityContextConstraintsList schema 401 - Unauthorized Empty HTTP method POST Description create SecurityContextConstraints Table 9.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.7. Body parameters Parameter Type Description body SecurityContextConstraints schema Table 9.8. HTTP responses HTTP code Reponse body 200 - OK SecurityContextConstraints schema 201 - Created SecurityContextConstraints schema 202 - Accepted SecurityContextConstraints schema 401 - Unauthorized Empty 9.2.2. /apis/security.openshift.io/v1/watch/securitycontextconstraints Table 9.9. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. 
If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of SecurityContextConstraints. deprecated: use the 'watch' parameter with a list operation instead. Table 9.10. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 9.2.3. /apis/security.openshift.io/v1/securitycontextconstraints/{name} Table 9.11. Global path parameters Parameter Type Description name string name of the SecurityContextConstraints Table 9.12. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete SecurityContextConstraints Table 9.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. 
Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 9.14. Body parameters Parameter Type Description body DeleteOptions schema Table 9.15. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified SecurityContextConstraints Table 9.16. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 9.17. HTTP responses HTTP code Reponse body 200 - OK SecurityContextConstraints schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified SecurityContextConstraints Table 9.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.19. Body parameters Parameter Type Description body Patch schema Table 9.20. HTTP responses HTTP code Reponse body 200 - OK SecurityContextConstraints schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified SecurityContextConstraints Table 9.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. 
The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.22. Body parameters Parameter Type Description body SecurityContextConstraints schema Table 9.23. HTTP responses HTTP code Reponse body 200 - OK SecurityContextConstraints schema 201 - Created SecurityContextConstraints schema 401 - Unauthorized Empty 9.2.4. /apis/security.openshift.io/v1/watch/securitycontextconstraints/{name} Table 9.24. Global path parameters Parameter Type Description name string name of the SecurityContextConstraints Table 9.25. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. 
Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind SecurityContextConstraints. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 9.26. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/security_apis/securitycontextconstraints-security-openshift-io-v1 |
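For orientation, the following is a minimal sketch of a SecurityContextConstraints manifest that could be created through the endpoints described in this chapter. The name, the users and groups grants, and the particular strategy and volume choices are illustrative assumptions rather than values mandated by the API; only the required boolean fields and the overall shape follow the schema above.

    apiVersion: security.openshift.io/v1
    kind: SecurityContextConstraints
    metadata:
      name: example-restricted-scc    # hypothetical name
    allowHostDirVolumePlugin: false
    allowHostIPC: false
    allowHostNetwork: false
    allowHostPID: false
    allowHostPorts: false
    allowPrivilegedContainer: false
    readOnlyRootFilesystem: false
    requiredDropCapabilities:
    - KILL
    - MKNOD
    - SETUID
    - SETGID
    runAsUser:
      type: MustRunAsRange
    seLinuxContext:
      type: MustRunAs
    fsGroup:
      type: MustRunAs
    supplementalGroups:
      type: RunAsAny
    volumes:
    - configMap
    - downwardAPI
    - emptyDir
    - persistentVolumeClaim
    - projected
    - secret
    users: []
    groups:
    - system:authenticated            # example grant only; restrict access in real clusters

Such a manifest would be submitted with a POST to /apis/security.openshift.io/v1/securitycontextconstraints, for example by running oc create -f <file> with cluster-admin permissions.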
Chapter 2. Known issues This section lists known issues with Red Hat CodeReady Workspaces 2.1. Where available, workaround suggestions are provided. 2.1. Workspaces based on java-eap-maven created in CodeReady Workspaces 2.0 fail to start after the update to 2.1.1 on OpenShift 3.11 Users with the cluster-admin role who use OpenShift 3.11 may find that version 2.0 java-eap-maven -based workspaces with TLS and OpenShift OAuth support fail to start after the product server is updated to version 2.1.1. Users are then not permitted to access the old Persistent Volume. To work around this issue, create a new 2.1.1 workspace using the same devfile that was used to create the workspace in the previous version. 2.2. OpenShift Connector plug-in requires manual connecting to the target cluster By default, the OpenShift Connector plug-in logs in to the cluster as inClusterUser , which may not have the manage project permission. This causes an error message to be displayed when a new project is being created using OpenShift Application Explorer: To work around this issue, log out from the local cluster and log in to the OpenShift cluster again using the OpenShift user's credentials. 2.3. Missing debug configuration for the Node JS Config Map devfile workspaces The debug configuration is missing from the debug panel of workspaces created from the Node JS Config Map devfile. 2.4. CodeReady Workspaces fails to shut down after executing crwctl server:stop The dedicated crwctl command crwctl server:stop is unable to shut down the CodeReady Workspaces server and instead fails with a timeout and displays the following error message: To work around the issue, execute crwctl server:stop again. 2.5. The crwctl workspace:inject command does not work in workspaces with OpenShift OAuth support The crwctl workspace:inject command causes an error while locating the workspace Pod in workspaces created with OpenShift OAuth support. To work around the issue, use the oc login command inside the affected container instead. 2.6. OpenShift projects remain present after a namespace deletion In CodeReady Workspaces 2.1.1, when a workspace has been created in a dedicated namespace that is later entirely deleted, the corresponding OpenShift project needs to be removed manually to complete the deletion process. To work around the issue, delete the remaining empty project from the OpenShift console: 2.7. The crwctl server:delete uninstallation command does not remove the OpenShift project After using the crwctl server:delete command, the OpenShift project that used to host the CodeReady Workspaces instance remains. This makes it impossible to install a new CodeReady Workspaces instance into the default namespace, which still exists. To uninstall CodeReady Workspaces completely, manually remove the namespace. To work around the issue: Stop the Red Hat CodeReady Workspaces Server: Obtain the name of the CodeReady Workspaces namespace: Remove CodeReady Workspaces from the cluster: This removes all CodeReady Workspaces installations from the cluster. Delete the checluster object and the codeready-workspaces resource: <openshift_namespace> is the name of the OpenShift project where CodeReady Workspaces is deployed. Delete the OpenShift namespace: 2.8.
Uninstallation command crwctl server:delete might make the OpenShift instance unusable Creation of the CodeReady Workspaces cluster, codeready-workspaces , in a namespace can be affected by a previous use of the crwctl server:delete uninstallation command. To work around the issue: Patch the Custom Resource Definition: 2.9. An embedded application of the "Java EAP Maven" stack tends to fail at launch in the debug mode An embedded application of the Java EAP Maven stack tends to fail in the debug mode. The dialog window with the application URL is already displayed, but the application, according to the terminal output, is still starting. Using the displayed URL leads to an error. 2.10. Entering a workspace fails after restarting it Attempting to restart a workspace and re-enter it fails, and an error message is displayed instead. To work around this issue, restart the workspace again. 2.11. Workspace Cap and Workspace RAM Cap organization restrictions do not work The Workspace Cap and Workspace RAM Cap functions, which control the maximum number of workspaces for an organization and the maximum RAM that organization workspaces can use, currently do not work. 2.12. The terminal tab for a workspace in some cases does not open Devfiles contain a set of predefined commands that can be executed in workspaces started using devfiles from the devfile registry. However, when a command defined by a devfile is executed from the workspace, the terminal, in which the commands normally run, does not open. To work around this issue, do not open the same workspace link in two different browsers. 2.13. Workspaces are not stopped by the idling timeout Due to CodeReady Workspaces and OpenShift OAuth integration, workspaces are not stopped by the idling timeout when the workspace is located in a user's Pod. To work around this issue, disable the stopping of idling workspaces by timeout for workspaces with OpenShift OAuth integration: 2.14. Error highlighting and code completion do not work in a Go devfile To work around this issue, update the Go language server plug-in to the latest version. 2.15. Debugging utility does not work correctly on its first run in the Go devfile workspace In workspaces based on the Go devfile, an error notification is displayed during the start of the Debug current file configuration. To work around this issue, execute the predefined Run current file command first, then repeat the debugging procedure. | [
"Failed to create Project with error 'Error: Command failed: \"/tmp/vscode-unpacked/redhat.vscode-openshift -connector.latest.qvkozqtkba.openshift-connector-0.1.4-523.vsix/extension/out/tools/linux/odo\" project create test-project [✗] projectrequests.project.openshift.io is forbidden",
"› Error: E_SHUTDOWN_CHE_SERVER_FAIL - Failed to shutdown CodeReady Workspaces server. E_CHE_API_NO_RESPONSE - Endpoint: http://codeready-ndp-test.apps.crw.codereadyqe.com/api/system/stop?shutdown=true - Error message: timeout of › 3000ms exceeded",
"crwctl workspace:inject -n codeready-tls-oauth -k ✔ Verify if namespace codeready-tls-oauth exists ✖ Verify if the workspaces is running No workspace pod is found Injecting configurations › Error: No workspace pod is found",
"oc delete project <projectname>",
"crwctl server:stop",
"oc get checluster --all-namespaces -o=jsonpath=\"{.items[*].metadata.namespace}\"",
"crwctl server:delete -n <namespace>",
"oc delete checluster codeready-workspaces --namespace= <openshift_namespace>",
"oc delete project <openshift_namespace>",
"oc patch customresourcedefinition/checlusters.org.eclipse.che -p '{ \"metadata\": { \"finalizers\": null }}' --type merge",
"oc patch checluster/eclipse-che --patch \"{\\\"spec\\\":{\\\"server\\\":{\\\"customCheProperties\\\": {\\\"CHE_LIMITS_WORKSPACE_IDLE_TIMEOUT\\\": \\\"-1\\\"}}}}\" --type=merge -n che"
] | https://docs.redhat.com/en/documentation/red_hat_codeready_workspaces/2.1/html/release_notes_and_known_issues/known-issues |
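As a readable companion to the one-line oc patch workaround for the idle-timeout issue above, the following sketch shows where the CHE_LIMITS_WORKSPACE_IDLE_TIMEOUT property sits inside the CheCluster custom resource. The resource name and namespace (codeready-workspaces, workspaces) are assumptions based on a typical CodeReady Workspaces installation; substitute the values used by your own deployment, which you can check with oc get checluster -A.

    apiVersion: org.eclipse.che/v1
    kind: CheCluster
    metadata:
      name: codeready-workspaces      # assumed CR name; verify with 'oc get checluster -A'
      namespace: workspaces           # assumed installation namespace
    spec:
      server:
        customCheProperties:
          CHE_LIMITS_WORKSPACE_IDLE_TIMEOUT: "-1"   # "-1" disables stopping idle workspaces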
Part II. Deploying Red Hat build of Kogito microservices on Red Hat OpenShift Container Platform | Part II. Deploying Red Hat build of Kogito microservices on Red Hat OpenShift Container Platform As a developer of business decisions and processes, you can deploy Red Hat build of Kogito microservices on Red Hat OpenShift Container Platform for cloud implementation. The RHPAM Kogito Operator automates many of the deployment steps for you or guides you through the deployment process. Prerequisites Red Hat OpenShift Container Platform 4.6 or 4.7 is installed. The OpenShift project for the deployment is created. | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/getting_started_with_red_hat_build_of_kogito_in_red_hat_process_automation_manager/assembly-deploying-kogito-microservices-on-openshift |
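The prerequisites above assume that the target OpenShift project already exists. A minimal sketch of preparing one from the command line is shown below; the project name kogito-demo and the login details are placeholders chosen for illustration, not values required by the RHPAM Kogito Operator.

    oc login --token=<token> --server=<openshift-api-url>    # authenticate against the cluster
    oc new-project kogito-demo                                # create the project that will host the microservices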
Providing feedback on JBoss EAP documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, you will be prompted to create one. Procedure Click the following link to create a ticket . Include the Document URL , the section number, and a description of the issue. Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/7.4.0_release_notes/providing-feedback-on-red-hat-documentation_default
Chapter 1. Power monitoring for Red Hat OpenShift release notes | Chapter 1. Power monitoring for Red Hat OpenShift release notes Important Power monitoring is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Power monitoring for Red Hat OpenShift enables you to monitor the power usage of workloads and identify the most power-consuming namespaces running in an OpenShift Container Platform cluster with key power consumption metrics, such as CPU or DRAM, measured at container level. These release notes track the development of power monitoring for Red Hat OpenShift in the OpenShift Container Platform. For an overview of the Power monitoring Operator, see About power monitoring . 1.1. Power monitoring 0.3 (Technology Preview) This release includes the following version updates: Kepler 0.7.12 Power monitoring Operator 0.15.0 The following advisory is available for power monitoring 0.3: RHEA-2024:9961 1.1.1. Bug fixes Before this update, the Power monitoring Operator dashboard used an invalid Prometheus rule, which caused the panel for OTHER Power Consumption(W) by Pods to display incorrect data. With this update, the rule is corrected, ensuring the dashboard now shows accurate power consumption data. 1.1.2. CVEs CVE-2023-37920 CVE-2024-2236 CVE-2024-2511 CVE-2024-3596 CVE-2024-4603 CVE-2024-4741 CVE-2024-5535 CVE-2024-34397 1.2. Power monitoring 0.2 (Technology Preview) This release includes the following version updates: Kepler 0.7.10 Power monitoring Operator 0.13.0 The following advisory is available for power monitoring 0.2: RHEA-2024:2923 1.2.1. Features With this update, you can enable the Redfish API in Kepler. Kepler uses Redfish to get the power consumed by nodes. With this update, you can install the Power monitoring Operator in the namespace of your choice. With this update, you can access the power monitoring Overview dashboard from the Developer perspective. 1.2.2. CVEs CVE-2022-48554 CVE-2023-2975 CVE-2023-3446 CVE-2023-3817 CVE-2023-5678 CVE-2023-6129 CVE-2023-6237 CVE-2023-7008 CVE-2024-0727 CVE-2024-25062 CVE-2024-28834 CVE-2024-28835 1.3. Power monitoring 0.1 (Technology Preview) This release introduces a Technology Preview version of power monitoring for Red Hat OpenShift. The following advisory is available for power monitoring 0.1: RHEA-2024:0078 1.3.1. Features Deployment and deletion of Kepler Power usage metrics from Intel-based bare-metal deployments Dashboards for plotting power usage | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/power_monitoring/power-monitoring-release-notes-1 |
Chapter 1. Image APIs | Chapter 1. Image APIs 1.1. Image [image.openshift.io/v1] Description Image is an immutable representation of a container image and metadata at a point in time. Images are named by taking a hash of their contents (metadata and content) and any change in format, content, or metadata results in a new name. The images resource is primarily for use by cluster administrators and integrations like the cluster image registry - end users instead access images via the imagestreamtags or imagestreamimages resources. While image metadata is stored in the API, any integration that implements the container image registry API must provide its own storage for the raw manifest data, image config, and layer contents. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.2. ImageSignature [image.openshift.io/v1] Description ImageSignature holds a signature of an image. It allows to verify image identity and possibly other claims as long as the signature is trusted. Based on this information it is possible to restrict runnable images to those matching cluster-wide policy. Mandatory fields should be parsed by clients doing image verification. The others are parsed from signature's content by the server. They serve just an informative purpose. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.3. ImageStreamImage [image.openshift.io/v1] Description ImageStreamImage represents an Image that is retrieved by image name from an ImageStream. User interfaces and regular users can use this resource to access the metadata details of a tagged image in the image stream history for viewing, since Image resources are not directly accessible to end users. A not found error will be returned if no such image is referenced by a tag within the ImageStream. Images are created when spec tags are set on an image stream that represent an image in an external registry, when pushing to the integrated registry, or when tagging an existing image from one image stream to another. The name of an image stream image is in the form "<STREAM>@<DIGEST>", where the digest is the content addressible identifier for the image (sha256:xxxxx... ). You can use ImageStreamImages as the from.kind of an image stream spec tag to reference an image exactly. The only operations supported on the imagestreamimage endpoint are retrieving the image. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.4. ImageStreamImport [image.openshift.io/v1] Description The image stream import resource provides an easy way for a user to find and import container images from other container image registries into the server. Individual images or an entire image repository may be imported, and users may choose to see the results of the import prior to tagging the resulting images into the specified image stream. This API is intended for end-user tools that need to see the metadata of the image prior to import (for instance, to generate an application from it). Clients that know the desired image can continue to create spec.tags directly into their image streams. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.5. 
ImageStreamLayers [image.openshift.io/v1] Description ImageStreamLayers describes information about the layers referenced by images in this image stream. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.6. ImageStreamMapping [image.openshift.io/v1] Description ImageStreamMapping represents a mapping from a single image stream tag to a container image as well as the reference to the container image stream the image came from. This resource is used by privileged integrators to create an image resource and to associate it with an image stream in the status tags field. Creating an ImageStreamMapping will allow any user who can view the image stream to tag or pull that image, so only create mappings where the user has proven they have access to the image contents directly. The only operation supported for this resource is create and the metadata name and namespace should be set to the image stream containing the tag that should be updated. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.7. ImageStream [image.openshift.io/v1] Description An ImageStream stores a mapping of tags to images, metadata overrides that are applied when images are tagged in a stream, and an optional reference to a container image repository on a registry. Users typically update the spec.tags field to point to external images which are imported from container registries using credentials in your namespace with the pull secret type, or to existing image stream tags and images which are immediately accessible for tagging or pulling. The history of images applied to a tag is visible in the status.tags field and any user who can view an image stream is allowed to tag that image into their own image streams. Access to pull images from the integrated registry is granted by having the "get imagestreams/layers" permission on a given image stream. Users may remove a tag by deleting the imagestreamtag resource, which causes both spec and status for that tag to be removed. Image stream history is retained until an administrator runs the prune operation, which removes references that are no longer in use. To preserve a historical image, ensure there is a tag in spec pointing to that image by its digest. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.8. ImageStreamTag [image.openshift.io/v1] Description ImageStreamTag represents an Image that is retrieved by tag name from an ImageStream. Use this resource to interact with the tags and images in an image stream by tag, or to see the image details for a particular tag. The image associated with this resource is the most recently successfully tagged, imported, or pushed image (as described in the image stream status.tags.items list for this tag). If an import is in progress or has failed the image will be shown. Deleting an image stream tag clears both the status and spec fields of an image stream. If no image can be retrieved for a given tag, a not found error will be returned. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.9. ImageTag [image.openshift.io/v1] Description ImageTag represents a single tag within an image stream and includes the spec, the status history, and the currently referenced image (if any) of the provided tag. 
This type replaces the ImageStreamTag by providing a full view of the tag. ImageTags are returned for every spec or status tag present on the image stream. If no tag exists in either form a not found error will be returned by the API. A create operation will succeed if no spec tag has already been defined and the spec field is set. Delete will remove both spec and status elements from the image stream. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.10. SecretList [image.openshift.io/v1] Description SecretList is a list of Secret. Type object | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/image_apis/image-apis |
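As a practical illustration of the naming rules described above, the following sketch shows how these image stream resources are typically addressed with the oc client. The stream name ruby, the target tag myapp:stable, and the digest value are hypothetical examples introduced for illustration only; they are not part of the API definition.

    oc get imagestream ruby                    # the stream and its tag-to-image mapping
    oc get imagestreamtag ruby:latest          # "<STREAM>:<TAG>" - most recent image for the tag
    oc get imagestreamimage ruby@sha256:4d3a646b58685449179a0c61ad4baa19a8df8ba668e0f0704b9ad16f5e16e642   # "<STREAM>@<DIGEST>" - exact image by digest (example digest)
    oc tag ruby:latest myapp:stable            # tag an existing image into another image stream tag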
Chapter 10. Using Red Hat subscriptions in builds | Chapter 10. Using Red Hat subscriptions in builds Use the following sections to run entitled builds on OpenShift Container Platform. 10.1. Creating an image stream tag for the Red Hat Universal Base Image To use Red Hat subscriptions within a build, you create an image stream tag to reference the Universal Base Image (UBI). To make the UBI available in every project in the cluster, you add the image stream tag to the openshift namespace. Otherwise, to make it available in a specific project , you add the image stream tag to that project. The benefit of using image stream tags this way is that doing so grants access to the UBI based on the registry.redhat.io credentials in the install pull secret without exposing the pull secret to other users. This is more convenient than requiring each developer to install pull secrets with registry.redhat.io credentials in each project. Procedure To create an ImageStreamTag in the openshift namespace, so it is available to developers in all projects, enter: USD oc tag --source=docker registry.redhat.io/ubi9/ubi:latest ubi:latest -n openshift Tip You can alternatively apply the following YAML to create an ImageStreamTag in the openshift namespace: apiVersion: image.openshift.io/v1 kind: ImageStream metadata: name: ubi namespace: openshift spec: tags: - from: kind: DockerImage name: registry.redhat.io/ubi9/ubi:latest name: latest referencePolicy: type: Source To create an ImageStreamTag in a single project, enter: USD oc tag --source=docker registry.redhat.io/ubi9/ubi:latest ubi:latest Tip You can alternatively apply the following YAML to create an ImageStreamTag in a single project: apiVersion: image.openshift.io/v1 kind: ImageStream metadata: name: ubi spec: tags: - from: kind: DockerImage name: registry.redhat.io/ubi9/ubi:latest name: latest referencePolicy: type: Source 10.2. Adding subscription entitlements as a build secret Builds that use Red Hat subscriptions to install content must include the entitlement keys as a build secret. Prerequisites You must have access to Red Hat entitlements through your subscription. The entitlement secret is automatically created by the Insights Operator. Tip When you perform an Entitlement Build using Red Hat Enterprise Linux (RHEL) 7, you must have the following instructions in your Dockerfile before you run any yum commands: RUN rm /etc/rhsm-host Procedure Add the etc-pki-entitlement secret as a build volume in the build configuration's Docker strategy: strategy: dockerStrategy: from: kind: ImageStreamTag name: ubi:latest volumes: - name: etc-pki-entitlement mounts: - destinationPath: /etc/pki/entitlement source: type: Secret secret: secretName: etc-pki-entitlement 10.3. Running builds with Subscription Manager 10.3.1. Docker builds using Subscription Manager Docker strategy builds can use the Subscription Manager to install subscription content. Prerequisites The entitlement keys must be added as build strategy volumes. Procedure Use the following as an example Dockerfile to install content with the Subscription Manager: FROM registry.redhat.io/ubi9/ubi:latest RUN dnf search kernel-devel --showduplicates && \ dnf install -y kernel-devel 10.4. Running builds with Red Hat Satellite subscriptions 10.4.1. Adding Red Hat Satellite configurations to builds Builds that use Red Hat Satellite to install content must provide appropriate configurations to obtain content from Satellite repositories. 
Prerequisites You must provide or create a yum -compatible repository configuration file that downloads content from your Satellite instance. Sample repository configuration [test-<name>] name=test-<number> baseurl = https://satellite.../content/dist/rhel/server/7/7Server/x86_64/os enabled=1 gpgcheck=0 sslverify=0 sslclientkey = /etc/pki/entitlement/...-key.pem sslclientcert = /etc/pki/entitlement/....pem Procedure Create a ConfigMap containing the Satellite repository configuration file: USD oc create configmap yum-repos-d --from-file /path/to/satellite.repo Add the Satellite repository configuration and entitlement key as a build volumes: strategy: dockerStrategy: from: kind: ImageStreamTag name: ubi:latest volumes: - name: yum-repos-d mounts: - destinationPath: /etc/yum.repos.d source: type: ConfigMap configMap: name: yum-repos-d - name: etc-pki-entitlement mounts: - destinationPath: /etc/pki/entitlement source: type: Secret secret: secretName: etc-pki-entitlement 10.4.2. Docker builds using Red Hat Satellite subscriptions Docker strategy builds can use Red Hat Satellite repositories to install subscription content. Prerequisites You have added the entitlement keys and Satellite repository configurations as build volumes. Procedure Use the following as an example Dockerfile to install content with Satellite: FROM registry.redhat.io/ubi9/ubi:latest RUN dnf search kernel-devel --showduplicates && \ dnf install -y kernel-devel Additional resources How to use builds with Red Hat Satellite subscriptions and which certificate to use 10.5. Running entitled builds using SharedSecret objects You can configure and perform a build in one namespace that securely uses RHEL entitlements from a Secret object in another namespace. You can still access RHEL entitlements from OpenShift Builds by creating a Secret object with your subscription credentials in the same namespace as your Build object. However, now, in OpenShift Container Platform 4.10 and later, you can access your credentials and certificates from a Secret object in one of the OpenShift Container Platform system namespaces. You run entitled builds with a CSI volume mount of a SharedSecret custom resource (CR) instance that references the Secret object. This procedure relies on the newly introduced Shared Resources CSI Driver feature, which you can use to declare CSI Volume mounts in OpenShift Container Platform Builds. It also relies on the OpenShift Container Platform Insights Operator. Important The Shared Resources CSI Driver and The Build CSI Volumes are both Technology Preview features, which are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The Shared Resources CSI Driver and the Build CSI Volumes features also belong to the TechPreviewNoUpgrade feature set, which is a subset of the current Technology Preview features. You can enable the TechPreviewNoUpgrade feature set on test clusters, where you can fully test them while leaving the features disabled on production clusters. Enabling this feature set cannot be undone and prevents minor version updates. This feature set is not recommended on production clusters. 
See "Enabling Technology Preview features using feature gates" in the following "Additional resources" section. Prerequisites You have enabled the TechPreviewNoUpgrade feature set by using the feature gates. You have a SharedSecret custom resource (CR) instance that references the Secret object where the Insights Operator stores the subscription credentials. You must have permission to perform the following actions: Create build configs and start builds. Discover which SharedSecret CR instances are available by entering the oc get sharedsecrets command and getting a non-empty list back. Determine if the builder service account available to you in your namespace is allowed to use the given SharedSecret CR instance. In other words, you can run oc adm policy who-can use <identifier of specific SharedSecret> to see if the builder service account in your namespace is listed. Note If neither of the last two prerequisites in this list are met, establish, or ask someone to establish, the necessary role-based access control (RBAC) so that you can discover SharedSecret CR instances and enable service accounts to use SharedSecret CR instances. Procedure Grant the builder service account RBAC permissions to use the SharedSecret CR instance by using oc apply with YAML content: Note Currently, kubectl and oc have hard-coded special case logic restricting the use verb to roles centered around pod security. Therefore, you cannot use oc create role ... to create the role needed for consuming SharedSecret CR instances. Example oc apply -f command with YAML Role object definition USD oc apply -f - <<EOF apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: shared-resource-my-share namespace: my-namespace rules: - apiGroups: - sharedresource.openshift.io resources: - sharedsecrets resourceNames: - my-share verbs: - use EOF Create the RoleBinding associated with the role by using the oc command: Example oc create rolebinding command USD oc create rolebinding shared-resource-my-share --role=shared-resource-my-share --serviceaccount=my-namespace:builder Create a BuildConfig object that accesses the RHEL entitlements. Example YAML BuildConfig object definition apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: my-csi-bc namespace: my-csi-app-namespace spec: runPolicy: Serial source: dockerfile: | FROM registry.redhat.io/ubi9/ubi:latest RUN ls -la /etc/pki/entitlement RUN rm /etc/rhsm-host RUN yum repolist --disablerepo=* RUN subscription-manager repos --enable rhocp-4.9-for-rhel-8-x86_64-rpms RUN yum -y update RUN yum install -y openshift-clients.x86_64 strategy: type: Docker dockerStrategy: volumes: - mounts: - destinationPath: "/etc/pki/entitlement" name: my-csi-shared-secret source: csi: driver: csi.sharedresource.openshift.io readOnly: true volumeAttributes: sharedSecret: my-share-bc type: CSI Start a build from the BuildConfig object and follow the logs with the oc command. Example oc start-build command USD oc start-build my-csi-bc -F Example 10.1. Example output from the oc start-build command Note Some sections of the following output have been replaced with ... build.build.openshift.io/my-csi-bc-1 started Caching blobs under "/var/cache/blobs". Pulling image registry.redhat.io/ubi9/ubi:latest ... Trying to pull registry.redhat.io/ubi9/ubi:latest... 
Getting image source signatures Copying blob sha256:5dcbdc60ea6b60326f98e2b49d6ebcb7771df4b70c6297ddf2d7dede6692df6e Copying blob sha256:8671113e1c57d3106acaef2383f9bbfe1c45a26eacb03ec82786a494e15956c3 Copying config sha256:b81e86a2cb9a001916dc4697d7ed4777a60f757f0b8dcc2c4d8df42f2f7edb3a Writing manifest to image destination Storing signatures Adding transient rw bind mount for /run/secrets/rhsm STEP 1/9: FROM registry.redhat.io/ubi9/ubi:latest STEP 2/9: RUN ls -la /etc/pki/entitlement total 360 drwxrwxrwt. 2 root root 80 Feb 3 20:28 . drwxr-xr-x. 10 root root 154 Jan 27 15:53 .. -rw-r--r--. 1 root root 3243 Feb 3 20:28 entitlement-key.pem -rw-r--r--. 1 root root 362540 Feb 3 20:28 entitlement.pem time="2022-02-03T20:28:32Z" level=warning msg="Adding metacopy option, configured globally" --> 1ef7c6d8c1a STEP 3/9: RUN rm /etc/rhsm-host time="2022-02-03T20:28:33Z" level=warning msg="Adding metacopy option, configured globally" --> b1c61f88b39 STEP 4/9: RUN yum repolist --disablerepo=* Updating Subscription Management repositories. ... --> b067f1d63eb STEP 5/9: RUN subscription-manager repos --enable rhocp-4.9-for-rhel-8-x86_64-rpms Repository 'rhocp-4.9-for-rhel-8-x86_64-rpms' is enabled for this system. time="2022-02-03T20:28:40Z" level=warning msg="Adding metacopy option, configured globally" --> 03927607ebd STEP 6/9: RUN yum -y update Updating Subscription Management repositories. ... Upgraded: systemd-239-51.el8_5.3.x86_64 systemd-libs-239-51.el8_5.3.x86_64 systemd-pam-239-51.el8_5.3.x86_64 Installed: diffutils-3.6-6.el8.x86_64 libxkbcommon-0.9.1-1.el8.x86_64 xkeyboard-config-2.28-1.el8.noarch Complete! time="2022-02-03T20:29:05Z" level=warning msg="Adding metacopy option, configured globally" --> db57e92ff63 STEP 7/9: RUN yum install -y openshift-clients.x86_64 Updating Subscription Management repositories. ... Installed: bash-completion-1:2.7-5.el8.noarch libpkgconf-1.4.2-1.el8.x86_64 openshift-clients-4.9.0-202201211735.p0.g3f16530.assembly.stream.el8.x86_64 pkgconf-1.4.2-1.el8.x86_64 pkgconf-m4-1.4.2-1.el8.noarch pkgconf-pkg-config-1.4.2-1.el8.x86_64 Complete! time="2022-02-03T20:29:19Z" level=warning msg="Adding metacopy option, configured globally" --> 609507b059e STEP 8/9: ENV "OPENSHIFT_BUILD_NAME"="my-csi-bc-1" "OPENSHIFT_BUILD_NAMESPACE"="my-csi-app-namespace" --> cab2da3efc4 STEP 9/9: LABEL "io.openshift.build.name"="my-csi-bc-1" "io.openshift.build.namespace"="my-csi-app-namespace" COMMIT temp.builder.openshift.io/my-csi-app-namespace/my-csi-bc-1:edfe12ca --> 821b582320b Successfully tagged temp.builder.openshift.io/my-csi-app-namespace/my-csi-bc-1:edfe12ca 821b582320b41f1d7bab4001395133f86fa9cc99cc0b2b64c5a53f2b6750db91 Build complete, no image push requested 10.6. Additional resources Importing simple content access certificates with Insights Operator Enabling features using feature gates Managing image streams build strategy | [
"oc tag --source=docker registry.redhat.io/ubi9/ubi:latest ubi:latest -n openshift",
"apiVersion: image.openshift.io/v1 kind: ImageStream metadata: name: ubi namespace: openshift spec: tags: - from: kind: DockerImage name: registry.redhat.io/ubi9/ubi:latest name: latest referencePolicy: type: Source",
"oc tag --source=docker registry.redhat.io/ubi9/ubi:latest ubi:latest",
"apiVersion: image.openshift.io/v1 kind: ImageStream metadata: name: ubi spec: tags: - from: kind: DockerImage name: registry.redhat.io/ubi9/ubi:latest name: latest referencePolicy: type: Source",
"RUN rm /etc/rhsm-host",
"strategy: dockerStrategy: from: kind: ImageStreamTag name: ubi:latest volumes: - name: etc-pki-entitlement mounts: - destinationPath: /etc/pki/entitlement source: type: Secret secret: secretName: etc-pki-entitlement",
"FROM registry.redhat.io/ubi9/ubi:latest RUN dnf search kernel-devel --showduplicates && dnf install -y kernel-devel",
"[test-<name>] name=test-<number> baseurl = https://satellite.../content/dist/rhel/server/7/7Server/x86_64/os enabled=1 gpgcheck=0 sslverify=0 sslclientkey = /etc/pki/entitlement/...-key.pem sslclientcert = /etc/pki/entitlement/....pem",
"oc create configmap yum-repos-d --from-file /path/to/satellite.repo",
"strategy: dockerStrategy: from: kind: ImageStreamTag name: ubi:latest volumes: - name: yum-repos-d mounts: - destinationPath: /etc/yum.repos.d source: type: ConfigMap configMap: name: yum-repos-d - name: etc-pki-entitlement mounts: - destinationPath: /etc/pki/entitlement source: type: Secret secret: secretName: etc-pki-entitlement",
"FROM registry.redhat.io/ubi9/ubi:latest RUN dnf search kernel-devel --showduplicates && dnf install -y kernel-devel",
"oc apply -f - <<EOF apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: shared-resource-my-share namespace: my-namespace rules: - apiGroups: - sharedresource.openshift.io resources: - sharedsecrets resourceNames: - my-share verbs: - use EOF",
"oc create rolebinding shared-resource-my-share --role=shared-resource-my-share --serviceaccount=my-namespace:builder",
"apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: my-csi-bc namespace: my-csi-app-namespace spec: runPolicy: Serial source: dockerfile: | FROM registry.redhat.io/ubi9/ubi:latest RUN ls -la /etc/pki/entitlement RUN rm /etc/rhsm-host RUN yum repolist --disablerepo=* RUN subscription-manager repos --enable rhocp-4.9-for-rhel-8-x86_64-rpms RUN yum -y update RUN yum install -y openshift-clients.x86_64 strategy: type: Docker dockerStrategy: volumes: - mounts: - destinationPath: \"/etc/pki/entitlement\" name: my-csi-shared-secret source: csi: driver: csi.sharedresource.openshift.io readOnly: true volumeAttributes: sharedSecret: my-share-bc type: CSI",
"oc start-build my-csi-bc -F",
"build.build.openshift.io/my-csi-bc-1 started Caching blobs under \"/var/cache/blobs\". Pulling image registry.redhat.io/ubi9/ubi:latest Trying to pull registry.redhat.io/ubi9/ubi:latest Getting image source signatures Copying blob sha256:5dcbdc60ea6b60326f98e2b49d6ebcb7771df4b70c6297ddf2d7dede6692df6e Copying blob sha256:8671113e1c57d3106acaef2383f9bbfe1c45a26eacb03ec82786a494e15956c3 Copying config sha256:b81e86a2cb9a001916dc4697d7ed4777a60f757f0b8dcc2c4d8df42f2f7edb3a Writing manifest to image destination Storing signatures Adding transient rw bind mount for /run/secrets/rhsm STEP 1/9: FROM registry.redhat.io/ubi9/ubi:latest STEP 2/9: RUN ls -la /etc/pki/entitlement total 360 drwxrwxrwt. 2 root root 80 Feb 3 20:28 . drwxr-xr-x. 10 root root 154 Jan 27 15:53 .. -rw-r--r--. 1 root root 3243 Feb 3 20:28 entitlement-key.pem -rw-r--r--. 1 root root 362540 Feb 3 20:28 entitlement.pem time=\"2022-02-03T20:28:32Z\" level=warning msg=\"Adding metacopy option, configured globally\" --> 1ef7c6d8c1a STEP 3/9: RUN rm /etc/rhsm-host time=\"2022-02-03T20:28:33Z\" level=warning msg=\"Adding metacopy option, configured globally\" --> b1c61f88b39 STEP 4/9: RUN yum repolist --disablerepo=* Updating Subscription Management repositories. --> b067f1d63eb STEP 5/9: RUN subscription-manager repos --enable rhocp-4.9-for-rhel-8-x86_64-rpms Repository 'rhocp-4.9-for-rhel-8-x86_64-rpms' is enabled for this system. time=\"2022-02-03T20:28:40Z\" level=warning msg=\"Adding metacopy option, configured globally\" --> 03927607ebd STEP 6/9: RUN yum -y update Updating Subscription Management repositories. Upgraded: systemd-239-51.el8_5.3.x86_64 systemd-libs-239-51.el8_5.3.x86_64 systemd-pam-239-51.el8_5.3.x86_64 Installed: diffutils-3.6-6.el8.x86_64 libxkbcommon-0.9.1-1.el8.x86_64 xkeyboard-config-2.28-1.el8.noarch Complete! time=\"2022-02-03T20:29:05Z\" level=warning msg=\"Adding metacopy option, configured globally\" --> db57e92ff63 STEP 7/9: RUN yum install -y openshift-clients.x86_64 Updating Subscription Management repositories. Installed: bash-completion-1:2.7-5.el8.noarch libpkgconf-1.4.2-1.el8.x86_64 openshift-clients-4.9.0-202201211735.p0.g3f16530.assembly.stream.el8.x86_64 pkgconf-1.4.2-1.el8.x86_64 pkgconf-m4-1.4.2-1.el8.noarch pkgconf-pkg-config-1.4.2-1.el8.x86_64 Complete! time=\"2022-02-03T20:29:19Z\" level=warning msg=\"Adding metacopy option, configured globally\" --> 609507b059e STEP 8/9: ENV \"OPENSHIFT_BUILD_NAME\"=\"my-csi-bc-1\" \"OPENSHIFT_BUILD_NAMESPACE\"=\"my-csi-app-namespace\" --> cab2da3efc4 STEP 9/9: LABEL \"io.openshift.build.name\"=\"my-csi-bc-1\" \"io.openshift.build.namespace\"=\"my-csi-app-namespace\" COMMIT temp.builder.openshift.io/my-csi-app-namespace/my-csi-bc-1:edfe12ca --> 821b582320b Successfully tagged temp.builder.openshift.io/my-csi-app-namespace/my-csi-bc-1:edfe12ca 821b582320b41f1d7bab4001395133f86fa9cc99cc0b2b64c5a53f2b6750db91 Build complete, no image push requested"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/builds/running-entitled-builds |
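The build volume examples above reference a Secret named etc-pki-entitlement but do not show how it is created. A minimal sketch, assuming you have already downloaded an entitlement certificate and its key for your subscription, is the following; the /tmp/entitlement directory and the <entitlement-id> placeholder are illustrative, not part of the original procedure:
$ oc create secret generic etc-pki-entitlement --from-file /tmp/entitlement/<entitlement-id>.pem --from-file /tmp/entitlement/<entitlement-id>-key.pem
The secret name must match the secretName referenced by the dockerStrategy volumes shown above.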
7.4 Release Notes | 7.4 Release Notes Red Hat Enterprise Linux 7 Release Notes for Red Hat Enterprise Linux 7.4 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.4_release_notes/index |
Chapter 7. Red Hat Quay sizing and subscriptions | Chapter 7. Red Hat Quay sizing and subscriptions Scalability of Red Hat Quay is one of its key strengths, with a single code base supporting a broad spectrum of deployment sizes, including the following: Proof of Concept deployment on a single development machine Mid-size deployment of approximately 2,000 users that can serve content to dozens of Kubernetes clusters High-end deployment such as Quay.io that can serve thousands of Kubernetes clusters world-wide Since sizing heavily depends on a multitude of factors, such as the number of users, images, concurrent pulls and pushes, there are no standard sizing recommendations. The following are the minimum requirements for systems running Red Hat Quay (per container/pod instance): Quay: minimum 6 GB; recommended 8 GB, 2 or more vCPUs Clair: recommended 2 GB RAM and 2 or more vCPUs Storage: recommended 30 GB NooBaa: minimum 2 GB, 1 vCPU (when objectstorage component is selected by the Operator) Clair database: minimum 5 GB required for security metadata Stateless components of Red Hat Quay can be scaled out, but this will cause a heavier load on stateful backend services. 7.1. Red Hat Quay sample sizings The following table shows approximate sizing for Proof of Concept, mid-size, and high-end deployments. Whether a deployment runs appropriately with the same metrics depends on many factors not shown below. Metric Proof of concept Mid-size High End (Quay.io) No. of Quay containers by default 1 4 15 No. of Quay containers max at scale-out N/A 8 30 No. of Clair containers by default 1 3 10 No. of Clair containers max at scale-out N/A 6 15 No. of mirroring pods (to mirror 100 repositories) 1 5-10 N/A Database sizing 2-4 Cores 6-8 GB RAM 10-20 GB disk 4-8 Cores 6-32 GB RAM 100 GB - 1 TB disk 32 cores 244 GB 1+ TB disk Object storage backend sizing 10-100 GB 1-20 TB 50+ TB up to PB Redis cache sizing 2 Cores 2-4 GB RAM 4 cores 28 GB RAM Underlying node sizing (physical or virtual) 4 Cores 8 GB RAM 4-6 Cores 12-16 GB RAM Quay: 13 cores 56 GB RAM Clair: 2 cores 4 GB RAM For further details on sizing and related recommendations for mirroring, see the section on repository mirroring. The sizing for the Redis cache is only relevant if you use Quay builders; otherwise, it is not significant. 7.2. Red Hat Quay subscription information Red Hat Quay is available with Standard or Premium support, and subscriptions are based on deployments. Note Deployment means an installation of a single Red Hat Quay registry using a shared data backend. With a Red Hat Quay subscription, the following options are available: There is no limit on the number of pods, such as Quay, Clair, Builder, and so on, that you can deploy. Red Hat Quay pods can run in multiple data centers or availability zones. Storage and database backends can be deployed across multiple data centers or availability zones, but only as a single, shared storage backend and single, shared database backend. Red Hat Quay can manage content for an unlimited number of clusters or standalone servers. Clients can access the Red Hat Quay deployment regardless of their physical location. You can deploy Red Hat Quay on OpenShift Container Platform infrastructure nodes to minimize subscription requirements. You can run the Container Security Operator (CSO) and the Quay Bridge Operator (QBO) on your OpenShift Container Platform clusters at no additional cost. Note Red Hat Quay geo-replication requires a subscription for each storage replication. 
The database, however, is shared. For more information about purchasing a Red Hat Quay subscription, see Red Hat Quay . 7.3. Using Red Hat Quay with or without internal registry Red Hat Quay can be used as an external registry in front of multiple OpenShift Container Platform clusters with their internal registries. Red Hat Quay can also be used in place of the internal registry when it comes to automating builds and deployment rollouts. The required coordination of Secrets and ImageStreams is automated by the Quay Bridge Operator, which can be launched from the OperatorHub for OpenShift Container Platform. | null | https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/red_hat_quay_architecture/sizing-intro |
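To check how an existing deployment compares with the sample sizings above, you can inspect the actual CPU and memory use of the Quay, Clair, and database pods and, if needed, scale out the stateless components. A minimal sketch follows; the quay-enterprise namespace and the example-registry-* deployment names are assumptions for a QuayRegistry instance named example-registry, and the Red Hat Quay Operator may reconcile manually scaled deployments back to its managed replica counts:
$ oc adm top pods -n quay-enterprise
$ oc scale deployment/example-registry-quay-app --replicas=8 -n quay-enterprise
$ oc scale deployment/example-registry-clair-app --replicas=6 -n quay-enterprise
The replica counts 8 and 6 correspond to the mid-size scale-out maximums in the table above.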
Configuring Capsules with a load balancer | Configuring Capsules with a load balancer Red Hat Satellite 6.15 Distribute load among Capsules Red Hat Satellite Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/configuring_capsules_with_a_load_balancer/index |