Chapter 12. System Management
Chapter 12. System Management Abstract The system management patterns describe how to monitor, test, and administer a messaging system. 12.1. Detour Detour The Detour pattern from Chapter 3, Introducing Enterprise Integration Patterns allows you to send messages through additional steps if a control condition is met. It can be useful for turning on extra validation, testing, or debugging code when needed. Example In this example we essentially have a route like from("direct:start").to("mock:result") with a conditional detour to the mock:detour endpoint in the middle of the route. Using the Spring XML Extensions, whether the detour is turned on or off is decided by the ControlBean. So, when the detour is on, the message is routed to mock:detour and then mock:result. When the detour is off, the message is routed to mock:result. For full details, check the example source here: camel-core/src/test/java/org/apache/camel/processor/DetourTest.java 12.2. LogEIP Overview Apache Camel provides several ways to perform logging in a route: Using the log DSL command. Using the Log component, which can log the message content. Using the Tracer, which traces message flow. Using a Processor or a Bean endpoint to perform logging in Java. Difference between the log DSL command and the log component The log DSL is much lighter and is meant for logging human-readable messages such as Starting to do ... . It can only log a message based on the Simple language. In contrast, the Log component is a fully featured logging component. The Log component is capable of logging the message itself, and you have many URI options to control the logging. Java DSL example Since Apache Camel 2.2, you can use the log DSL command to construct a log message at run time using the Simple expression language. For example, you can create a log message within a route, as follows: This route constructs a String format message at run time. The log message will be logged at INFO level, using the route ID as the log name. By default, routes are named consecutively: route-1, route-2, and so on. But you can use the DSL command, routeId("myCoolRoute"), to specify a custom route ID. The log DSL also provides variants that enable you to set the logging level and the log name explicitly. For example, to set the logging level explicitly to LoggingLevel.DEBUG, you can invoke the log DSL as follows: The log DSL has overloaded methods to set the logging level and/or name as well. To set the log name to fileRoute, you can invoke the log DSL as follows: XML DSL example In XML DSL, the log DSL is represented by the log element and the log message is specified by setting the message attribute to a Simple expression, as follows: The log element supports the message, loggingLevel and logName attributes. For example: Global Log Name The route ID is used as the default log name. Since Apache Camel 2.17, the log name can be changed by configuring a logname parameter. In the Java DSL, configure the log name as shown in the following example: In XML, configure the log name in the following way: If you have more than one log and you want to have the same log name on all of them, you must add the configuration to each log. 12.3. Wire Tap Wire Tap The wire tap pattern, as shown in Figure 12.1, "Wire Tap Pattern", enables you to route a copy of the message to a separate tap location, while the original message is forwarded to the ultimate destination. Figure 12.1. 
Wire Tap Pattern Streams If you WireTap a stream message body, you should consider enabling Stream Caching to ensure the message body can be re-read. See more details at Stream Caching. WireTap node Apache Camel 2.0 introduces the wireTap node for doing wire taps. The wireTap node copies the original exchange to a tapped exchange, whose exchange pattern is set to InOnly, because the tapped exchange should be propagated in a one-way style. The tapped exchange is processed in a separate thread, so that it can run concurrently with the main route. The wireTap supports two different approaches to tapping an exchange: Tap a copy of the original exchange. Tap a new exchange instance, enabling you to customize the tapped exchange. Note From Camel 2.16, the Wire Tap EIP emits event notifications when you send the exchange to the wire tap destination. Note As of Camel 2.20, the Wire Tap EIP will complete any in-flight wire-tapped exchanges while shutting down. Tap a copy of the original exchange Using the Java DSL: Using Spring XML extensions: Tap and modify a copy of the original exchange Using the Java DSL, Apache Camel supports using either a processor or an expression to modify a copy of the original exchange. Using a processor gives you full power over how the exchange is populated, because you can set properties, headers and so on. The expression approach can only be used to modify the In message body. For example, to modify a copy of the original exchange using the processor approach: And to modify a copy of the original exchange using the expression approach: Using the Spring XML extensions, you can modify a copy of the original exchange using the processor approach, where the processorRef attribute references a Spring bean with the myProcessor ID: And to modify a copy of the original exchange using the expression approach: Tap a new exchange instance You can define a wiretap with a new exchange instance by setting the copy flag to false (the default is true). In this case, an initially empty exchange is created for the wiretap. For example, to create a new exchange instance using the processor approach: Where the second wireTap argument sets the copy flag to false, indicating that the original exchange is not copied and an empty exchange is created instead. To create a new exchange instance using the expression approach: Using the Spring XML extensions, you can indicate that a new exchange is to be created by setting the wireTap element's copy attribute to false. To create a new exchange instance using the processor approach, where the processorRef attribute references a Spring bean with the myProcessor ID, as follows: And to create a new exchange instance using the expression approach: Sending a new Exchange and setting headers in the DSL Available as of Camel 2.8 If you send a new message using the Section 12.3, "Wire Tap", you could previously only set the message body using an expression from Part II, "Routing Expression and Predicate Languages" in the DSL. If you also needed to set new headers, you had to use a processor (Section 1.5, "Processors") for that. From Camel 2.8 onwards, this has been improved so that you can now set headers as well in the DSL. The following example sends a new message that has "Bye World" as the message body, a header with key "id" and the value 123, and a header with key "date" which has the current date as its value. Java DSL XML DSL The XML DSL is slightly different from the Java DSL in how you configure the message body and headers. 
In XML, you use <body> and <setHeader> as shown: Using URIs Wire Tap supports static and dynamic endpoint URIs. Static endpoint URIs are available as of Camel 2.20. The following example shows how to wire tap to a JMS queue where the header ID is a part of the queue name. For more information about dynamic endpoint URIs, see the section called "Dynamic To". Using onPrepare to execute custom logic when preparing messages Available as of Camel 2.8 For details, see Section 8.13, "Multicast". Options The wireTap DSL command supports the following options: Name Default Value Description uri The endpoint URI to which the wire-tapped message is sent. You should use either uri or ref. ref Refers to the endpoint to which the wire-tapped message is sent. You should use either uri or ref. executorServiceRef Refers to a custom Section 2.8, "Threading Model" to be used when processing the wire-tapped messages. If not set, Camel uses a default thread pool. processorRef Refers to a custom Section 1.5, "Processors" to be used for creating a new message (for example, the send-a-new-message mode). See below. copy true Camel 2.3: Whether a copy of the exchange (the section called "Exchanges") should be used when wire tapping the message. onPrepareRef Camel 2.8: Refers to a custom Section 1.5, "Processors" to prepare the copy of the exchange (the section called "Exchanges") to be wire tapped. This allows you to perform any custom logic, such as deep-cloning the message payload if needed.
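The onPrepare option is described above but not demonstrated in this excerpt. The following is a minimal Java DSL sketch, assuming a hypothetical DeepClonePreparer processor (not part of Camel) that prepares the tapped copy before it is sent to the tap destination:

```java
import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.apache.camel.builder.RouteBuilder;

public class WireTapOnPrepareRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:start")
            // onPrepare runs against the tapped copy only, so the main route
            // continues with the original, unmodified exchange
            .wireTap("direct:tap").onPrepare(new DeepClonePreparer())
            .to("mock:result");

        from("direct:tap").to("mock:tap");
    }

    // Hypothetical preparer: a real implementation would deep-clone the payload here
    private static class DeepClonePreparer implements Processor {
        public void process(Exchange exchange) throws Exception {
            exchange.getIn().setBody(exchange.getIn().getBody());
        }
    }
}
```

In the XML DSL, the same behavior is configured with the onPrepareRef attribute listed in the options table above.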
[ "from(\"direct:start\").choice() .when().method(\"controlBean\", \"isDetour\").to(\"mock:detour\").end() .to(\"mock:result\");", "<route> <from uri=\"direct:start\"/> <choice> <when> <method bean=\"controlBean\" method=\"isDetour\"/> <to uri=\"mock:detour\"/> </when> </choice> <to uri=\"mock:result\"/> </split> </route>", "from(\"direct:start\").log(\"Processing USD{id}\").to(\"bean:foo\");", "from(\"direct:start\").log(LoggingLevel.DEBUG, \"Processing USD{id}\").to(\"bean:foo\");", "from(\"file://target/files\").log(LoggingLevel.DEBUG, \"fileRoute\", \"Processing file USD{file:name}\").to(\"bean:foo\");", "<route id=\"foo\"> <from uri=\"direct:foo\"/> <log message=\"Got USD{body}\"/> <to uri=\"mock:foo\"/> </route>", "<route id=\"baz\"> <from uri=\"direct:baz\"/> <log message=\"Me Got USD{body}\" loggingLevel=\"FATAL\" logName=\"cool\"/> <to uri=\"mock:baz\"/> </route>", "CamelContext context = context.getProperties().put(Exchange.LOG_EIP_NAME, \"com.foo.myapp\");", "<camelContext ...> <properties> <property key=\"CamelLogEipName\" value=\"com.foo.myapp\"/> </properties>", "from(\"direct:start\") .to(\"log:foo\") .wireTap(\"direct:tap\") .to(\"mock:result\");", "<route> <from uri=\"direct:start\"/> <to uri=\"log:foo\"/> <wireTap uri=\"direct:tap\"/> <to uri=\"mock:result\"/> </route>", "from(\"direct:start\") .wireTap(\"direct:foo\", new Processor() { public void process(Exchange exchange) throws Exception { exchange.getIn().setHeader(\"foo\", \"bar\"); } }).to(\"mock:result\"); from(\"direct:foo\").to(\"mock:foo\");", "from(\"direct:start\") .wireTap(\"direct:foo\", constant(\"Bye World\")) .to(\"mock:result\"); from(\"direct:foo\").to(\"mock:foo\");", "<route> <from uri=\"direct:start2\"/> <wireTap uri=\"direct:foo\" processorRef=\"myProcessor\"/> <to uri=\"mock:result\"/> </route>", "<route> <from uri=\"direct:start\"/> <wireTap uri=\"direct:foo\"> <body><constant>Bye World</constant></body> </wireTap> <to uri=\"mock:result\"/> </route>", "from(\"direct:start\") .wireTap(\"direct:foo\", false, new Processor() { public void process(Exchange exchange) throws Exception { exchange.getIn().setBody(\"Bye World\"); exchange.getIn().setHeader(\"foo\", \"bar\"); } }).to(\"mock:result\"); from(\"direct:foo\").to(\"mock:foo\");", "from(\"direct:start\") .wireTap(\"direct:foo\", false, constant(\"Bye World\")) .to(\"mock:result\"); from(\"direct:foo\").to(\"mock:foo\");", "<route> <from uri=\"direct:start2\"/> <wireTap uri=\"direct:foo\" processorRef=\"myProcessor\" copy=\"false\"/> <to uri=\"mock:result\"/> </route>", "<route> <from uri=\"direct:start\"/> <wireTap uri=\"direct:foo\" copy=\"false\"> <body><constant>Bye World</constant></body> </wireTap> <to uri=\"mock:result\"/> </route>", "from(\"direct:start\") // tap a new message and send it to direct:tap // the new message should be Bye World with 2 headers .wireTap(\"direct:tap\") // create the new tap message body and headers .newExchangeBody(constant(\"Bye World\")) .newExchangeHeader(\"id\", constant(123)) .newExchangeHeader(\"date\", simple(\"USD{date:now:yyyyMMdd}\")) .end() // here we continue routing the original messages .to(\"mock:result\"); // this is the tapped route from(\"direct:tap\") .to(\"mock:tap\");", "<route> <from uri=\"direct:start\"/> <!-- tap a new message and send it to direct:tap --> <!-- the new message should be Bye World with 2 headers --> <wireTap uri=\"direct:tap\"> <!-- create the new tap message body and headers --> <body><constant>Bye World</constant></body> <setHeader 
headerName=\"id\"><constant>123</constant></setHeader> <setHeader headerName=\"date\"><simple>USD{date:now:yyyyMMdd}</simple></setHeader> </wireTap> <!-- here we continue routing the original message --> <to uri=\"mock:result\"/> </route>", "from(\"direct:start\") .wireTap(\"jms:queue:backup-USD{header.id}\") .to(\"bean:doSomething\");" ]
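The Log component mentioned in the overview above is not demonstrated in these examples. A minimal Java DSL sketch, assuming the standard Log component URI options level, showBody, and showHeaders:

```java
import org.apache.camel.builder.RouteBuilder;

public class LogComponentRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:start")
            // the Log component logs the message itself; here the body and headers
            // are printed at DEBUG level under the "com.mycompany.order" logger
            .to("log:com.mycompany.order?level=DEBUG&showBody=true&showHeaders=true")
            .to("bean:foo");
    }
}
```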
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_development_guide/sysman
Chapter 2. Configuring provisioning resources
Chapter 2. Configuring provisioning resources 2.1. Provisioning contexts A provisioning context is the combination of an organization and location that you specify for Satellite components. The organization and location that a component belongs to set the ownership and access for that component. Organizations divide Red Hat Satellite components into logical groups based on ownership, purpose, content, security level, and other divisions. You can create and manage multiple organizations through Red Hat Satellite and assign components to each individual organization. This ensures Satellite Server provisions hosts within a certain organization and only uses components that are assigned to that organization. For more information about organizations, see Managing Organizations in Administering Red Hat Satellite . Locations function similarly to organizations. The difference is that locations are based on a physical or geographical setting. Users can nest locations in a hierarchy. For more information about locations, see Managing Locations in Administering Red Hat Satellite . 2.2. Setting the provisioning context When you set a provisioning context, you define which organization and location to use for provisioning hosts. The organization and location menus are located in the menu bar, on the upper left of the Satellite web UI. If you have not selected an organization and location to use, the menu displays Any Organization and Any Location . Procedure Click Any Organization and select the organization. Click Any Location and select the location to use. Each user can set their default provisioning context in their account settings. Click the user name in the upper right of the Satellite web UI and select My account to edit your user account settings. CLI procedure When using the CLI, include either --organization or --organization-label and --location or --location-id as an option. For example: This command outputs hosts allocated to My_Organization and My_Location . 2.3. Creating operating systems An operating system is a collection of resources that define how Satellite Server installs a base operating system on a host. Operating system entries combine previously defined resources, such as installation media, partition tables, provisioning templates, and others. Importing operating systems from Red Hat's CDN creates new entries on the Hosts > Provisioning Setup > Operating Systems page. To import operating systems from Red Hat's CDN, enable the Red Hat repositories of the operating systems and synchronize the repositories to Satellite. For more information, see Enabling Red Hat Repositories and Synchronizing Repositories in Managing content . You can also add custom operating systems using the following procedure. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Hosts > Operating systems and click New Operating system. In the Name field, enter a name to represent the operating system entry. In the Major field, enter the number that corresponds to the major version of the operating system. In the Minor field, enter the number that corresponds to the minor version of the operating system. In the Description field, enter a description of the operating system. From the Family list, select the operating system's family. From the Root Password Hash list, select the encoding method for the root password. From the Architectures list, select the architectures that the operating system uses. 
Click the Partition table tab and select the possible partition tables that apply to this operating system. Optional: If you use non-Red Hat content, click the Installation Media tab and select the installation media that apply to this operating system. For more information, see Adding Installation Media to Satellite . Click the Templates tab and select a PXELinux template , a Provisioning template , and a Finish template for your operating system to use. You can select other templates, for example an iPXE template , if you plan to use iPXE for provisioning. Click Submit to save your operating system entry. CLI procedure Create the operating system using the hammer os create command: 2.4. Updating the details of multiple operating systems Use this procedure to update the details of multiple operating systems. This example shows you how to assign each operating system a partition table called Kickstart default , a configuration template called Kickstart default PXELinux , and a provisioning template called Kickstart Default . Procedure On Satellite Server, run the following Bash script: PARTID=$(hammer --csv partition-table list | grep "Kickstart default," | cut -d, -f1) PXEID=$(hammer --csv template list --per-page=1000 | grep "Kickstart default PXELinux" | cut -d, -f1) SATID=$(hammer --csv template list --per-page=1000 | grep "provision" | grep ",Kickstart default" | cut -d, -f1) for i in $(hammer --no-headers --csv os list | awk -F, {'print $1'}) do hammer partition-table add-operatingsystem --id="${PARTID}" --operatingsystem-id="${i}" hammer template add-operatingsystem --id="${PXEID}" --operatingsystem-id="${i}" hammer os set-default-template --id="${i}" --config-template-id=${PXEID} hammer os add-config-template --id="${i}" --config-template-id=${SATID} hammer os set-default-template --id="${i}" --config-template-id=${SATID} done Display information about the updated operating system to verify that the operating system is updated correctly: 2.5. Creating architectures An architecture in Satellite represents a logical grouping of hosts and operating systems. Architectures are created by Satellite automatically when hosts check in with Puppet. The x86_64 architecture is already preset in Satellite. Use this procedure to create an architecture in Satellite. Supported architectures Only Intel x86_64 architecture is supported for provisioning using PXE, Discovery, and boot disk. For more information, see the Red Hat Knowledgebase solution Supported architectures and provisioning scenarios in Satellite 6 . Procedure In the Satellite web UI, navigate to Hosts > Provisioning Setup > Architectures . Click Create Architecture . In the Name field, enter a name for the architecture. From the Operating Systems list, select an operating system. If none are available, you can create and assign them under Hosts > Provisioning Setup > Operating Systems . Click Submit . CLI procedure Enter the hammer architecture create command to create an architecture. Specify its name and operating systems that include this architecture: 2.6. Creating hardware models Use this procedure to create a hardware model in Satellite so that you can specify which hardware model a host uses. Procedure In the Satellite web UI, navigate to Hosts > Provisioning Setup > Hardware Models . Click Create Model . In the Name field, enter a name for the hardware model. Optionally, in the Hardware Model and Vendor Class fields, you can enter corresponding information for your system. 
In the Info field, enter a description of the hardware model. Click Submit to save your hardware model. CLI procedure Create a hardware model using the hammer model create command. The only required parameter is --name . Optionally, enter the hardware model with the --hardware-model option, a vendor class with the --vendor-class option, and a description with the --info option: 2.7. Using a synchronized Kickstart repository for a host's operating system Satellite contains a set of synchronized Kickstart repositories that you use to install the provisioned host's operating system. For more information about adding repositories, see Syncing Repositories in Managing content . Use this procedure to set up a Kickstart repository. Prerequisites You must enable both the BaseOS and AppStream Kickstart repositories before provisioning. Procedure Add the synchronized Kickstart repository that you want to use to the existing content view, or create a new content view and add the Kickstart repository. For Red Hat Enterprise Linux 8, ensure that you add both Red Hat Enterprise Linux 8 for x86_64 - AppStream Kickstart x86_64 8 and Red Hat Enterprise Linux 8 for x86_64 - BaseOS Kickstart x86_64 8 repositories. If you use a disconnected environment, you must import the Kickstart repositories from a Red Hat Enterprise Linux binary DVD. For more information, see Importing Kickstart Repositories in Managing content . Publish a new version of the content view where the Kickstart repository is added and promote it to a required lifecycle environment. For more information, see Managing content views in Managing content . When you create a host, in the Operating System tab, for Media Selection , select the Synced Content checkbox. To view the Kickstart tree, enter the following command: 2.8. Adding installation media to Satellite Installation media are sources of packages that Satellite Server uses to install a base operating system on a machine from an external repository. You can use this parameter to install third-party content. Red Hat content is delivered through repository syncing instead. You can view installation media by navigating to Hosts > Provisioning Setup > Installation Media . Installation media must be in the format of an operating system installation tree and must be accessible from the machine hosting the installer through an HTTP URL. By default, Satellite includes installation media for some official Linux distributions. Note that some of those installation media are targeted for a specific version of an operating system. For example, CentOS mirror (7.x) must be used for CentOS 7 or earlier, and CentOS mirror (8.x) must be used for CentOS 8 or later. If you want to improve download performance when using installation media to install operating systems on multiple hosts, you must modify the Path of the installation medium to point to the closest mirror or a local copy. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Hosts > Provisioning Setup > Installation Media . Click Create Medium . In the Name field, enter a name to represent the installation media entry. In the Path field, enter the URL that contains the installation tree. You can use the following variables in the path to represent multiple different system architectures and versions: $arch - The system architecture. $version - The operating system version. $major - The operating system major version. $minor - The operating system minor version. 
Example HTTP path: From the Operating system family list, select the distribution or family of the installation medium. For example, CentOS and Fedora are in the Red Hat family. Click the Organizations and Locations tabs to change the provisioning context. Satellite Server adds the installation medium to the set provisioning context. Click Submit to save your installation medium. CLI procedure Create the installation medium using the hammer medium create command: 2.9. Creating partition tables A partition table is a type of template that defines the way Satellite Server configures the disks available on a new host. A partition table uses the same ERB syntax as provisioning templates. Red Hat Satellite contains a set of default partition tables to use, including a Kickstart default . You can also edit partition table entries to configure the preferred partitioning scheme, or create a partition table entry and add it to the operating system entry. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Hosts > Templates > Partition Tables . Click Create Partition Table . In the Name field, enter a name for the partition table. Select the Default checkbox if you want to set the template to automatically associate with new organizations or locations. Select the Snippet checkbox if you want to identify the template as a reusable snippet for other partition tables. From the Operating System Family list, select the distribution or family of the partitioning layout. For example, Red Hat Enterprise Linux, CentOS, and Fedora are in the Red Hat family. In the Template editor field, enter the layout for the disk partition. The format of the layout must match that for the intended operating system. For example, Red Hat Enterprise Linux requires a layout that matches a Kickstart file, such as: For more information, see Section 2.10, "Dynamic partition example" . You can also use the file browser in the template editor to import the layout from a file. In the Audit Comment field, add a summary of changes to the partition layout. Click the Organizations and Locations tabs to add any other provisioning contexts that you want to associate with the partition table. Satellite adds the partition table to the current provisioning context. Click Submit to save your partition table. CLI procedure Create a plain text file, such as ~/My_Partition_Table , that contains the partition layout. The format of the layout must match that for the intended operating system. For example, Red Hat Enterprise Linux requires a layout that matches a Kickstart file, such as: For more information, see Section 2.10, "Dynamic partition example" . Create the partition table using the hammer partition-table create command: 2.10. Dynamic partition example Using an Anaconda kickstart template, the following section instructs Anaconda to erase the whole disk, automatically partition, enlarge one partition to maximum size, and then proceed to the sequence of events in the provisioning process: Dynamic partitioning is executed by the installation program. Therefore, you can write your own rules to specify how you want to partition disks according to runtime information from the node, for example, disk sizes, number of drives, vendor, or manufacturer. If you want to provision servers and use dynamic partitioning, add the following example as a template. 
When the #Dynamic entry is included, the content of the template loads into a %pre shell scriptlet and creates a /tmp/diskpart.cfg that is then included in the Kickstart partitioning section. 2.11. Provisioning templates A provisioning template defines the way Satellite Server installs an operating system on a host. Red Hat Satellite includes many template examples. In the Satellite web UI, navigate to Hosts > Templates > Provisioning Templates to view them. You can create a template or clone a template and edit the clone. For help with templates, navigate to Hosts > Templates > Provisioning Templates > Create Template > Help . Templates supported by Red Hat are indicated by a Red Hat icon. To hide unsupported templates, in the Satellite web UI navigate to Administer > Settings . On the Provisioning tab, set the value of Show unsupported provisioning templates to false and click Submit . You can also filter for supported templates by using the following query: "supported = true". If you clone a supported template, the cloned template will be unsupported. Templates accept the Embedded Ruby (ERB) syntax. For more information, see Template Writing Reference in Managing hosts . You can download provisioning templates. Before you can download the template, you must create a debug certificate. For more information, see Creating an Organization Debug Certificate in Administering Red Hat Satellite . You can synchronize templates between Satellite Server and a Git repository or a local directory. For more information, see Synchronizing Templates Repositories in Managing hosts . To view the history of changes applied to a template, navigate to Hosts > Templates > Provisioning Templates , select one of the templates, and click History . Click Revert to override the content with that version. You can also revert to an earlier change. Click Show Diff to see information about a specific change: The Template Diff tab displays changes in the body of a provisioning template. The Details tab displays changes in the template description. The History tab displays the user who made a change to the template and the date of the change. 2.12. Kinds of provisioning templates There are various kinds of provisioning templates: Provision The main template for the provisioning process. For example, a Kickstart template. For more information about Kickstart syntax and commands, see the following resources: Automated installation workflow in Automatically installing RHEL 9 Automated installation workflow in Automatically installing RHEL 8 Kickstart Syntax Reference in the Red Hat Enterprise Linux 7 Installation Guide PXELinux, PXEGrub, PXEGrub2 PXE-based templates that deploy to the template Capsule associated with a subnet to ensure that the host uses the installer with the correct kernel options. For BIOS provisioning, select the PXELinux template. For UEFI provisioning, select PXEGrub2 . Finish Post-configuration scripts to execute using an SSH connection when the main provisioning process completes. You can use Finish templates only for image-based provisioning in virtual or cloud environments that do not support user_data. Do not confuse an image with a Foreman discovery ISO, which is sometimes called a Foreman discovery image. An image in this context is an install image in a virtualized environment for easy deployment. When a finish script successfully exits with the return code 0 , Red Hat Satellite treats the code as a success and the host exits the build mode. 
Note that there are a few finish scripts with a build mode that uses a callback HTTP call. These scripts are not used for image-based provisioning, but for post-configuration of operating-system installations such as Debian, Ubuntu, and BSD. Red Hat does not support provisioning of operating systems other than Red Hat Enterprise Linux. user_data Post-configuration scripts for providers that accept custom data, also known as seed data. You can use the user_data template to provision virtual machines in cloud or virtualized environments only. This template does not require Satellite to be able to reach the host; the cloud or virtualization platform is responsible for delivering the data to the image. Ensure that the image that you want to provision has the software to read the data installed and set to start during boot. For example, cloud-init , which expects YAML input, or ignition , which expects JSON input. cloud_init Some environments, such as VMware, either do not support custom data or have their own data format that limits what can be done during customization. In this case, you can configure a cloud-init client with the foreman plugin, which attempts to download the template directly from Satellite over HTTP or HTTPS. This technique can be used in any environment, preferably virtualized. Ensure that you meet the following requirements to use the cloud_init template: Ensure that the image that you want to provision has the software to read the data installed and set to start during boot. A provisioned host is able to reach Satellite from the IP address that matches the host's provisioning interface IP. Note that cloud-init does not work behind NAT. Bootdisk Templates for PXE-less boot methods. Kernel Execution (kexec) Kernel execution templates for PXE-less boot methods. Note Kernel Execution is a Technology Preview feature. Technology Preview features are not fully supported under Red Hat Subscription Service Level Agreements (SLAs), may not be functionally complete, and are not intended for production use. However, these features provide early access to upcoming product innovations, enabling customers to test functionality and provide feedback during the development process. Script An arbitrary script not used by default but useful for custom tasks. ZTP Zero Touch Provisioning templates. POAP PowerOn Auto Provisioning templates. iPXE Templates for iPXE or gPXE environments to use instead of PXELinux. 2.13. Creating provisioning templates A provisioning template defines the way Satellite Server installs an operating system on a host. Use this procedure to create a new provisioning template. Procedure In the Satellite web UI, navigate to Hosts > Templates > Provisioning Templates and click Create Template . In the Name field, enter a name for the provisioning template. Fill in the rest of the fields as required. The Help tab provides information about the template syntax and details the available functions, variables, and methods that can be called on different types of objects within the template. CLI procedure Before you create a template with the CLI, create a plain text file that contains the template. This example uses the ~/my-template file. Create the template using the hammer template create command and specify the type with the --type option: 2.14. Cloning provisioning templates A provisioning template defines the way Satellite Server installs an operating system on a host. Use this procedure to clone a template and add your updates to the clone. 
Procedure In the Satellite web UI, navigate to Hosts > Templates > Provisioning Templates . Find the template that you want to use. Click Clone to duplicate the template. In the Name field, enter a name for the provisioning template. Select the Default checkbox to set the template to associate automatically with new organizations or locations. In the Template editor field, enter the body of the provisioning template. You can also use the Template file browser to upload a template file. In the Audit Comment field, enter a summary of changes to the provisioning template for auditing purposes. Click the Type tab and if your template is a snippet, select the Snippet checkbox. A snippet is not a standalone provisioning template, but a part of a provisioning template that can be inserted into other provisioning templates. From the Type list, select the type of the template. For example, Provisioning template . Click the Association tab and from the Applicable Operating Systems list, select the names of the operating systems that you want to associate with the provisioning template. Optionally, click Add combination and select a host group from the Host Group list or an environment from the Environment list to associate the provisioning template with host groups and environments. Click the Organizations and Locations tabs to add any additional contexts to the template. Click Submit to save your provisioning template. 2.15. Creating custom provisioning snippets Custom provisioning snippets allow you to execute custom code during host provisioning. You can run code before and/or after the provisioning process. Prerequisites Depending on your provisioning template, multiple custom snippet hooks exist which you can use to include custom provisioning snippets. Ensure that you check your provisioning template first to verify which custom snippets you can use. Procedure In the Satellite web UI, navigate to Hosts > Templates > Provisioning Templates and click Create Template . In the Name field, enter a name for your custom provisioning snippet. The name must start with the name of a provisioning template that supports including custom provisioning snippets: Append custom pre to the name of a provisioning template to run code before provisioning a host. Append custom post to the name of a provisioning template to run code after provisioning a host. On the Type tab, select Snippet . Click Submit to create your custom provisioning snippet. CLI procedure Before you create a template with the CLI, create a plain text file that contains your custom snippet. Create the template using hammer : 2.16. Custom provisioning snippet example for Red Hat Enterprise Linux You can use Custom Post snippets to call external APIs from within the provisioning template directly after provisioning a host. Kickstart default finish custom post Example for Red Hat Enterprise Linux 2.17. Associating templates with operating systems You can associate templates with operating systems in Satellite. The following example adds a provisioning template to an operating system entry. Procedure In the Satellite web UI, navigate to Hosts > Templates > Provisioning Templates . Select a provisioning template. On the Association tab, select all applicable operating systems. Click Submit to save your changes. CLI procedure Optional: View all templates: Optional: View all operating systems: Associate a template with an operating system: 2.18. 
Creating compute profiles You can use compute profiles to predefine virtual machine hardware details such as CPUs, memory, and storage. To use the CLI instead of the Satellite web UI, see the CLI procedure . A default installation of Red Hat Satellite contains three predefined profiles: 1-Small 2-Medium 3-Large You can apply compute profiles to all supported compute resources: Section 1.2, "Supported cloud providers" Section 1.3, "Supported virtualization infrastructures" Procedure In the Satellite web UI, navigate to Infrastructure > Compute Profiles and click Create Compute Profile . In the Name field, enter a name for the profile. Click Submit . A new window opens with the name of the compute profile. In the new window, click the name of each compute resource and edit the attributes you want to set for this compute profile. CLI procedure Create a new compute profile: Set attributes for the compute profile: Optional: To update the attributes of a compute profile, specify the attributes you want to change. For example, to change the number of CPUs and memory size: Optional: To change the name of the compute profile, use the --new-name attribute: Additional resources For more information about creating compute profiles by using Hammer, enter hammer compute-profile --help . 2.19. Setting a default encrypted root password for hosts If you do not want to set a plain-text default root password for the hosts that you provision, you can use a default encrypted password. The default root password can be inherited by a host group and consequently by hosts in that group. If you change the password and reprovision the hosts in the group that inherits the password, the password will be overwritten on the hosts. Procedure Generate an encrypted password: Copy the password for later use. In the Satellite web UI, navigate to Administer > Settings . On the Settings page, select the Provisioning tab. In the Name column, navigate to Root password , and click Click to edit . Paste the encrypted password, and click Save . 2.20. Using noVNC to access virtual machines You can use your browser to access the VNC console of VMs created by Satellite. Satellite supports using noVNC on the following virtualization platforms: VMware Libvirt Red Hat Virtualization Prerequisites You must have a virtual machine created by Satellite. For existing virtual machines, ensure that the Display type in the Compute Resource settings is VNC . You must import the Katello root CA certificate into your Satellite Server. Adding a security exception in the browser is not enough for using noVNC. For more information, see Installing the Katello Root CA Certificate in Administering Red Hat Satellite . Procedure On your Satellite Server, configure the firewall to allow VNC service on ports 5900 to 5930. In the Satellite web UI, navigate to Infrastructure > Compute Resources and select the name of a compute resource. In the Virtual Machines tab, select the name of your virtual machine. Ensure the machine is powered on and then select Console .
[ "hammer host list --organization \" My_Organization \" --location \" My_Location \"", "hammer os create --architectures \"x86_64\" --description \" My_Operating_System \" --family \"Redhat\" --major 8 --media \"Red Hat\" --minor 8 --name \"Red Hat Enterprise Linux\" --partition-tables \" My_Partition_Table \" --provisioning-templates \" My_Provisioning_Template \"", "PARTID=USD(hammer --csv partition-table list | grep \"Kickstart default,\" | cut -d, -f1) PXEID=USD(hammer --csv template list --per-page=1000 | grep \"Kickstart default PXELinux\" | cut -d, -f1) SATID=USD(hammer --csv template list --per-page=1000 | grep \"provision\" | grep \",Kickstart default\" | cut -d, -f1) for i in USD(hammer --no-headers --csv os list | awk -F, {'print USD1'}) do hammer partition-table add-operatingsystem --id=\"USD{PARTID}\" --operatingsystem-id=\"USD{i}\" hammer template add-operatingsystem --id=\"USD{PXEID}\" --operatingsystem-id=\"USD{i}\" hammer os set-default-template --id=\"USD{i}\" --config-template-id=USD{PXEID} hammer os add-config-template --id=\"USD{i}\" --config-template-id=USD{SATID} hammer os set-default-template --id=\"USD{i}\" --config-template-id=USD{SATID} done", "hammer os info --id 1", "hammer architecture create --name \" My_Architecture \" --operatingsystems \" My_Operating_System \"", "hammer model create --hardware-model \" My_Hardware_Model \" --info \" My_Description \" --name \" My_Hardware_Model_Name \" --vendor-class \" My_Vendor_Class \"", "hammer medium list --organization \" My_Organization \"", "http://download.example.com/centos/USDversion/Server/USDarch/os/", "hammer medium create --locations \" My_Location \" --name \" My_Operating_System \" --organizations \" My_Organization \" --os-family \"Redhat\" --path \"http://download.example.com/centos/USDversion/Server/USDarch/os/\"", "zerombr clearpart --all --initlabel autopart", "zerombr clearpart --all --initlabel autopart", "hammer partition-table create --file \" ~/My_Partition_Table \" --locations \" My_Location \" --name \" My_Partition_Table \" --organizations \" My_Organization \" --os-family \"Redhat\" --snippet false", "zerombr clearpart --all --initlabel autopart <%= host_param('autopart_options') %>", "#Dynamic (do not remove this line) MEMORY=USD((`grep MemTotal: /proc/meminfo | sed 's/^MemTotal: *//'|sed 's/ .*//'` / 1024)) if [ \"USDMEMORY\" -lt 2048 ]; then SWAP_MEMORY=USD((USDMEMORY * 2)) elif [ \"USDMEMORY\" -lt 8192 ]; then SWAP_MEMORY=USDMEMORY elif [ \"USDMEMORY\" -lt 65536 ]; then SWAP_MEMORY=USD((USDMEMORY / 2)) else SWAP_MEMORY=32768 fi cat <<EOF > /tmp/diskpart.cfg zerombr clearpart --all --initlabel part /boot --fstype ext4 --size 200 --asprimary part swap --size \"USDSWAP_MEMORY\" part / --fstype ext4 --size 1024 --grow EOF", "hammer template create --file ~/my-template --locations \" My_Location \" --name \" My_Provisioning_Template \" --organizations \" My_Organization \" --type provision", "hammer template create --file \" /path/to/My_Snippet \" --locations \" My_Location \" --name \" My_Template_Name_custom_pre\" \\ --organizations \"_My_Organization \" --type snippet", "echo \"Calling API to report successful host deployment\" install -y curl ca-certificates curl -X POST -H \"Content-Type: application/json\" -d '{\"name\": \"<%= @host.name %>\", \"operating_system\": \"<%= @host.operatingsystem.name %>\", \"status\": \"provisioned\",}' \"https://api.example.com/\"", "hammer template list", "hammer os list", "hammer template add-operatingsystem --id My_Template_ID --operatingsystem-id 
My_Operating_System_ID", "hammer compute-profile create --name \" My_Compute_Profile \"", "hammer compute-profile values create --compute-attributes \" flavor=m1.small,cpus=2,memory=4GB,cpu_mode=default --compute-resource \" My_Compute_Resource \" --compute-profile \" My_Compute_Profile \" --volume size= 40GB", "hammer compute-profile values update --compute-resource \" My_Compute_Resource \" --compute-profile \" My_Compute_Profile \" --attributes \" cpus=2,memory=4GB \" --interface \" type=network,bridge=br1,index=1 \" --volume \"size= 40GB \"", "hammer compute-profile update --name \" My_Compute_Profile \" --new-name \" My_New_Compute_Profile \"", "python3 -c 'import crypt,getpass;pw=getpass.getpass(); print(crypt.crypt(pw)) if (pw==getpass.getpass(\"Confirm: \")) else exit()'", "firewall-cmd --add-port=5900-5930/tcp firewall-cmd --add-port=5900-5930/tcp --permanent" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/provisioning_hosts/Configuring_Provisioning_Resources_provisioning
Chapter 4. Red Hat build of OpenJDK 8.0.342 release notes
Chapter 4. Red Hat build of OpenJDK 8.0.342 release notes The Red Hat build of OpenJDK 8.0.342 release might include new features. Additionally, this release might enhance, deprecate, or remove features that originated from earlier Red Hat build of OpenJDK 8 releases. Note For all the other changes and security fixes, see OpenJDK 8u342 Released . 4.1. New features and enhancements Review the following release notes to understand new features and feature enhancements that have been included with the Red Hat build of OpenJDK 8.0.342 release: Customizing PKCS12 keystore generation The Red Hat build of OpenJDK 8 release includes a new system property and a new security property that allow you to customize the generation of PKCS #12 keystores. Users can customize algorithms and parameters for key protection, certificate protection, and MacData. You can find information about the properties, including a list of possible values, in the "PKCS12 KeyStore properties" section of the java.security file. Also, the Red Hat build of OpenJDK 8 release adds support for the following SHA-2 based HmacPBE algorithms to the SunJCE provider: HmacPBESHA224 HmacPBESHA256 HmacPBESHA384 HmacPBESHA512 HmacPBESHA512/224 HmacPBESHA512/256 See JDK-8215293 (JDK Bug System). New system property to disable Windows Alternate Data Stream support in java.io.File The Windows implementation of java.io.File allows access to NTFS Alternate Data Streams (ADS) by default. These streams are structured in the format "filename:streamname". The Red Hat build of OpenJDK 8.0.342 release adds a system property that allows you to disable ADS support in java.io.File . To disable ADS support in java.io.File , set the system property jdk.io.File.enableADS to false . Important Disabling ADS support in java.io.File results in stricter path checking that prevents the use of special device files, such as NUL: . See JDK-8285660 (JDK Bug System). Revised on 2024-05-10 09:05:34 UTC
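The release note above names the system property but this excerpt does not include a command showing it in use. A minimal sketch of disabling ADS support on the command line; the application JAR name is a placeholder:

```
# Hypothetical invocation: run an application with NTFS Alternate Data Stream
# support disabled in java.io.File (enables the stricter path checking described above)
java -Djdk.io.File.enableADS=false -jar my-application.jar
```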
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/release_notes_for_red_hat_build_of_openjdk_8.0.342_and_8.0.345/openjdk-80342-features-and-enhancements_openjdk
function::target_set_pid
function::target_set_pid Name function::target_set_pid - Does pid descend from target process? Synopsis Arguments pid The pid of the process to query Description This function returns whether the given process-id is within the "target set", that is, whether it is a descendant of the top-level target process.
[ "target_set_pid(pid:)" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-target-set-pid
Chapter 21. Troubleshooting
Chapter 21. Troubleshooting This chapter covers some of the more common usage problems that are encountered when installing Certificate System. Q: The init script returned an OK status, but my CA instance does not respond. Why? Q: I can't open the pkiconsole and I'm seeing Java exceptions in stdout. Q: I tried to run pkiconsole, and I got Socket exceptions in stdout. Why? Q: I tried to enroll for a certificate, and I got the error "request is not submitted...Subject Name Not Found"? Q: Why are my enrolled certificates not being published? Q: How do I open the pkiconsole utility from a remote host? Q: What do I do when the LDAP server is not responding? Q: The init script returned an OK status, but my CA instance does not respond. Why? A: This should not happen. Usually (but not always), this indicates a listener problem with the CA, but it can have many different causes. Check in the catalina.out , system , and debug log files for the instance to see what errors have occurred. The following lists a couple of common errors. One situation is when there is a PID for the CA, indicating the process is running, but no listeners have been opened for the server. This would return Java invocation class errors in the catalina.out file: This could mean that you have the wrong version of JSS or NSS. The process requires libnss3.so in the path. Check this with the following command: If libnss3.so is not found, try unsetting the LD_LIBRARY_PATH variable and restarting the CA. Q: I can't open the pkiconsole and I'm seeing Java exceptions in stdout. A: This probably means that you have the wrong JRE installed or the wrong JRE set as the default. Run alternatives --config java to see what JRE is selected. Red Hat Certificate System requires OpenJDK 1.8. Q: I tried to run pkiconsole , and I got Socket exceptions in stdout. Why? A: This means that there is a port problem. Either there are incorrect SSL settings for the administrative port (meaning there is bad configuration in the server.xml ) or the wrong port was given to access the admin interface. Port errors will look like the following: Q: I tried to enroll for a certificate, and I got the error "request is not submitted...Subject Name Not Found"? A: This most often occurs with a custom LDAP directory authentication profile and it shows that the directory operation failed. In particular, it failed because it could not construct a working DN. The error will be in the CA's debug log. For example, this profile used a custom attribute ( MYATTRIBUTE ) that the directory didn't recognize: Any custom components (attributes, object classes, and unregistered OIDs) which are used in the subject DN can cause a failure. For most cases, the X.509 attributes defined in RFC 2253 should be used in subject DNs instead of custom attributes. Q: Why are my enrolled certificates not being published? A: This usually indicates that the CA is misconfigured. The main place to look for errors is the debug log, which can indicate where the misconfiguration is. For example, this log shows a problem with the mappers: Check the publishing configuration in the CA's CS.cfg file or in the Publishing tab of the CA console. In this example, the problem was in the mapping parameter, which must point to an existing LDAP suffix: Q: How do I open the pkiconsole utility from a remote host? A: In certain situations, administrators want to open the pkiconsole on the Certificate System server from a remote host. 
For that, administrators can use a Virtual Network Computing (VNC) connection: Set up a VNC server, for example, on the Red Hat Certificate System server. For details about remote desktop access, see the relevant section in the RHEL 8 documentation. Important The pkiconsole utility cannot run on a server with Federal Information Processing Standard (FIPS) mode enabled. If FIPS mode is enabled on your Certificate System server, use a different Red Hat Enterprise Linux host to run the VNC server. Note that this utility will be deprecated. Open the pkiconsole utility in the VNC window. For example: Note VNC viewers are available for different kinds of operating systems. However, Red Hat supports only VNC viewers installed on Red Hat Enterprise Linux from the integrated repositories. Q: What do I do when the LDAP server is not responding? A: If the Red Hat Directory Server instance used for the internal database is not running, a connectivity issue occurred, or a TLS connection failure occurred, then you cannot connect to the subsystem instances which rely on it. The instance debug logs will specifically identify the problem with the LDAP connection. For example, if the LDAP server was not online: After fixing the underlying problem (for example, an unplugged cable, a stopped Red Hat Directory Server instance, or significant packet loss) and ensuring that the TLS connection can be recreated, stop and then start the Certificate System instance in question:
[ "Oct 29, 2010 4:15:44 PM org.apache.coyote.http11.Http11Protocol init INFO: Initializing Coyote HTTP/1.1 on http-9080 java.lang.reflect.InvocationTargetException at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:64) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:615) at org.apache.catalina.startup.Bootstrap.load(Bootstrap.java:243) at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:408) Caused by: java.lang.UnsatisfiedLinkError: jss4", "ldd /usr/lib64/libjss4.so", "unset LD_LIBRARY_PATH pki-server restart instance_name", "NSS Cipher Supported '0xff04' java.io.IOException: SocketException cannot read on socket at org.mozilla.jss.ssl.SSLSocket.read(SSLSocket.java:1006) at org.mozilla.jss.ssl.SSLInputStream.read(SSLInputStream.java:70) at com.netscape.admin.certsrv.misc.HttpInputStream.fill(HttpInputStream.java:303) at com.netscape.admin.certsrv.misc.HttpInputStream.readLine(HttpInputStream.java:224) at com.netscape.admin.certsrv.connection.JSSConnection.readHeader(JSSConnection.java:439) at com.netscape.admin.certsrv.connection.JSSConnection.initReadResponse(JSSConnection.java:430) at com.netscape.admin.certsrv.connection.JSSConnection.sendRequest(JSSConnection.java:344) at com.netscape.admin.certsrv.connection.AdminConnection.processRequest(AdminConnection.java:714) at com.netscape.admin.certsrv.connection.AdminConnection.sendRequest(AdminConnection.java:623) at com.netscape.admin.certsrv.connection.AdminConnection.sendRequest(AdminConnection.java:590) at com.netscape.admin.certsrv.connection.AdminConnection.authType(AdminConnection.java:323) at com.netscape.admin.certsrv.CMSServerInfo.getAuthType(CMSServerInfo.java:113) at com.netscape.admin.certsrv.CMSAdmin.run(CMSAdmin.java:499) at com.netscape.admin.certsrv.CMSAdmin.run(CMSAdmin.java:548) at com.netscape.admin.certsrv.Console.main(Console.java:1655)", "[14/Feb/2011:15:52:25][http-1244-Processor24]: BasicProfile: populate() policy setid =userCertSet [14/Feb/2011:15:52:25][http-1244-Processor24]: AuthTokenSubjectNameDefault: populate start [14/Feb/2011:15:52:25][http-1244-Processor24]: AuthTokenSubjectNameDefault: java.io.IOException: Unknown AVA keyword 'MYATTRIBUTE'. [14/Feb/2011:15:52:25][http-1244-Processor24]: ProfileSubmitServlet: populate Subject Name Not Found [14/Feb/2011:15:52:25][http-1244-Processor24]: CMSServlet: curDate=Mon Feb 14 15:52:25 PST 2011 id=caProfileSubmit time=13", "[31/Jul/2010:11:18:29][Thread-29]: LdapSimpleMap: cert subject dn:UID=me,[email protected],CN=yes [31/Jul/2010:11:18:29][Thread-29]: Error mapping: mapper=com.netscape.cms.publish.mappers.LdapSimpleMap@258fdcd0 error=Cannot find a match in the LDAP server for certificate. netscape.ldap.LDAPException: error result (32); matchedDN = ou=people,c=test; No such object", "ca.publish.mapper.instance.LdapUserCertMap.dnPattern=UID=USDsubj.UID,dc=publish", "pkiconsole https://server.example.com:8443/ca", "[02/Apr/2019:15:55:41][authorityMonitor]: authorityMonitor: failed to get LDAPConnection. Retrying in 1 second. [02/Apr/2019:15:55:42][authorityMonitor]: In LdapBoundConnFactory::getConn() [02/Apr/2019:15:55:42][authorityMonitor]: masterConn is null. 
[02/Apr/2019:15:55:42][authorityMonitor]: makeConnection: errorIfDown true [02/Apr/2019:15:55:42][authorityMonitor]: TCP Keep-Alive: true java.net.ConnectException: Connection refused (Connection refused) at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206) [02/Apr/2019:15:55:42][authorityMonitor]: Can't create master connection in LdapBoundConnFactory::getConn! Could not connect to LDAP server host example911.redhat.com port 389 Error netscape.ldap.LDAPException: Unable to create socket: java.net.ConnectException: Connection refused (Connection refused) (-1)", "systemctl stop pki-tomcatd-nuxwdog@ instance_name .service", "systemctl start pki-tomcatd-nuxwdog@ instance_name .service" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/troubleshooting
8.17. Begin Installation
8.17. Begin Installation When all required sections of the Installation Summary screen have been completed, the admonition at the bottom of the menu screen disappears and the Begin Installation button becomes available. Figure 8.38. Ready to Install Warning Up to this point in the installation process, no lasting changes have been made on your computer. When you click Begin Installation , the installation program will allocate space on your hard drive and start to transfer Red Hat Enterprise Linux into this space. Depending on the partitioning option that you chose, this process might include erasing data that already exists on your computer. To revise any of the choices that you made up to this point, return to the relevant section of the Installation Summary screen. To cancel installation completely, click Quit or switch off your computer. To switch off most computers at this stage, press the power button and hold it down for a few seconds. If you have finished customizing your installation and are certain that you want to proceed, click Begin Installation . After you click Begin Installation , allow the installation process to complete. If the process is interrupted, for example, by you switching off or resetting the computer, or by a power outage, you will probably not be able to use your computer until you restart and complete the Red Hat Enterprise Linux installation process, or install a different operating system.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/sect-write-changes-to-disk-x86
Chapter 76. stack
Chapter 76. stack This chapter describes the commands under the stack command. 76.1. stack abandon Abandon stack and output results. Usage: Table 76.1. Positional Arguments Value Summary <stack> Name or id of stack to abandon Table 76.2. Optional Arguments Value Summary -h, --help Show this help message and exit --output-file <output-file> File to output abandon results Table 76.3. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to json -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 76.4. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 76.5. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 76.6. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 76.2. stack adopt Adopt a stack. Usage: Table 76.7. Positional Arguments Value Summary <stack-name> Name of the stack to adopt Table 76.8. Optional Arguments Value Summary -h, --help Show this help message and exit -e <environment>, --environment <environment> Path to the environment. can be specified multiple times --timeout <timeout> Stack creation timeout in minutes --enable-rollback Enable rollback on create/update failure --parameter <key=value> Parameter values used to create the stack. can be specified multiple times --wait Wait until stack adopt completes --adopt-file <adopt-file> Path to adopt stack data file Table 76.9. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 76.10. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 76.11. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 76.12. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 76.3. stack cancel Cancel current task for a stack. Supported tasks for cancellation: * update * create Usage: Table 76.13. Positional Arguments Value Summary <stack> Stack(s) to cancel (name or id) Table 76.14. Optional Arguments Value Summary -h, --help Show this help message and exit --wait Wait for cancel to complete --no-rollback Cancel without rollback Table 76.15. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 76.16. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 76.17. 
JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 76.18. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 76.4. stack check Check a stack. Usage: Table 76.19. Positional Arguments Value Summary <stack> Stack(s) to check update (name or id) Table 76.20. Optional Arguments Value Summary -h, --help Show this help message and exit --wait Wait for check to complete Table 76.21. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 76.22. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 76.23. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 76.24. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 76.5. stack create Create a stack. Usage: Table 76.25. Positional Arguments Value Summary <stack-name> Name of the stack to create Table 76.26. Optional Arguments Value Summary -h, --help Show this help message and exit -e <environment>, --environment <environment> Path to the environment. can be specified multiple times --timeout <timeout> Stack creating timeout in minutes --pre-create <resource> Name of a resource to set a pre-create hook to. Resources in nested stacks can be set using slash as a separator: ``nested_stack/another/my_resource``. You can use wildcards to match multiple stacks or resources: ``nested_stack/an*/*_resource``. This can be specified multiple times --enable-rollback Enable rollback on create/update failure --parameter <key=value> Parameter values used to create the stack. this can be specified multiple times --parameter-file <key=file> Parameter values from file used to create the stack. This can be specified multiple times. Parameter values would be the content of the file --wait Wait until stack goes to create_complete or CREATE_FAILED --tags <tag1,tag2... > A list of tags to associate with the stack --dry-run Do not actually perform the stack create, but show what would be created -t <template>, --template <template> Path to the template Table 76.27. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 76.28. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 76.29. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 76.30. 
Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 76.6. stack delete Delete stack(s). Usage: Table 76.31. Positional Arguments Value Summary <stack> Stack(s) to delete (name or id) Table 76.32. Optional Arguments Value Summary -h, --help Show this help message and exit -y, --yes Skip yes/no prompt (assume yes) --wait Wait for stack delete to complete 76.7. stack environment show Show a stack's environment. Usage: Table 76.33. Positional Arguments Value Summary <NAME or ID> Name or id of stack to query Table 76.34. Optional Arguments Value Summary -h, --help Show this help message and exit Table 76.35. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to yaml -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 76.36. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 76.37. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 76.38. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 76.8. stack event list List events. Usage: Table 76.39. Positional Arguments Value Summary <stack> Name or id of stack to show events for Table 76.40. Optional Arguments Value Summary -h, --help Show this help message and exit --resource <resource> Name of resource to show events for. note: this cannot be specified with --nested-depth --filter <key=value> Filter parameters to apply on returned events --limit <limit> Limit the number of events returned --marker <id> Only return events that appear after the given id --nested-depth <depth> Depth of nested stacks from which to display events. Note: this cannot be specified with --resource --sort <key>[:<direction>] Sort output by selected keys and directions (asc or desc) (default: asc). Specify multiple times to sort on multiple keys. Sort key can be: "event_time" (default), "resource_name", "links", "logical_resource_id", "resource_status", "resource_status_reason", "physical_resource_id", or "id". You can leave the key empty and specify ":desc" for sorting by reverse time. --follow Print events until process is halted Table 76.41. Output Formatters Value Summary -f {csv,json,log,table,value,yaml}, --format {csv,json,log,table,value,yaml} The output format, defaults to log -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 76.42. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 76.43. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 76.44. 
Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 76.9. stack event show Show event details. Usage: Table 76.45. Positional Arguments Value Summary <stack> Name or id of stack to show events for <resource> Name of the resource event belongs to <event> Id of event to display details for Table 76.46. Optional Arguments Value Summary -h, --help Show this help message and exit Table 76.47. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 76.48. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 76.49. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 76.50. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 76.10. stack export Export stack data json. Usage: Table 76.51. Positional Arguments Value Summary <stack> Name or id of stack to export Table 76.52. Optional Arguments Value Summary -h, --help Show this help message and exit --output-file <output-file> File to output export data Table 76.53. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to json -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 76.54. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 76.55. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 76.56. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 76.11. stack failures list Show information about failed stack resources. Usage: Table 76.57. Positional Arguments Value Summary <stack> Stack to display (name or id) Table 76.58. Optional Arguments Value Summary -h, --help Show this help message and exit --long Show full deployment logs in output 76.12. stack file list Show a stack's files map. Usage: Table 76.59. Positional Arguments Value Summary <NAME or ID> Name or id of stack to query Table 76.60. Optional Arguments Value Summary -h, --help Show this help message and exit Table 76.61. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to yaml -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 76.62. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 76.63. 
Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 76.64. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 76.13. stack hook clear Clear resource hooks on a given stack. Usage: Table 76.65. Positional Arguments Value Summary <stack> Stack to display (name or id) <resource> Resource names with hooks to clear. resources in nested stacks can be set using slash as a separator: ``nested_stack/another/my_resource``. You can use wildcards to match multiple stacks or resources: ``nested_stack/an*/*_resource`` Table 76.66. Optional Arguments Value Summary -h, --help Show this help message and exit --pre-create Clear the pre-create hooks --pre-update Clear the pre-update hooks --pre-delete Clear the pre-delete hooks 76.14. stack hook poll List resources with pending hook for a stack. Usage: Table 76.67. Positional Arguments Value Summary <stack> Stack to display (name or id) Table 76.68. Optional Arguments Value Summary -h, --help Show this help message and exit --nested-depth <nested-depth> Depth of nested stacks from which to display hooks Table 76.69. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 76.70. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 76.71. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 76.72. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 76.15. stack list List stacks. Usage: Table 76.73. Optional Arguments Value Summary -h, --help Show this help message and exit --deleted Include soft-deleted stacks in the stack listing --nested Include nested stacks in the stack listing --hidden Include hidden stacks in the stack listing --property <key=value> Filter properties to apply on returned stacks (repeat to filter on multiple properties) --tags <tag1,tag2... > List of tags to filter by. can be combined with --tag- mode to specify how to filter tags --tag-mode <mode> Method of filtering tags. must be one of "any", "not", or "not-any". If not specified, multiple tags will be combined with the boolean AND expression --limit <limit> The number of stacks returned --marker <id> Only return stacks that appear after the given id --sort <key>[:<direction>] Sort output by selected keys and directions (asc or desc) (default: asc). 
Specify multiple times to sort on multiple properties --all-projects Include all projects (admin only) --short List fewer fields in output --long List additional fields in output, this is implied by --all-projects Table 76.74. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 76.75. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 76.76. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 76.77. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 76.16. stack output list List stack outputs. Usage: Table 76.78. Positional Arguments Value Summary <stack> Name or id of stack to query Table 76.79. Optional Arguments Value Summary -h, --help Show this help message and exit Table 76.80. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 76.81. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 76.82. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 76.83. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 76.17. stack output show Show stack output. Usage: Table 76.84. Positional Arguments Value Summary <stack> Name or id of stack to query <output> Name of an output to display Table 76.85. Optional Arguments Value Summary -h, --help Show this help message and exit --all Display all stack outputs Table 76.86. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 76.87. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 76.88. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 76.89. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. 
Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 76.18. stack resource list List stack resources. Usage: Table 76.90. Positional Arguments Value Summary <stack> Name or id of stack to query Table 76.91. Optional Arguments Value Summary -h, --help Show this help message and exit --long Enable detailed information presented for each resource in resource list -n <nested-depth>, --nested-depth <nested-depth> Depth of nested stacks from which to display resources --filter <key=value> Filter parameters to apply on returned resources based on their name, status, type, action, id and physical_resource_id Table 76.92. Output Formatters Value Summary -f {csv,dot,json,table,value,yaml}, --format {csv,dot,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 76.93. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 76.94. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 76.95. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 76.19. stack resource mark unhealthy Set resource's health. Usage: Table 76.96. Positional Arguments Value Summary <stack> Name or id of stack the resource belongs to <resource> Name of the resource reason Reason for state change Table 76.97. Optional Arguments Value Summary -h, --help Show this help message and exit --reset Set the resource as healthy 76.20. stack resource metadata Show resource metadata Usage: Table 76.98. Positional Arguments Value Summary <stack> Stack to display (name or id) <resource> Name of the resource to show the metadata for Table 76.99. Optional Arguments Value Summary -h, --help Show this help message and exit Table 76.100. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to json -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 76.101. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 76.102. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 76.103. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 76.21. stack resource show Display stack resource. Usage: Table 76.104. Positional Arguments Value Summary <stack> Name or id of stack to query <resource> Name of resource Table 76.105. Optional Arguments Value Summary -h, --help Show this help message and exit --with-attr <attribute> Attribute to show, can be specified multiple times Table 76.106. 
Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 76.107. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 76.108. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 76.109. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 76.22. stack resource signal Signal a resource with optional data. Usage: Table 76.110. Positional Arguments Value Summary <stack> Name or id of stack the resource belongs to <resource> Name of the resoure to signal Table 76.111. Optional Arguments Value Summary -h, --help Show this help message and exit --data <data> Json data to send to the signal handler --data-file <data-file> File containing json data to send to the signal handler 76.23. stack resume Resume a stack. Usage: Table 76.112. Positional Arguments Value Summary <stack> Stack(s) to resume (name or id) Table 76.113. Optional Arguments Value Summary -h, --help Show this help message and exit --wait Wait for resume to complete Table 76.114. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 76.115. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 76.116. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 76.117. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 76.24. stack show Show stack details. Usage: Table 76.118. Positional Arguments Value Summary <stack> Stack to display (name or id) Table 76.119. Optional Arguments Value Summary -h, --help Show this help message and exit --no-resolve-outputs Do not resolve outputs of the stack. Table 76.120. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 76.121. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 76.122. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 76.123. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. 
Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 76.25. stack snapshot create Create stack snapshot. Usage: Table 76.124. Positional Arguments Value Summary <stack> Name or id of stack Table 76.125. Optional Arguments Value Summary -h, --help Show this help message and exit --name <name> Name of snapshot Table 76.126. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 76.127. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 76.128. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 76.129. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 76.26. stack snapshot delete Delete stack snapshot. Usage: Table 76.130. Positional Arguments Value Summary <stack> Name or id of stack <snapshot> Id of stack snapshot Table 76.131. Optional Arguments Value Summary -h, --help Show this help message and exit -y, --yes Skip yes/no prompt (assume yes) 76.27. stack snapshot list List stack snapshots. Usage: Table 76.132. Positional Arguments Value Summary <stack> Name or id of stack containing the snapshots Table 76.133. Optional Arguments Value Summary -h, --help Show this help message and exit Table 76.134. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 76.135. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 76.136. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 76.137. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 76.28. stack snapshot restore Restore stack snapshot Usage: Table 76.138. Positional Arguments Value Summary <stack> Name or id of stack containing the snapshot <snapshot> Id of the snapshot to restore Table 76.139. Optional Arguments Value Summary -h, --help Show this help message and exit 76.29. stack snapshot show Show stack snapshot. Usage: Table 76.140. Positional Arguments Value Summary <stack> Name or id of stack containing the snapshot <snapshot> Id of the snapshot to show Table 76.141. Optional Arguments Value Summary -h, --help Show this help message and exit Table 76.142. 
Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to yaml -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 76.143. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 76.144. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 76.145. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 76.30. stack suspend Suspend a stack. Usage: Table 76.146. Positional Arguments Value Summary <stack> Stack(s) to suspend (name or id) Table 76.147. Optional Arguments Value Summary -h, --help Show this help message and exit --wait Wait for suspend to complete Table 76.148. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 76.149. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 76.150. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 76.151. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 76.31. stack template show Display stack template. Usage: Table 76.152. Positional Arguments Value Summary <stack> Name or id of stack to query Table 76.153. Optional Arguments Value Summary -h, --help Show this help message and exit Table 76.154. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to yaml -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 76.155. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 76.156. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 76.157. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 76.32. stack update Update a stack. Usage: Table 76.158. Positional Arguments Value Summary <stack> Name or id of stack to update Table 76.159. Optional Arguments Value Summary -h, --help Show this help message and exit -t <template>, --template <template> Path to the template -e <environment>, --environment <environment> Path to the environment. 
can be specified multiple times --pre-update <resource> Name of a resource to set a pre-update hook to. Resources in nested stacks can be set using slash as a separator: ``nested_stack/another/my_resource``. You can use wildcards to match multiple stacks or resources: ``nested_stack/an*/*_resource``. This can be specified multiple times --timeout <timeout> Stack update timeout in minutes --rollback <value> Set rollback on update failure. value "enabled" sets rollback to enabled. Value "disabled" sets rollback to disabled. Value "keep" uses the value of existing stack to be updated (default) --dry-run Do not actually perform the stack update, but show what would be changed --show-nested Show nested stacks when performing --dry-run --parameter <key=value> Parameter values used to create the stack. this can be specified multiple times --parameter-file <key=file> Parameter values from file used to create the stack. This can be specified multiple times. Parameter value would be the content of the file --existing Re-use the template, parameters and environment of the current stack. If the template argument is omitted then the existing template is used. If no --environment is specified then the existing environment is used. Parameters specified in --parameter will patch over the existing values in the current stack. Parameters omitted will keep the existing values --clear-parameter <parameter> Remove the parameters from the set of parameters of current stack for the stack-update. The default value in the template will be used. This can be specified multiple times --tags <tag1,tag2... > An updated list of tags to associate with the stack --wait Wait until stack goes to update_complete or UPDATE_FAILED --converge Stack update with observe on reality. Table 76.160. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 76.161. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 76.162. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 76.163. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
[ "openstack stack abandon [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--output-file <output-file>] <stack>", "openstack stack adopt [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [-e <environment>] [--timeout <timeout>] [--enable-rollback] [--parameter <key=value>] [--wait] --adopt-file <adopt-file> <stack-name>", "openstack stack cancel [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--wait] [--no-rollback] <stack> [<stack> ...]", "openstack stack check [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--wait] <stack> [<stack> ...]", "openstack stack create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [-e <environment>] [--timeout <timeout>] [--pre-create <resource>] [--enable-rollback] [--parameter <key=value>] [--parameter-file <key=file>] [--wait] [--tags <tag1,tag2...>] [--dry-run] -t <template> <stack-name>", "openstack stack delete [-h] [-y] [--wait] <stack> [<stack> ...]", "openstack stack environment show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <NAME or ID>", "openstack stack event list [-h] [-f {csv,json,log,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--resource <resource>] [--filter <key=value>] [--limit <limit>] [--marker <id>] [--nested-depth <depth>] [--sort <key>[:<direction>]] [--follow] <stack>", "openstack stack event show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <stack> <resource> <event>", "openstack stack export [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--output-file <output-file>] <stack>", "openstack stack failures list [-h] [--long] <stack>", "openstack stack file list [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <NAME or ID>", "openstack stack hook clear [-h] [--pre-create] [--pre-update] [--pre-delete] <stack> <resource> [<resource> ...]", "openstack stack hook poll [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--nested-depth <nested-depth>] <stack>", "openstack stack list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--deleted] [--nested] [--hidden] [--property <key=value>] [--tags <tag1,tag2...>] [--tag-mode <mode>] [--limit <limit>] [--marker <id>] [--sort <key>[:<direction>]] [--all-projects] [--short] [--long]", "openstack stack output list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] 
[--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] <stack>", "openstack stack output show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--all] <stack> [<output>]", "openstack stack resource list [-h] [-f {csv,dot,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--long] [-n <nested-depth>] [--filter <key=value>] <stack>", "openstack stack resource mark unhealthy [-h] [--reset] <stack> <resource> [reason]", "openstack stack resource metadata [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <stack> <resource>", "openstack stack resource show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--with-attr <attribute>] <stack> <resource>", "openstack stack resource signal [-h] [--data <data>] [--data-file <data-file>] <stack> <resource>", "openstack stack resume [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--wait] <stack> [<stack> ...]", "openstack stack show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--no-resolve-outputs] <stack>", "openstack stack snapshot create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--name <name>] <stack>", "openstack stack snapshot delete [-h] [-y] <stack> <snapshot>", "openstack stack snapshot list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] <stack>", "openstack stack snapshot restore [-h] <stack> <snapshot>", "openstack stack snapshot show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <stack> <snapshot>", "openstack stack suspend [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--wait] <stack> [<stack> ...]", "openstack stack template show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <stack>", "openstack stack update [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [-t <template>] [-e <environment>] [--pre-update <resource>] [--timeout <timeout>] [--rollback <value>] [--dry-run] [--show-nested] [--parameter <key=value>] [--parameter-file <key=file>] [--existing] [--clear-parameter <parameter>] [--tags <tag1,tag2...>] [--wait] [--converge] <stack>" ]
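Taken together, the subcommands documented above cover the usual stack lifecycle. The sketch below is illustrative only; the template path, environment file, stack name, and parameter key/value are hypothetical placeholders rather than values taken from this reference.

# create a stack from a local template and wait until it reaches CREATE_COMPLETE
openstack stack create --wait -t my-template.yaml -e my-env.yaml \
    --parameter flavor=m1.small my-stack

# inspect the stack, its resources, and its outputs
openstack stack show my-stack
openstack stack resource list my-stack
openstack stack output list my-stack

# update in place, reusing the existing template and environment
openstack stack update --existing --wait --parameter flavor=m1.medium my-stack

# delete without an interactive prompt
openstack stack delete --yes --wait my-stack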
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/command_line_interface_reference/stack
Chapter 5. Exporting applications
Chapter 5. Exporting applications As a developer, you can export your application in the ZIP file format. Based on your needs, import the exported application to another project in the same cluster or a different cluster by using the Import YAML option in the +Add view. Exporting your application helps you to reuse your application resources and saves you time. 5.1. Prerequisites You have installed the gitops-primer Operator from the OperatorHub. Note The Export application option is disabled in the Topology view even after installing the gitops-primer Operator. You have created an application in the Topology view to enable Export application. 5.2. Procedure In the Developer perspective, perform one of the following steps: Navigate to the +Add view and click Export application in the Application portability tile. Navigate to the Topology view and click Export application. Click OK in the Export Application dialog box. A notification opens to confirm that the export of resources from your project has started. Optional steps that you might need to perform in the following scenarios: If you have started exporting an incorrect application, click Export application → Cancel Export. If your export is already in progress and you want to start a fresh export, click Export application → Restart Export. If you want to view logs associated with exporting an application, click Export application and the View Logs link. After a successful export, click Download in the dialog box to download application resources in ZIP format onto your machine.
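If you prefer the command line over the Import YAML option, the downloaded ZIP file can also be unpacked and applied with oc. This is a minimal sketch under the assumption that the export contains plain YAML manifests; the archive name and project name are hypothetical placeholders.

# unpack the exported archive and import its resources into another project
unzip export.zip -d app-export
oc new-project my-imported-app   # or switch to an existing project with: oc project <name>
oc apply -f app-export/ --recursive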
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/building_applications/odc-exporting-applications
function::ubacktrace
function::ubacktrace Name function::ubacktrace - Hex backtrace of current user-space task stack. Synopsis Arguments None Description Return a string of hex addresses that are a backtrace of the stack of the current task. Output may be truncated as per maximum string length. Returns an empty string when the current probe point cannot determine a user backtrace. See backtrace for kernel traceback. Note To get (full) backtraces for user-space applications and shared libraries not mentioned in the current script, run stap with -d /path/to/exe-or-so and/or add --ldd to load all needed unwind data.
[ "ubacktrace:string()" ]
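As a concrete illustration of the note above, the one-liner below prints a user-space backtrace each time a function in a program is entered. It is a sketch only: ./myapp and the probed function main are hypothetical placeholders, and -d plus --ldd are passed so that the required unwind data is loaded, as the note describes.

# print a hex user-space backtrace on each call to main() in ./myapp
stap -d ./myapp --ldd -c ./myapp \
    -e 'probe process("./myapp").function("main") { println(ubacktrace()) }'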
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-ubacktrace
Installation configuration
Installation configuration OpenShift Container Platform 4.15 Cluster-wide configuration during installations Red Hat OpenShift Documentation Team
[ "curl https://mirror.openshift.com/pub/openshift-v4/clients/butane/latest/butane --output butane", "curl https://mirror.openshift.com/pub/openshift-v4/clients/butane/latest/butane-aarch64 --output butane", "chmod +x butane", "echo USDPATH", "butane <butane_file>", "variant: openshift version: 4.15.0 metadata: name: 99-worker-custom labels: machineconfiguration.openshift.io/role: worker openshift: kernel_arguments: - loglevel=7 storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony", "butane 99-worker-custom.bu -o ./99-worker-custom.yaml", "oc create -f 99-worker-custom.yaml", "./openshift-install create manifests --dir <installation_directory>", "cat << EOF > 99-openshift-machineconfig-master-kargs.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-openshift-machineconfig-master-kargs spec: kernelArguments: - loglevel=7 EOF", "subscription-manager register", "subscription-manager attach --auto", "yum install podman make git -y", "mkdir kmods; cd kmods", "git clone https://github.com/kmods-via-containers/kmods-via-containers", "cd kmods-via-containers/", "sudo make install", "sudo systemctl daemon-reload", "cd .. ; git clone https://github.com/kmods-via-containers/kvc-simple-kmod", "cd kvc-simple-kmod", "cat simple-kmod.conf", "KMOD_CONTAINER_BUILD_CONTEXT=\"https://github.com/kmods-via-containers/kvc-simple-kmod.git\" KMOD_CONTAINER_BUILD_FILE=Dockerfile.rhel KMOD_SOFTWARE_VERSION=dd1a7d4 KMOD_NAMES=\"simple-kmod simple-procfs-kmod\"", "sudo make install", "sudo kmods-via-containers build simple-kmod USD(uname -r)", "sudo systemctl enable [email protected] --now", "sudo systemctl status [email protected]", "● [email protected] - Kmods Via Containers - simple-kmod Loaded: loaded (/etc/systemd/system/[email protected]; enabled; vendor preset: disabled) Active: active (exited) since Sun 2020-01-12 23:49:49 EST; 5s ago", "lsmod | grep simple_", "simple_procfs_kmod 16384 0 simple_kmod 16384 0", "dmesg | grep 'Hello world'", "[ 6420.761332] Hello world from simple_kmod.", "sudo cat /proc/simple-procfs-kmod", "simple-procfs-kmod number = 0", "sudo spkut 44", "KVC: wrapper simple-kmod for 4.18.0-147.3.1.el8_1.x86_64 Running userspace wrapper using the kernel module container + podman run -i --rm --privileged simple-kmod-dd1a7d4:4.18.0-147.3.1.el8_1.x86_64 spkut 44 simple-procfs-kmod number = 0 simple-procfs-kmod number = 44", "subscription-manager register", "subscription-manager attach --auto", "yum install podman make git -y", "mkdir kmods; cd kmods", "git clone https://github.com/kmods-via-containers/kmods-via-containers", "git clone https://github.com/kmods-via-containers/kvc-simple-kmod", "FAKEROOT=USD(mktemp -d)", "cd kmods-via-containers", "make install DESTDIR=USD{FAKEROOT}/usr/local CONFDIR=USD{FAKEROOT}/etc/", "cd ../kvc-simple-kmod", "make install DESTDIR=USD{FAKEROOT}/usr/local CONFDIR=USD{FAKEROOT}/etc/", "cd .. && rm -rf kmod-tree && cp -Lpr USD{FAKEROOT} kmod-tree", "variant: openshift version: 4.15.0 metadata: name: 99-simple-kmod labels: machineconfiguration.openshift.io/role: worker 1 storage: trees: - local: kmod-tree systemd: units: - name: [email protected] enabled: true", "butane 99-simple-kmod.bu --files-dir . 
-o 99-simple-kmod.yaml", "oc create -f 99-simple-kmod.yaml", "lsmod | grep simple_", "simple_procfs_kmod 16384 0 simple_kmod 16384 0", "variant: openshift version: 4.15.0 metadata: name: worker-storage labels: machineconfiguration.openshift.io/role: worker boot_device: layout: x86_64 1 luks: tpm2: true 2 tang: 3 - url: http://tang1.example.com:7500 thumbprint: jwGN5tRFK-kF6pIX89ssF3khxxX - url: http://tang2.example.com:7500 thumbprint: VCJsvZFjBSIHSldw78rOrq7h2ZF - url: http://tang3.example.com:7500 thumbprint: PLjNyRdGw03zlRoGjQYMahSZGu9 advertisement: \"{\\\"payload\\\": \\\"...\\\", \\\"protected\\\": \\\"...\\\", \\\"signature\\\": \\\"...\\\"}\" 4 threshold: 2 5 openshift: fips: true", "sudo yum install clevis", "clevis-encrypt-tang '{\"url\":\"http://tang1.example.com:7500\"}' < /dev/null > /dev/null 1", "The advertisement contains the following signing keys: PLjNyRdGw03zlRoGjQYMahSZGu9 1", "curl -f http://tang2.example.com:7500/adv > adv.jws && cat adv.jws", "{\"payload\": \"eyJrZXlzIjogW3siYWxnIjogIkV\", \"protected\": \"eyJhbGciOiJFUzUxMiIsImN0eSI\", \"signature\": \"ADLgk7fZdE3Yt4FyYsm0pHiau7Q\"}", "clevis-encrypt-tang '{\"url\":\"http://tang2.example.com:7500\",\"adv\":\"adv.jws\"}' < /dev/null > /dev/null", "./openshift-install create manifests --dir <installation_directory> 1", "variant: openshift version: 4.15.0 metadata: name: worker-storage 1 labels: machineconfiguration.openshift.io/role: worker 2 boot_device: layout: x86_64 3 luks: 4 tpm2: true 5 tang: 6 - url: http://tang1.example.com:7500 7 thumbprint: PLjNyRdGw03zlRoGjQYMahSZGu9 8 - url: http://tang2.example.com:7500 thumbprint: VCJsvZFjBSIHSldw78rOrq7h2ZF advertisement: \"{\"payload\": \"eyJrZXlzIjogW3siYWxnIjogIkV\", \"protected\": \"eyJhbGciOiJFUzUxMiIsImN0eSI\", \"signature\": \"ADLgk7fZdE3Yt4FyYsm0pHiau7Q\"}\" 9 threshold: 1 10 mirror: 11 devices: 12 - /dev/sda - /dev/sdb openshift: fips: true 13", "butane USDHOME/clusterconfig/worker-storage.bu -o <installation_directory>/openshift/99-worker-storage.yaml", "oc debug node/compute-1", "chroot /host", "cryptsetup status root", "/dev/mapper/root is active and is in use. 
type: LUKS2 1 cipher: aes-xts-plain64 2 keysize: 512 bits key location: keyring device: /dev/sda4 3 sector size: 512 offset: 32768 sectors size: 15683456 sectors mode: read/write", "clevis luks list -d /dev/sda4 1", "1: sss '{\"t\":1,\"pins\":{\"tang\":[{\"url\":\"http://tang.example.com:7500\"}]}}' 1", "cat /proc/mdstat", "Personalities : [raid1] md126 : active raid1 sdb3[1] sda3[0] 1 393152 blocks super 1.0 [2/2] [UU] md127 : active raid1 sda4[0] sdb4[1] 2 51869632 blocks super 1.2 [2/2] [UU] unused devices: <none>", "mdadm --detail /dev/md126", "/dev/md126: Version : 1.0 Creation Time : Wed Jul 7 11:07:36 2021 Raid Level : raid1 1 Array Size : 393152 (383.94 MiB 402.59 MB) Used Dev Size : 393152 (383.94 MiB 402.59 MB) Raid Devices : 2 Total Devices : 2 Persistence : Superblock is persistent Update Time : Wed Jul 7 11:18:24 2021 State : clean 2 Active Devices : 2 3 Working Devices : 2 4 Failed Devices : 0 5 Spare Devices : 0 Consistency Policy : resync Name : any:md-boot 6 UUID : ccfa3801:c520e0b5:2bee2755:69043055 Events : 19 Number Major Minor RaidDevice State 0 252 3 0 active sync /dev/sda3 7 1 252 19 1 active sync /dev/sdb3 8", "mount | grep /dev/md", "/dev/md127 on / type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /etc type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /usr type xfs (ro,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /sysroot type xfs (ro,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/containers/storage/overlay type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/1 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/2 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/3 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/4 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/5 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md126 on /boot type ext4 (rw,relatime,seclabel)", "variant: openshift version: 4.15.0 metadata: name: raid1-storage labels: machineconfiguration.openshift.io/role: worker boot_device: mirror: devices: - /dev/disk/by-id/scsi-3600508b400105e210000900000490000 - /dev/disk/by-id/scsi-SSEAGATE_ST373453LW_3HW1RHM6 storage: disks: - device: /dev/disk/by-id/scsi-3600508b400105e210000900000490000 partitions: - label: root-1 size_mib: 25000 1 - label: var-1 - device: /dev/disk/by-id/scsi-SSEAGATE_ST373453LW_3HW1RHM6 partitions: - label: root-2 size_mib: 25000 2 - label: var-2 raid: - name: md-var level: raid1 devices: - /dev/disk/by-partlabel/var-1 - /dev/disk/by-partlabel/var-2 filesystems: - device: /dev/md/md-var path: /var format: xfs wipe_filesystem: true with_mount_unit: true", "variant: openshift version: 4.15.0 metadata: name: raid1-alt-storage labels: 
machineconfiguration.openshift.io/role: worker storage: disks: - device: /dev/sdc wipe_table: true partitions: - label: data-1 - device: /dev/sdd wipe_table: true partitions: - label: data-2 raid: - name: md-var-lib-containers level: raid1 devices: - /dev/disk/by-partlabel/data-1 - /dev/disk/by-partlabel/data-2 filesystems: - device: /dev/md/md-var-lib-containers path: /var/lib/containers format: xfs wipe_filesystem: true with_mount_unit: true", "butane USDHOME/clusterconfig/<butane_config>.bu -o <installation_directory>/openshift/<manifest_name>.yaml 1", "mdadm -CR /dev/md/imsm0 -e imsm -n2 /dev/nvme0n1 /dev/nvme1n1 1", "mdadm -CR /dev/md/dummy -l0 -n2 /dev/md/imsm0 -z10M --assume-clean", "mdadm -CR /dev/md/coreos -l1 -n2 /dev/md/imsm0", "mdadm -S /dev/md/dummy mdadm -S /dev/md/coreos mdadm --kill-subarray=0 /dev/md/imsm0", "mdadm -A /dev/md/coreos /dev/md/imsm0", "mdadm --detail --export /dev/md/imsm0", "coreos-installer install /dev/md/coreos --append-karg rd.md.uuid=<md_UUID> 1", "variant: openshift version: 4.15.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 storage: files: - path: /etc/chrony.conf mode: 0644 3 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst 4 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony", "butane 99-worker-chrony.bu -o 99-worker-chrony.yaml", "oc apply -f ./99-worker-chrony.yaml", "apiVersion: config.openshift.io/v1 kind: Node metadata: name: cluster spec: cgroupMode: \"v2\"" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html-single/installation_configuration/index
27.3. Defining the Console
27.3. Defining the Console The pam_console.so module uses the /etc/security/console.perms file to determine the permissions for users at the system console. The syntax of the file is very flexible; you can edit the file so that these instructions no longer apply. However, the default file has a line that looks like this: When users log in, they are attached to some sort of named terminal, either an X server with a name like :0 or mymachine.example.com:1.0 , or a device like /dev/ttyS0 or /dev/pts/2 . The default is to define that local virtual consoles and local X servers are considered local, but if you want to consider the serial terminal next to you on port /dev/ttyS1 to also be local, you can change that line to read:
[ "<console>=tty[0-9][0-9]* vc/[0-9][0-9]* :[0-9]\\.[0-9] :[0-9]", "<console>=tty[0-9][0-9]* vc/[0-9][0-9]* :[0-9]\\.[0-9] :[0-9] /dev/ttyS1" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Console_Access-Defining_the_Console
Interactively installing RHEL over the network
Interactively installing RHEL over the network Red Hat Enterprise Linux 9 Installing RHEL on several systems using network resources or on a headless system with the graphical installer Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/interactively_installing_rhel_over_the_network/index
probe::scheduler.cpu_off
probe::scheduler.cpu_off Name probe::scheduler.cpu_off - Process is about to stop running on a cpu Synopsis scheduler.cpu_off Values task_prev the process leaving the cpu (same as current) idle boolean indicating whether current is the idle process name name of the probe point task_next the process replacing current Context The process leaving the cpu.
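The values listed above can be read directly from a short SystemTap script. The one-liner below is a minimal sketch, assuming the systemtap package and matching kernel debuginfo are installed; the output format and the task_execname() helper from the task tapset are illustrative choices, not part of the probe definition:

stap -e 'probe scheduler.cpu_off {
  # Skip switches to the idle task; print "prev -> next" for the current CPU.
  if (!idle)
    printf("cpu%d: %s -> %s\n", cpu(), task_execname(task_prev), task_execname(task_next))
}'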
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-scheduler-cpu-off
1.2. Installing Pacemaker configuration tools
1.2. Installing Pacemaker configuration tools You can use the following yum install command to install the Red Hat High Availability Add-On software packages along with all available fence agents from the High Availability channel. Alternatively, you can install the Red Hat High Availability Add-On software packages along with only the fence agent that you require with the following command. The following command displays a listing of the available fence agents. The lvm2-cluster and gfs2-utils packages are part of the ResilientStorage channel. You can install them, as needed, with the following command. Warning After you install the Red Hat High Availability Add-On packages, you should ensure that your software update preferences are set so that nothing is installed automatically. Installation on a running cluster can cause unexpected behaviors.
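One way to honor the warning above on a Red Hat Enterprise Linux 7 host is to switch off automatic package updates. The commands below are a minimal sketch that assumes the host uses the optional yum-cron service for automatic updates; hosts without yum-cron installed need no change:

# Stop and disable automatic package updates driven by yum-cron
systemctl stop yum-cron
systemctl disable yum-cron
# Alternatively, keep the service running but set "apply_updates = no" in /etc/yum/yum-cron.conf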
[ "yum install pcs pacemaker fence-agents-all", "yum install pcs pacemaker fence-agents- model", "rpm -q -a | grep fence fence-agents-rhevm-4.0.2-3.el7.x86_64 fence-agents-ilo-mp-4.0.2-3.el7.x86_64 fence-agents-ipmilan-4.0.2-3.el7.x86_64", "yum install lvm2-cluster gfs2-utils" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/s1-installation-HAAR
Chapter 1. Understanding API tiers
Chapter 1. Understanding API tiers Important This guidance does not cover layered OpenShift Container Platform offerings. API tiers for bare-metal configurations also apply to virtualized configurations except for any feature that directly interacts with hardware. Those features directly related to hardware have no application operating environment (AOE) compatibility level beyond that which is provided by the hardware vendor. For example, applications that rely on Graphics Processing Units (GPU) features are subject to the AOE compatibility provided by the GPU vendor driver. API tiers in a cloud environment for cloud specific integration points have no API or AOE compatibility level beyond that which is provided by the hosting cloud vendor. For example, APIs that exercise dynamic management of compute, ingress, or storage are dependent upon the underlying API capabilities exposed by the cloud platform. Where a cloud vendor modifies a prerequisite API, Red Hat will provide commercially reasonable efforts to maintain support for the API with the capability presently offered by the cloud infrastructure vendor. Red Hat requests that application developers validate that any behavior they depend on is explicitly defined in the formal API documentation to prevent introducing dependencies on unspecified implementation-specific behavior or dependencies on bugs in a particular implementation of an API. For example, new releases of an ingress router may not be compatible with older releases if an application uses an undocumented API or relies on undefined behavior. 1.1. API tiers All commercially supported APIs, components, and features are associated under one of the following support levels: API tier 1 APIs and application operating environments (AOEs) are stable within a major release. They may be deprecated within a major release, but they will not be removed until a subsequent major release. API tier 2 APIs and AOEs are stable within a major release for a minimum of 9 months or 3 minor releases from the announcement of deprecation, whichever is longer. API tier 3 This level applies to languages, tools, applications, and optional Operators included with OpenShift Container Platform through Operator Hub. Each component will specify a lifetime during which the API and AOE will be supported. Newer versions of language runtime specific components will attempt to be as API and AOE compatible from minor version to minor version as possible. Minor version to minor version compatibility is not guaranteed, however. Components and developer tools that receive continuous updates through the Operator Hub, referred to as Operators and operands, should be considered API tier 3. Developers should use caution and understand how these components may change with each minor release. Users are encouraged to consult the compatibility guidelines documented by the component. API tier 4 No compatibility is provided. API and AOE can change at any point. These capabilities should not be used by applications needing long-term support. It is common practice for Operators to use custom resource definitions (CRDs) internally to accomplish a task. These objects are not meant for use by actors external to the Operator and are intended to be hidden. 
If any CRD is not meant for use by actors external to the Operator, the operators.operatorframework.io/internal-objects annotation in the Operators ClusterServiceVersion (CSV) should be specified to signal that the corresponding resource is internal use only and the CRD may be explicitly labeled as tier 4. 1.2. Mapping API tiers to API groups For each API tier defined by Red Hat, we provide a mapping table for specific API groups where the upstream communities are committed to maintain forward compatibility. Any API group that does not specify an explicit compatibility level and is not specifically discussed below is assigned API tier 3 by default except for v1alpha1 APIs which are assigned tier 4 by default. 1.2.1. Support for Kubernetes API groups API groups that end with the suffix *.k8s.io or have the form version.<name> with no suffix are governed by the Kubernetes deprecation policy and follow a general mapping between API version exposed and corresponding support tier unless otherwise specified. API version example API tier v1 Tier 1 v1beta1 Tier 2 v1alpha1 Tier 4 1.2.2. Support for OpenShift API groups API groups that end with the suffix *.openshift.io are governed by the OpenShift Container Platform deprecation policy and follow a general mapping between API version exposed and corresponding compatibility level unless otherwise specified. API version example API tier apps.openshift.io/v1 Tier 1 authorization.openshift.io/v1 Tier 1, some tier 1 deprecated build.openshift.io/v1 Tier 1, some tier 1 deprecated config.openshift.io/v1 Tier 1 image.openshift.io/v1 Tier 1 network.openshift.io/v1 Tier 1 network.operator.openshift.io/v1 Tier 1 oauth.openshift.io/v1 Tier 1 imagecontentsourcepolicy.operator.openshift.io/v1alpha1 Tier 1 project.openshift.io/v1 Tier 1 quota.openshift.io/v1 Tier 1 route.openshift.io/v1 Tier 1 quota.openshift.io/v1 Tier 1 security.openshift.io/v1 Tier 1 except for RangeAllocation (tier 4) and *Reviews (tier 2) template.openshift.io/v1 Tier 1 console.openshift.io/v1 Tier 2 1.2.3. Support for Monitoring API groups API groups that end with the suffix monitoring.coreos.com have the following mapping: API version example API tier v1 Tier 1 v1alpha1 Tier 1 v1beta1 Tier 1 1.2.4. Support for Operator Lifecycle Manager API groups Operator Lifecycle Manager (OLM) provides APIs that include API groups with the suffix operators.coreos.com . These APIs have the following mapping: API version example API tier v2 Tier 1 v1 Tier 1 v1alpha1 Tier 1 1.3. API deprecation policy OpenShift Container Platform is composed of many components sourced from many upstream communities. It is anticipated that the set of components, the associated API interfaces, and correlated features will evolve over time and might require formal deprecation in order to remove the capability. 1.3.1. Deprecating parts of the API OpenShift Container Platform is a distributed system where multiple components interact with a shared state managed by the cluster control plane through a set of structured APIs. Per Kubernetes conventions, each API presented by OpenShift Container Platform is associated with a group identifier and each API group is independently versioned. Each API group is managed in a distinct upstream community including Kubernetes, Metal3, Multus, Operator Framework, Open Cluster Management, OpenShift itself, and more. 
While each upstream community might define their own unique deprecation policy for a given API group and version, Red Hat normalizes the community specific policy to one of the compatibility levels defined prior based on our integration in and awareness of each upstream community to simplify end-user consumption and support. The deprecation policy and schedule for APIs vary by compatibility level. The deprecation policy covers all elements of the API including: REST resources, also known as API objects Fields of REST resources Annotations on REST resources, excluding version-specific qualifiers Enumerated or constant values Other than the most recent API version in each group, older API versions must be supported after their announced deprecation for a duration of no less than: API tier Duration Tier 1 Stable within a major release. They may be deprecated within a major release, but they will not be removed until a subsequent major release. Tier 2 9 months or 3 releases from the announcement of deprecation, whichever is longer. Tier 3 See the component-specific schedule. Tier 4 None. No compatibility is guaranteed. The following rules apply to all tier 1 APIs: API elements can only be removed by incrementing the version of the group. API objects must be able to round-trip between API versions without information loss, with the exception of whole REST resources that do not exist in some versions. In cases where equivalent fields do not exist between versions, data will be preserved in the form of annotations during conversion. API versions in a given group can not deprecate until a new API version at least as stable is released, except in cases where the entire API object is being removed. 1.3.2. Deprecating CLI elements Client-facing CLI commands are not versioned in the same way as the API, but are user-facing component systems. The two major ways a user interacts with a CLI are through a command or flag, which is referred to in this context as CLI elements. All CLI elements default to API tier 1 unless otherwise noted or the CLI depends on a lower tier API. Element API tier Generally available (GA) Flags and commands Tier 1 Technology Preview Flags and commands Tier 3 Developer Preview Flags and commands Tier 4 1.3.3. Deprecating an entire component The duration and schedule for deprecating an entire component maps directly to the duration associated with the highest API tier of an API exposed by that component. For example, a component that surfaced APIs with tier 1 and 2 could not be removed until the tier 1 deprecation schedule was met. API tier Duration Tier 1 Stable within a major release. They may be deprecated within a major release, but they will not be removed until a subsequent major release. Tier 2 9 months or 3 releases from the announcement of deprecation, whichever is longer. Tier 3 See the component-specific schedule. Tier 4 None. No compatibility is guaranteed.
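As a closing illustration of the operators.operatorframework.io/internal-objects annotation described in the API tiers section above, the fragment below is a minimal sketch of a ClusterServiceVersion that flags one CRD as internal; the operator and CRD names are hypothetical:

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: example-operator.v1.0.0
  annotations:
    # CRDs listed here are signaled as internal-use only (effectively tier 4).
    operators.operatorframework.io/internal-objects: '["internalconfigs.example.com"]'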
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/api_overview/understanding-api-support-tiers
Chapter 2. Learn more about OpenShift Container Platform
Chapter 2. Learn more about OpenShift Container Platform Use the following sections to find content to help you learn about and use OpenShift Container Platform. 2.1. Architect Learn about OpenShift Container Platform Plan an OpenShift Container Platform deployment Additional resources Enterprise Kubernetes with OpenShift Tested platforms OpenShift blog Architecture Security and compliance What's new in OpenShift Container Platform Networking OpenShift Container Platform life cycle Backup and restore 2.2. Cluster Administrator Learn about OpenShift Container Platform Deploy OpenShift Container Platform Manage OpenShift Container Platform Additional resources Enterprise Kubernetes with OpenShift Installing OpenShift Container Platform Using Insights to identify issues with your cluster Getting Support Architecture Machine configuration overview Logging OpenShift Knowledgebase articles OpenShift Interactive Learning Portal Networking About OpenShift Container Platform monitoring OpenShift Container Platform Life Cycle Storage Backup and restore Updating a cluster 2.3. Application Site Reliability Engineer (App SRE) Learn about OpenShift Container Platform Deploy and manage applications Additional resources OpenShift Interactive Learning Portal Projects Getting Support Architecture Operators OpenShift Knowledgebase articles Logging OpenShift Container Platform Life Cycle Blogs about logging Monitoring 2.4. Developer Learn about application development in OpenShift Container Platform Deploy applications Getting started with OpenShift for developers (interactive tutorial) Creating applications Red Hat Developers site Builds Red Hat OpenShift Dev Spaces (formerly Red Hat CodeReady Workspaces) Operators Images Developer-focused CLI
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/about/learn_more_about_openshift
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue.
null
https://docs.redhat.com/en/documentation/net/8.0/html/release_notes_for_.net_8.0_rpm_packages/proc_providing-feedback-on-red-hat-documentation_release-notes-for-dotnet-rpms
Chapter 3. Usage
Chapter 3. Usage This chapter describes the necessary steps for using Red Hat Software Collections 3.5, and deploying applications that use Red Hat Software Collections. 3.1. Using Red Hat Software Collections 3.1.1. Running an Executable from a Software Collection To run an executable from a particular Software Collection, type the following command at a shell prompt: scl enable software_collection ... ' command ...' Or, alternatively, use the following command: scl enable software_collection ... -- command ... Replace software_collection with a space-separated list of Software Collections you want to use and command with the command you want to run. For example, to execute a Perl program stored in a file named hello.pl with the Perl interpreter from the perl526 Software Collection, type: You can execute any command using the scl utility, causing it to be run with the executables from a selected Software Collection in preference to their possible Red Hat Enterprise Linux system equivalents. For a complete list of Software Collections that are distributed with Red Hat Software Collections, see Table 1.1, "Red Hat Software Collections Components" . 3.1.2. Running a Shell Session with a Software Collection as Default To start a new shell session with executables from a selected Software Collection in preference to their Red Hat Enterprise Linux equivalents, type the following at a shell prompt: scl enable software_collection ... bash Replace software_collection with a space-separated list of Software Collections you want to use. For example, to start a new shell session with the python27 and rh-postgresql10 Software Collections as default, type: The list of Software Collections that are enabled in the current session is stored in the USDX_SCLS environment variable, for instance: For a complete list of Software Collections that are distributed with Red Hat Software Collections, see Table 1.1, "Red Hat Software Collections Components" . 3.1.3. Running a System Service from a Software Collection Running a System Service from a Software Collection in Red Hat Enterprise Linux 6 Software Collections that include system services install corresponding init scripts in the /etc/rc.d/init.d/ directory. To start such a service in the current session, type the following at a shell prompt as root : service software_collection - service_name start Replace software_collection with the name of the Software Collection and service_name with the name of the service you want to start. To configure this service to start automatically at boot time, type the following command as root : chkconfig software_collection - service_name on For example, to start the postgresql service from the rh-postgresql96 Software Collection and enable it in runlevels 2, 3, 4, and 5, type as root : For more information on how to manage system services in Red Hat Enterprise Linux 6, refer to the Red Hat Enterprise Linux 6 Deployment Guide . For a complete list of Software Collections that are distributed with Red Hat Software Collections, see Table 1.1, "Red Hat Software Collections Components" . Running a System Service from a Software Collection in Red Hat Enterprise Linux 7 In Red Hat Enterprise Linux 7, init scripts have been replaced by systemd service unit files, which end with the .service file extension and serve a similar purpose as init scripts. 
To start a service in the current session, execute the following command as root : systemctl start software_collection - service_name .service Replace software_collection with the name of the Software Collection and service_name with the name of the service you want to start. To configure this service to start automatically at boot time, type the following command as root : systemctl enable software_collection - service_name .service For example, to start the postgresql service from the rh-postgresql10 Software Collection and enable it at boot time, type as root : For more information on how to manage system services in Red Hat Enterprise Linux 7, refer to the Red Hat Enterprise Linux 7 System Administrator's Guide . For a complete list of Software Collections that are distributed with Red Hat Software Collections, see Table 1.1, "Red Hat Software Collections Components" . 3.2. Accessing a Manual Page from a Software Collection Every Software Collection contains a general manual page that describes the content of this component. Each manual page has the same name as the component and it is located in the /opt/rh directory. To read a manual page for a Software Collection, type the following command: scl enable software_collection 'man software_collection ' Replace software_collection with the particular Red Hat Software Collections component. For example, to display the manual page for rh-mariadb102 , type: 3.3. Deploying Applications That Use Red Hat Software Collections In general, you can use one of the following two approaches to deploy an application that depends on a component from Red Hat Software Collections in production: Install all required Software Collections and packages manually and then deploy your application, or Create a new Software Collection for your application and specify all required Software Collections and other packages as dependencies. For more information on how to manually install individual Red Hat Software Collections components, see Section 2.2, "Installing Red Hat Software Collections" . For further details on how to use Red Hat Software Collections, see Section 3.1, "Using Red Hat Software Collections" . For a detailed explanation of how to create a custom Software Collection or extend an existing one, read the Red Hat Software Collections Packaging Guide . 3.4. Red Hat Software Collections Container Images Container images based on Red Hat Software Collections include applications, daemons, and databases. The images can be run on Red Hat Enterprise Linux 7 Server and Red Hat Enterprise Linux Atomic Host. For information about their usage, see Using Red Hat Software Collections 3 Container Images . For details regarding container images based on Red Hat Software Collections versions 2.4 and earlier, see Using Red Hat Software Collections 2 Container Images . 
The following container images are available with Red Hat Software Collections 3.5: rhscl/perl-530-rhel7 rhscl/python-38-rhel7 rhscl/ruby-26-rhel7 rhscl/httpd-24-rhel7 rhscl/varnish-6-rhel7 rhscl/devtoolset-9-toolchain-rhel7 rhscl/devtoolset-9-perftools-rhel7 The following container images are based on Red Hat Software Collections 3.4: rhscl/nodejs-12-rhel7 rhscl/php-73-rhel7 rhscl/nginx-116-rhel7 rhscl/postgresql-12-rhel7 The following container images are based on Red Hat Software Collections 3.3: rhscl/mariadb-103-rhel7 rhscl/redis-5-rhel7 rhscl/ruby-26-rhel7 rhscl/devtoolset-8-toolchain-rhel7 rhscl/devtoolset-8-perftools-rhel7 The following container images are based on Red Hat Software Collections 3.2: rhscl/mysql-80-rhel7 rhscl/nginx-114-rhel7 rhscl/php-72-rhel7 rhscl/nodejs-10-rhel7 The following container images are based on Red Hat Software Collections 3.1: rhscl/mongodb-36-rhel7 rhscl/perl-526-rhel7 rhscl/postgresql-10-rhel7 rhscl/ruby-25-rhel7 rhscl/varnish-5-rhel7 The following container images are based on Red Hat Software Collections 3.0: rhscl/mariadb-102-rhel7 rhscl/mongodb-34-rhel7 rhscl/postgresql-96-rhel7 rhscl/python-36-rhel7 The following container images are based on Red Hat Software Collections 2: rhscl/python-27-rhel7 rhscl/s2i-base-rhel7
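Any of the images listed above can be pulled and run with the docker tooling available on Red Hat Enterprise Linux 7. The commands below are a minimal sketch using the rhscl/httpd-24-rhel7 image; the container name and published port mapping are illustrative assumptions:

# Pull the Apache HTTP Server 2.4 image from the Red Hat registry
docker pull registry.access.redhat.com/rhscl/httpd-24-rhel7
# Run it in the background and publish the container's port 8080 on the host
docker run -d --name httpd24 -p 8080:8080 registry.access.redhat.com/rhscl/httpd-24-rhel7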
[ "~]USD scl enable rh-perl526 'perl hello.pl' Hello, World!", "~]USD scl enable python27 rh-postgresql10 bash", "~]USD echo USDX_SCLS python27 rh-postgresql10", "~]# service rh-postgresql96-postgresql start Starting rh-postgresql96-postgresql service: [ OK ] ~]# chkconfig rh-postgresql96-postgresql on", "~]# systemctl start rh-postgresql10-postgresql.service ~]# systemctl enable rh-postgresql10-postgresql.service", "~]USD scl enable rh-mariadb102 \"man rh-mariadb102\"" ]
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.5_release_notes/chap-Usage
D.3. Supported Language Subtypes
D.3. Supported Language Subtypes Language subtypes can be used by clients to determine specific values for which to search. For more information on using language subtypes, see Section 3.1.9, "Updating an Entry in an Internationalized Directory" . Table D.1, "Supported Language Subtypes" lists the supported language subtypes for Directory Server. Table D.1. Supported Language Subtypes Language Tag Language af Afrikaans be Belarusian bg Bulgarian ca Catalan cs Czech da Danish de German el Greek en English es Spanish eu Basque fi Finnish fo Faroese fr French ga Irish gl Galician hr Croatian hu Hungarian id Indonesian is Icelandic it Italian ja Japanese ko Korean nl Dutch no Norwegian pl Polish pt Portuguese ro Romanian ru Russian sk Slovak sl Slovenian sq Albanian sr Serbian sv Swedish tr Turkish uk Ukrainian zh Chinese
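In practice, a client attaches one of these tags to an attribute as a language subtype when adding or searching internationalized values. The LDIF fragment below is a minimal sketch for use with ldapmodify; the entry DN and attribute value are hypothetical:

# Add a French-tagged value alongside the untagged attribute
dn: uid=asmith,ou=People,dc=example,dc=com
changetype: modify
add: homePostalAddress;lang-fr
homePostalAddress;lang-fr: 34 rue de Seine, Paris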
null
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/supported_language_subtypes
Configuring sidecar containers on Cryostat
Configuring sidecar containers on Cryostat Red Hat build of Cryostat 2 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/2/html/configuring_sidecar_containers_on_cryostat/index
Chapter 9. Ceph Storage Parameters
Chapter 9. Ceph Storage Parameters You can modify your Ceph Storage cluster with Ceph Storage parameters. Parameter Description CephAnsibleDisksConfig Disks configuration settings. The default value is {'devices': [], 'osd_scenario': 'lvm', 'osd_objectstore': 'bluestore'} . CephAnsibleEnvironmentVariables Mapping of Ansible environment variables to override defaults. CephAnsibleExtraConfig Extra vars for the ceph-ansible playbook. CephAnsiblePlaybook List of paths to the ceph-ansible playbooks to execute. If not specified, the playbook will be determined automatically depending on type of operation being performed (deploy/update/upgrade). The default value is ['default'] . CephAnsiblePlaybookVerbosity The number of -v , -vv , etc. passed to ansible-playbook command. The default value is 1 . CephAnsibleRepo The repository that should be used to install the right ceph-ansible package. This value can be used by tripleo-validations to double check the right ceph-ansible version is installed. The default value is centos-ceph-nautilus . CephAnsibleSkipClient This boolean (when true) prevents the ceph-ansible client role execution by adding the ceph-ansible tag ceph_client to the --skip-tags list. The default value is true . CephAnsibleSkipTags List of ceph-ansible tags to skip. The default value is package-install,with_pkg . CephAnsibleWarning In particular scenarios we want this validation to show the warning but don't fail because the package is installed on the system but repos are disabled. The default value is true . CephCertificateKeySize Override the private key size used when creating the certificate for this service. CephClientKey The Ceph client key. Currently only used for external Ceph deployments to create the openstack user keyring. Can be created with: ceph-authtool --gen-print-key CephClusterFSID The Ceph cluster FSID. Must be a UUID. CephClusterName The Ceph cluster name. The default value is ceph . CephConfigOverrides Extra configuration settings to dump into ceph.conf. CephConfigPath The path where the Ceph Cluster configuration files are stored on the host. The default value is /var/lib/tripleo-config/ceph . CephDashboardAdminPassword Admin password for the dashboard component. CephDashboardAdminRO Parameter used to set a read-only admin user. The default value is true . CephDashboardAdminUser Admin user for the dashboard component. The default value is admin . CephDashboardPort Parameter that defines the ceph dashboard port. The default value is 8444 . CephEnableDashboard Parameter used to trigger the dashboard deployment. The default value is false . CephExternalMonHost List of externally managed Ceph Mon Host IPs. Only used for external Ceph deployments. CephExternalMultiConfig List of maps describing extra overrides which will be applied when configuring extra external Ceph clusters. If this list is non-empty, ceph-ansible will run an extra count(list) times using the same parameters as the first run except each parameter within each map will override the defaults. If the following were used, the second run would configure the overcloud to also use the ceph2 cluster with all the parameters except /etc/ceph/ceph2.conf would have a mon_host entry containing the value of external_cluster_mon_ips below, and not the default CephExternalMonHost. Subsequent ceph-ansible runs are restricted to just ceph clients. CephExternalMultiConfig may not be used to deploy additional internal Ceph clusters within one OpenStack Orchestration (heat) stack. 
The map for each list should contain not tripleo-heat-template parameters but ceph-ansible parameters. - cluster: ceph2 fsid: e2cba068-5f14-4b0f-b047-acf375c0004a external_cluster_mon_ips: 172.18.0.5,172.18.0.6,172.18.0.7 keys: - name: "client.openstack" caps: mgr: "allow *" mon: "profile rbd" osd: "osd: profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images" key: "AQCwmeRcAAAAABAA6SQU/bGqFjlfLro5KxrB1Q==" mode: "0600" dashboard_enabled: false. CephExtraKeys List of maps describing extra keys which will be created on the deployed Ceph cluster. Uses ceph-ansible /library/ceph_key.py ansible module. Each item in the list must be in the following example format - name: "client.glance" caps: mgr: "allow *" mon: "profile rbd" osd: "profile rbd pool=images" key: "AQBRgQ9eAAAAABAAv84zEilJYZPNuJ0Iwn9Ndg==" mode: "0600". CephGrafanaAdminPassword Admin password for grafana component. CephIPv6 Enables Ceph daemons to bind to IPv6 addresses. The default value is False . CephManilaClientKey The Ceph client key. Can be created with: ceph-authtool --gen-print-key CephMsgrSecureMode Enable Ceph msgr2 secure mode to enable on-wire encryption between Ceph daemons and also between Ceph clients and daemons. The default value is false . CephOsdPercentageMin The minimum percentage of Ceph OSDs which must be running and in the Ceph cluster, according to ceph osd stat, for the deployment not to fail. Used to catch deployment errors early. Set this value to 0 to disable this check. Deprecated in Wallaby because of the move from ceph-ansible to cephadm; the later only brings up OSDs out of band and deployment does not block while waiting for them to come up, thus we cannot do this anymore. The default value is 0 . CephPoolDefaultPgNum Default placement group size to use for the RBD pools. The default value is 16 . CephPoolDefaultSize Default minimum replication for RBD copies. The default value is 3 . CephPools Override settings for one of the predefined pools or to create additional ones. Example: { "volumes": { "size": 5, "pg_num": 128, "pgp_num": 128 } } CephRbdMirrorConfigure Perform mirror configuration between local and remote pool. The default value is true . CephRbdMirrorCopyAdminKey Copy the admin key to all nodes. The default value is false . CephRbdMirrorPool Name of the local pool to mirror to remote cluster. CephRbdMirrorRemoteCluster The name given to the remote Ceph cluster from the local cluster. Keys reside in the /etc/ceph directory. The default value is not-ceph . CephRbdMirrorRemoteUser The rbd-mirror daemon needs a user to authenticate with the remote cluster. By default, this key should be available under /etc/ceph/<remote_cluster>.client.<remote_user>.keyring. CephRgwCertificateKeySize Override the private key size used when creating the certificate for this service. CephRgwClientName The client name for the RADOSGW service." The default value is radosgw . CephRgwKey The cephx key for the RADOSGW client. Can be created with ceph-authtool --gen-print-key. CephValidationDelay Interval (in seconds) in between validation checks. The default value is 30 . CephValidationRetries Number of retry attempts for Ceph validation. The default value is 40 . CertificateKeySize Specifies the private key size used when creating the certificate. The default value is 2048 . CinderBackupBackend The short name of the OpenStack Block Storage (cinder) Backup backend to use. The default value is swift . 
CinderBackupRbdPoolName Pool to use if Block Storage (cinder) Backup is enabled. The default value is backups . CinderEnableRbdBackend Whether to enable or not the Rbd backend for OpenStack Block Storage (cinder). The default value is false . CinderRbdExtraPools List of extra Ceph pools for use with RBD backends for OpenStack Block Storage (cinder). An extra OpenStack Block Storage (cinder) RBD backend driver is created for each pool in the list. This is in addition to the standard RBD backend driver associated with the CinderRbdPoolName. CinderRbdPoolName Pool to use for Block Storage (cinder) service. The default value is volumes . DeploymentServerBlacklist List of server hostnames to blocklist from any triggered deployments. GlanceBackend The short name of the OpenStack Image Storage (glance) backend to use. Set to rbd to use Ceph Storage.` The default value is swift . GlanceMultistoreConfig Dictionary of settings when configuring additional glance backends. The hash key is the backend ID, and the value is a dictionary of parameter values unique to that backend. Multiple rbd and cinder backends are allowed, but file and swift backends are limited to one each. Example: # Default glance store is rbd. GlanceBackend: rbd GlanceStoreDescription: Default rbd store # GlanceMultistoreConfig specifies a second rbd backend, plus a cinder # backend. GlanceMultistoreConfig: rbd2_store: GlanceBackend: rbd GlanceStoreDescription: Second rbd store CephClusterName: ceph2 # Override CephClientUserName if this cluster uses a different # client name. CephClientUserName: client2 cinder1_store: GlanceBackend: cinder GlanceCinderVolumeType: volume-type-1 GlanceStoreDescription: First cinder store cinder2_store: GlanceBackend: cinder GlanceCinderVolumeType: volume-type-2 GlanceStoreDescription: Seconde cinder store . GlanceRbdPoolName Pool to use for Image Storage (glance) service. The default value is images . GnocchiBackend The short name of the OpenStack Telemetry Metrics (gnocchi) backend to use. Should be one of swift, rbd, file or s3. The default value is swift . GnocchiRbdPoolName Pool to use for Telemetry storage. The default value is metrics . LocalCephAnsibleFetchDirectoryBackup Filesystem path on undercloud to persist a copy of the data from the ceph-ansible fetch directory. Used as an alternative to backing up the fetch_directory in Swift. Path must be writable and readable by the user running ansible from config-download, e.g. the mistral user in the mistral-executor container is able to read/write to /var/lib/mistral/ceph_fetch. ManilaCephFSCephFSAuthId The CephFS user ID for Shared Filesystem Service (manila). The default value is manila . ManilaCephFSDataPoolName Pool to use for file share storage. The default value is manila_data . ManilaCephFSMetadataPoolName Pool to use for file share metadata storage. The default value is manila_metadata . ManilaCephFSShareBackendName Backend name of the CephFS share for file share storage. The default value is cephfs . NodeExporterContainerImage Ceph NodeExporter container image. NovaEnableRbdBackend Whether to enable the Ceph backend for Compute (nova). The default value is false . NovaRbdPoolName Pool to use for Compute storage. The default value is vms .
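These parameters are normally collected in a custom environment file and passed to the openstack overcloud deploy command with the -e option. The fragment below is a minimal sketch combining a few of the parameters described above; the device paths and values are illustrative assumptions, not recommended settings:

parameter_defaults:
  CephPoolDefaultSize: 3
  CephPoolDefaultPgNum: 16
  CephAnsibleDisksConfig:
    osd_scenario: lvm
    osd_objectstore: bluestore
    devices:
      - /dev/sdb
      - /dev/sdc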
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/overcloud_parameters/ref_ceph-storage-parameters_overcloud_parameters
8.18. conman
8.18. conman 8.18.1. RHBA-2013:1677 - conman bug fix and enhancement update Updated conman packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. ConMan is a serial console management program designed to support a large number of console devices and simultaneous users. ConMan currently supports local serial devices and remote terminal servers. Note The conman packages have been upgraded to upstream version 0.2.7, which provides a number of bug fixes and enhancements over the previous version. With this update, support for the ipmiopts directive in the conman.conf configuration file has been included. (BZ# 951698 ) Bug Fix BZ# 891938 Previously, the maximum allowed length of timezone strings was not sufficient to process all known timezone codes. As a consequence, the conmand daemon failed to start if the timezone name consisted of five or more characters. The maximum string length has been set to 32, and conmand now always starts as expected. Users of conman are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/conman
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Let us know how we can make it better. To give feedback, create a Jira ticket: Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Select Documentation in the Components field. Click Create at the bottom of the dialogue.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/scaling_storage/providing-feedback-on-red-hat-documentation_rhodf
Nodes
Nodes OpenShift Dedicated 4 OpenShift Dedicated Nodes Red Hat OpenShift Documentation Team
[ "kind: Pod apiVersion: v1 metadata: name: example labels: environment: production app: abc 1 spec: restartPolicy: Always 2 securityContext: 3 runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: 4 - name: abc args: - sleep - \"1000000\" volumeMounts: 5 - name: cache-volume mountPath: /cache 6 image: registry.access.redhat.com/ubi7/ubi-init:latest 7 securityContext: allowPrivilegeEscalation: false runAsNonRoot: true capabilities: drop: [\"ALL\"] resources: limits: memory: \"100Mi\" cpu: \"1\" requests: memory: \"100Mi\" cpu: \"1\" volumes: 8 - name: cache-volume emptyDir: sizeLimit: 500Mi", "oc project <project-name>", "oc get pods", "oc get pods", "NAME READY STATUS RESTARTS AGE console-698d866b78-bnshf 1/1 Running 2 165m console-698d866b78-m87pm 1/1 Running 2 165m", "oc get pods -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE console-698d866b78-bnshf 1/1 Running 2 166m 10.128.0.24 ip-10-0-152-71.ec2.internal <none> console-698d866b78-m87pm 1/1 Running 2 166m 10.129.0.23 ip-10-0-173-237.ec2.internal <none>", "oc adm top pods", "oc adm top pods -n openshift-console", "NAME CPU(cores) MEMORY(bytes) console-7f58c69899-q8c8k 0m 22Mi console-7f58c69899-xhbgg 0m 25Mi downloads-594fcccf94-bcxk8 3m 18Mi downloads-594fcccf94-kv4p6 2m 15Mi", "oc adm top pod --selector=''", "oc adm top pod --selector='name=my-pod'", "oc logs -f <pod_name> -c <container_name>", "oc logs ruby-58cd97df55-mww7r", "oc logs -f ruby-57f7f4855b-znl92 -c ruby", "oc logs <object_type>/<resource_name> 1", "oc logs deployment/ruby", "{ \"kind\": \"Pod\", \"spec\": { \"containers\": [ { \"image\": \"openshift/hello-openshift\", \"name\": \"hello-openshift\" } ] }, \"apiVersion\": \"v1\", \"metadata\": { \"name\": \"iperf-slow\", \"annotations\": { \"kubernetes.io/ingress-bandwidth\": \"10M\", \"kubernetes.io/egress-bandwidth\": \"10M\" } } }", "oc create -f <file_or_dir_path>", "oc get poddisruptionbudget --all-namespaces", "NAMESPACE NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE openshift-apiserver openshift-apiserver-pdb N/A 1 1 121m openshift-cloud-controller-manager aws-cloud-controller-manager 1 N/A 1 125m openshift-cloud-credential-operator pod-identity-webhook 1 N/A 1 117m openshift-cluster-csi-drivers aws-ebs-csi-driver-controller-pdb N/A 1 1 121m openshift-cluster-storage-operator csi-snapshot-controller-pdb N/A 1 1 122m openshift-cluster-storage-operator csi-snapshot-webhook-pdb N/A 1 1 122m openshift-console console N/A 1 1 116m #", "apiVersion: policy/v1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: minAvailable: 2 2 selector: 3 matchLabels: name: my-pod", "apiVersion: policy/v1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: maxUnavailable: 25% 2 selector: 3 matchLabels: name: my-pod", "oc create -f </path/to/file> -n <project_name>", "apiVersion: v1 kind: Secret metadata: name: test-secret namespace: my-namespace type: Opaque 1 data: 2 username: <username> 3 password: <password> stringData: 4 hostname: myapp.mydomain.com 5", "apiVersion: v1 kind: Secret metadata: name: test-secret type: Opaque 1 data: 2 username: <username> password: <password> stringData: 3 hostname: myapp.mydomain.com secret.properties: | property1=valueA property2=valueB", "apiVersion: v1 kind: ServiceAccount secrets: - name: test-secret", "apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: secret-test-container image: busybox command: [ \"/bin/sh\", \"-c\", \"cat 
/etc/secret-volume/*\" ] volumeMounts: 1 - name: secret-volume mountPath: /etc/secret-volume 2 readOnly: true 3 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: secret-volume secret: secretName: test-secret 4 restartPolicy: Never", "apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: secret-test-container image: busybox command: [ \"/bin/sh\", \"-c\", \"export\" ] env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: 1 name: test-secret key: username securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never", "apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: secret-example-bc spec: strategy: sourceStrategy: env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: 1 name: test-secret key: username from: kind: ImageStreamTag namespace: openshift name: 'cli:latest'", "apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque 1 data: username: <username> password: <password>", "oc create -f <filename>.yaml", "apiVersion: v1 kind: Secret metadata: name: secret-sa-sample annotations: kubernetes.io/service-account.name: \"sa-name\" 1 type: kubernetes.io/service-account-token 2", "oc create -f <filename>.yaml", "apiVersion: v1 kind: Secret metadata: name: secret-basic-auth type: kubernetes.io/basic-auth 1 data: stringData: 2 username: admin password: <password>", "oc create -f <filename>.yaml", "apiVersion: v1 kind: Secret metadata: name: secret-ssh-auth type: kubernetes.io/ssh-auth 1 data: ssh-privatekey: | 2 MIIEpQIBAAKCAQEAulqb/Y", "oc create -f <filename>.yaml", "apiVersion: v1 kind: Secret metadata: name: secret-docker-cfg namespace: my-project type: kubernetes.io/dockerconfig 1 data: .dockerconfig:bm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== 2", "apiVersion: v1 kind: Secret metadata: name: secret-docker-json namespace: my-project type: kubernetes.io/dockerconfig 1 data: .dockerconfigjson:bm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== 2", "oc create -f <filename>.yaml", "apiVersion: v1 kind: Secret metadata: name: example namespace: <namespace> type: Opaque 1 data: username: <base64 encoded username> password: <base64 encoded password> stringData: 2 hostname: myapp.mydomain.com", "oc create sa <service_account_name> -n <your_namespace>", "apiVersion: v1 kind: Secret metadata: name: <secret_name> 1 annotations: kubernetes.io/service-account.name: \"sa-name\" 2 type: kubernetes.io/service-account-token 3", "oc apply -f service-account-token-secret.yaml", "oc get secret <sa_token_secret> -o jsonpath='{.data.token}' | base64 --decode 1", 
"ayJhbGciOiJSUzI1NiIsImtpZCI6IklOb2dtck1qZ3hCSWpoNnh5YnZhSE9QMkk3YnRZMVZoclFfQTZfRFp1YlUifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImJ1aWxkZXItdG9rZW4tdHZrbnIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiYnVpbGRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjNmZGU2MGZmLTA1NGYtNDkyZi04YzhjLTNlZjE0NDk3MmFmNyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmJ1aWxkZXIifQ.OmqFTDuMHC_lYvvEUrjr1x453hlEEHYcxS9VKSzmRkP1SiVZWPNPkTWlfNRp6bIUZD3U6aN3N7dMSN0eI5hu36xPgpKTdvuckKLTCnelMx6cxOdAbrcw1mCmOClNscwjS1KO1kzMtYnnq8rXHiMJELsNlhnRyyIXRTtNBsy4t64T3283s3SLsancyx0gy0ujx-Ch3uKAKdZi5iT-I8jnnQ-ds5THDs2h65RJhgglQEmSxpHrLGZFmyHAQI-_SjvmHZPXEc482x3SkaQHNLqpmrpJorNqh1M8ZHKzlujhZgVooMvJmWPXTb2vnvi3DGn2XI-hZxl1yD2yGH1RBpYUHA", "curl -X GET <openshift_cluster_api> --header \"Authorization: Bearer <token>\" 1 2", "apiVersion: v1 kind: Service metadata: name: registry annotations: service.beta.openshift.io/serving-cert-secret-name: registry-cert 1", "kind: Service apiVersion: v1 metadata: name: my-service annotations: service.beta.openshift.io/serving-cert-secret-name: my-cert 1 spec: selector: app: MyApp ports: - protocol: TCP port: 80 targetPort: 9376", "oc create -f <file-name>.yaml", "oc get secrets", "NAME TYPE DATA AGE my-cert kubernetes.io/tls 2 9m", "oc describe secret my-cert", "Name: my-cert Namespace: openshift-console Labels: <none> Annotations: service.beta.openshift.io/expiry: 2023-03-08T23:22:40Z service.beta.openshift.io/originating-service-name: my-service service.beta.openshift.io/originating-service-uid: 640f0ec3-afc2-4380-bf31-a8c784846a11 service.beta.openshift.io/expiry: 2023-03-08T23:22:40Z Type: kubernetes.io/tls Data ==== tls.key: 1679 bytes tls.crt: 2595 bytes", "apiVersion: v1 kind: Pod metadata: name: my-service-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: mypod image: redis volumeMounts: - name: my-container mountPath: \"/etc/my-path\" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: my-volume secret: secretName: my-cert items: - key: username path: my-group/my-username mode: 511", "secret/ssl-key references serviceUID 62ad25ca-d703-11e6-9d6f-0e9c0057b608, which does not match 77b6dd80-d716-11e6-9d6f-0e9c0057b60", "oc delete secret <secret_name>", "oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-", "oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-num-", "kind: ConfigMap apiVersion: v1 metadata: creationTimestamp: 2016-02-18T19:14:38Z name: example-config namespace: my-namespace data: 1 example.property.1: hello example.property.2: world example.property.file: |- property.1=value-1 property.2=value-2 property.3=value-3 binaryData: bar: L3Jvb3QvMTAw 2", "oc create configmap <configmap_name> [options]", "oc create configmap game-config --from-file=example-files/", "oc describe configmaps game-config", "Name: game-config Namespace: default Labels: <none> Annotations: <none> Data game.properties: 158 bytes ui.properties: 83 bytes", "cat example-files/game.properties", "enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30", "cat example-files/ui.properties", "color.good=purple color.bad=yellow allow.textmode=true 
how.nice.to.look=fairlyNice", "oc create configmap game-config --from-file=example-files/", "oc get configmaps game-config -o yaml", "apiVersion: v1 data: game.properties: |- enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 ui.properties: | color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice kind: ConfigMap metadata: creationTimestamp: 2016-02-18T18:34:05Z name: game-config namespace: default resourceVersion: \"407\" selflink: /api/v1/namespaces/default/configmaps/game-config uid: 30944725-d66e-11e5-8cd0-68f728db1985", "oc create configmap game-config-3 --from-file=game-special-key=example-files/game.properties", "cat example-files/game.properties", "enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30", "cat example-files/ui.properties", "color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice", "oc create configmap game-config-2 --from-file=example-files/game.properties --from-file=example-files/ui.properties", "oc create configmap game-config-3 --from-file=game-special-key=example-files/game.properties", "oc get configmaps game-config-2 -o yaml", "apiVersion: v1 data: game.properties: |- enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 ui.properties: | color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice kind: ConfigMap metadata: creationTimestamp: 2016-02-18T18:52:05Z name: game-config-2 namespace: default resourceVersion: \"516\" selflink: /api/v1/namespaces/default/configmaps/game-config-2 uid: b4952dc3-d670-11e5-8cd0-68f728db1985", "oc get configmaps game-config-3 -o yaml", "apiVersion: v1 data: game-special-key: |- 1 enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 kind: ConfigMap metadata: creationTimestamp: 2016-02-18T18:54:22Z name: game-config-3 namespace: default resourceVersion: \"530\" selflink: /api/v1/namespaces/default/configmaps/game-config-3 uid: 05f8da22-d671-11e5-8cd0-68f728db1985", "oc create configmap special-config --from-literal=special.how=very --from-literal=special.type=charm", "oc get configmaps special-config -o yaml", "apiVersion: v1 data: special.how: very special.type: charm kind: ConfigMap metadata: creationTimestamp: 2016-02-18T19:14:38Z name: special-config namespace: default resourceVersion: \"651\" selflink: /api/v1/namespaces/default/configmaps/special-config uid: dadce046-d673-11e5-8cd0-68f728db1985", "apiVersion: v1 kind: ConfigMap metadata: name: special-config 1 namespace: default 2 data: special.how: very 3 special.type: charm 4", "apiVersion: v1 kind: ConfigMap metadata: name: env-config 1 namespace: default data: log_level: INFO 2", "apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: 1 - name: SPECIAL_LEVEL_KEY 2 valueFrom: configMapKeyRef: name: special-config 3 key: special.how 4 - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config 5 key: special.type 6 optional: true 7 envFrom: 8 - configMapRef: name: 
allowPrivilegeEscalation: false capabilities: drop: [ALL]", "oc create -f <file_name>.yaml", "oc rsh test", "env | grep MEMORY | sort", "MEMORY_LIMIT=536870912 MEMORY_REQUEST=402653184", "oc rsh test", "grep '^oom_kill ' /sys/fs/cgroup/memory/memory.oom_control", "oom_kill 0", "sed -e '' </dev/zero", "Killed", "echo USD?", "137", "grep '^oom_kill ' /sys/fs/cgroup/memory/memory.oom_control", "oom_kill 1", "oc get pod test", "NAME READY STATUS RESTARTS AGE test 0/1 OOMKilled 0 1m", "oc get pod test -o yaml", "status: containerStatuses: - name: test ready: false restartCount: 0 state: terminated: exitCode: 137 reason: OOMKilled phase: Failed", "oc get pod test -o yaml", "status: containerStatuses: - name: test ready: true restartCount: 1 lastState: terminated: exitCode: 137 reason: OOMKilled state: running: phase: Running", "oc get pod test", "NAME READY STATUS RESTARTS AGE test 0/1 Evicted 0 1m", "oc get pod test -o yaml", "status: message: 'Pod The node was low on resource: [MemoryPressure].' phase: Failed reason: Evicted", "oc edit namespace/<project_name>", "apiVersion: v1 kind: Namespace metadata: annotations: quota.openshift.io/cluster-resource-override-enabled: \"false\" <.>" ]
https://docs.redhat.com/en/documentation/openshift_dedicated/4/html-single/nodes/index
5.2.2. Enable root login over SSH
5.2.2. Enable root login over SSH Now that virt-v2v is installed, the conversion server must be prepared to accept P2V client connections. The P2V client connects to the conversion server as root using SSH, so root login over SSH must be allowed on the conversion server. Enable root login over SSH: As root, edit the /etc/ssh/sshd_config file: Add a line in the Authentication section of the file that says PermitRootLogin yes . This line may already exist and be commented out with a "#". In this case, remove the "#". Save the updated /etc/ssh/sshd_config file. Restart the SSH server: You can now connect to the conversion server as root over SSH.
[ "nano /etc/ssh/sshd_config", "Authentication: #LoginGraceTime 2m PermitRootLogin yes #StrictModes yes #MaxAuthTries 6 #MaxSessions 10", "service sshd restart" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/v2v_guide/preparation_before_the_p2v_migration-enable_root_login_over_ssh
Chapter 4. Checking policy compliance
Chapter 4. Checking policy compliance You can use the roxctl CLI to check deployment YAML files and images for policy compliance. 4.1. Prerequisites You have configured the ROX_ENDPOINT environment variable using the following command: $ export ROX_ENDPOINT= <host:port> 1 1 The host and port information that you want to store in the ROX_ENDPOINT environment variable. 4.2. Configuring output format When you check policy compliance by using the roxctl deployment check or roxctl image check commands, you can specify the output format by using the -o option to the command and specifying the format as json , table , csv , or junit . This option determines how the output of a command is displayed in the terminal. For example, the following command checks a deployment and then displays the result in csv format: $ roxctl deployment check --file = <yaml_filename> -o csv Note When you do not specify the -o option for the output format, the following default behavior is used: The format for the deployment check and the image check commands is table . The default output format for the image scan command is json . This is the old JSON format output for compatibility with older versions of the CLI. To get the output in the new JSON format, explicitly specify the format, for example -o json . Use the old JSON format output when gathering data for troubleshooting purposes. Different options are available to configure the output. The following table lists the options and the format in which they are available. Option Description Formats --compact-output Use this option to display the JSON output in a compact format. json --headers Use this option to specify custom headers. table and csv --no-header Use this option to omit the header row from the output. table and csv --row-jsonpath-expressions Use this option to specify GJSON paths to select specific items from the output. For example, to get the Policy name and Severity for a deployment check, use the following command: $ roxctl deployment check --file= <yaml_filename> \ -o table --headers POLICY-NAME,SEVERITY \ --row-jsonpath-expressions="{results. .violatedPolicies. .name,results. .violatedPolicies. .severity}" table and csv --merge-output Use this option to merge table cells that have the same value. table --headers-as-comment Use this option to include the header row as a comment in the output. csv --junit-suite-name Use this option to specify the name of the JUnit test suite. junit 4.3. Checking deployment YAML files Procedure Run the following command to check the build-time and deploy-time violations of your security policies in YAML deployment files: $ roxctl deployment check --file=<yaml_filename> \ 1 --namespace=<cluster_namespace> \ 2 --cluster=<cluster_name_or_id> \ 3 --verbose 4 1 For the <yaml_filename> , specify the YAML file with one or more deployments to send to Central for policy evaluation. You can also specify multiple YAML files to send to Central for policy evaluation by using the --file flag, for example --file=<yaml_filename1> , --file=<yaml_filename2> , and so on. 2 For the <cluster_namespace> , specify a namespace to enhance deployments with context information such as network policies, role-based access controls (RBACs) and services for deployments that do not have a namespace in their specification. The namespace defined in the specification is not changed. The default value is default . 
3 For the <cluster_name_or_id> , specify the cluster name or ID that you want to use as the context for the evaluation to enable extended deployments with cluster-specific information. 4 By enabling the --verbose flag, you receive additional information for each deployment during the policy check. The extended information includes the RBAC permission level and a comprehensive list of network policies that are applied. Note You can see the additional information for each deployment in your JSON output, regardless of whether you enable the --verbose flag or not. The format is defined in the API reference. To cause Red Hat Advanced Cluster Security for Kubernetes (RHACS) to re-pull image metadata and image scan results from the associated registry and scanner, add the --force option. Note To check specific image scan results, you must have a token with both read and write permissions for the Image resource. The default Continuous Integration system role already has the required permissions. This command validates the following items: Configuration options in a YAML file, such as resource limits or privilege options Aspects of the images used in a YAML file, such as components or vulnerabilities 4.4. Checking images Procedure Run the following command to check the build-time violations of your security policies in images: $ roxctl image check --image= <image_name> The format is defined in the API reference. To cause Red Hat Advanced Cluster Security for Kubernetes (RHACS) to re-pull image metadata and image scan results from the associated registry and scanner, add the --force option. Note To check specific image scan results, you must have a token with both read and write permissions for the Image resource. The default Continuous Integration system role already has the required permissions. Additional resources roxctl image 4.5. Checking image scan results You can also check the scan results for specific images. Procedure Run the following command to return the components and vulnerabilities found in the image in JSON format: $ roxctl image scan --image <image_name> The format is defined in the API reference. To cause Red Hat Advanced Cluster Security for Kubernetes (RHACS) to re-pull image metadata and image scan results from the associated registry and scanner, add the --force option. Note To check specific image scan results, you must have a token with both read and write permissions for the Image resource. The default Continuous Integration system role already has the required permissions. Additional resources roxctl image
[ "export ROX_ENDPOINT= <host:port> 1", "roxctl deployment check --file = <yaml_filename> -o csv", "roxctl deployment check --file= <yaml_filename> -o table --headers POLICY-NAME,SEVERITY --row-jsonpath-expressions=\"{results. .violatedPolicies. .name,results. .violatedPolicies. .severity}\"", "roxctl deployment check --file=<yaml_filename> \\ 1 --namespace=<cluster_namespace> \\ 2 --cluster=<cluster_name_or_id> \\ 3 --verbose 4", "roxctl image check --image= <image_name>", "roxctl image scan --image <image_name>" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/roxctl_cli/checking-policy-compliance-1
Chapter 9. Accessing an overcloud deployed with the director Operator
Chapter 9. Accessing an overcloud deployed with the director Operator After you deploy the overcloud with the director Operator, you can access it and run commands with the openstack client tool. The main access point for the overcloud is through the OpenStackClient pod that the director Operator deploys as a part of the OpenStackControlPlane resource that you create. 9.1. Accessing the OpenStackClient pod The OpenStackClient pod is the main access point to run commands against the overcloud. This pod contains the client tools and authentication details that you require to perform actions on your overcloud. To access the pod from your workstation, you must use the oc command on your workstation to connect to the remote shell for the pod. Note When you access an overcloud that you deploy without the director Operator, you usually run the source ~/overcloudrc command to set environment variables to access the overcloud. You do not require this step with an overcloud that you deploy with the director Operator. Prerequisites Ensure your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly. Deploy and configure an overcloud that runs in your OCP cluster. Ensure that you have installed the oc command line tool on your workstation. Procedure Access the remote shell for openstackclient : Change to the cloud-admin home directory: Run your openstack commands. For example, you can create a default network with the following command: Additional resources "Creating basic overcloud flavors" "Creating a default tenant network" "Creating a default floating IP network" "Creating a default provider network" 9.2. Accessing the overcloud dashboard Access the dashboard of an overcloud that you deploy with the director Operator with the same method as a standard overcloud. Access the IP address of the overcloud host name or public VIP, which you usually set with the PublicVirtualFixedIPs heat parameter, with a web browser and log in to the overcloud dashboard with your username and password. Prerequisites Ensure your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly. Deploy and configure an overcloud that runs in your OCP cluster. Ensure that you have installed the oc command line tool on your workstation. Procedure Optional: To login as the admin user, obtain the admin password from the AdminPassword parameter in the tripleo-passwords secret: Open your web browser. Enter the host name or public VIP of the overcloud dashboard in the URL field. Log in to the dashboard with your chosen username and password.
[ "oc rsh -n openstack openstackclient", "cd /home/cloud-admin", "openstack network create default", "oc get secret tripleo-passwords -o jsonpath='{.data.tripleo-overcloud-passwords\\.yaml}' | base64 -d" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/rhosp_director_operator_for_openshift_container_platform/assembly_accessing-an-overcloud-deployed-with-the-director-operator_rhosp-director-operator
2.5.5.2. A Sample OProfile Session
2.5.5.2. A Sample OProfile Session This section shows an OProfile monitoring and data analysis session from initial configuration to final data analysis. It is only an introductory overview; for more detailed information, consult the System Administrators Guide . Use opcontrol to configure the type of data to be collected with the following command: The options used here direct opcontrol to: Direct OProfile to a copy of the currently running kernel ( --vmlinux=/boot/vmlinux-`uname -r` ) Specify that the processor's counter 0 is to be used and that the event to be monitored is the time when the CPU is executing instructions ( --ctr0-event=CPU_CLK_UNHALTED ) Specify that OProfile is to collect samples every 6000th time the specified event occurs ( --ctr0-count=6000 ) Next, check that the oprofile kernel module is loaded by using the lsmod command: Confirm that the OProfile file system (located in /dev/oprofile/ ) is mounted with the ls /dev/oprofile/ command: (The exact number of files varies according to processor type.) At this point, the /root/.oprofile/daemonrc file contains the settings required by the data collection software: Next, use opcontrol to actually start data collection with the opcontrol --start command: Verify that the oprofiled daemon is running with the command ps x | grep -i oprofiled : (The actual oprofiled command line displayed by ps is much longer; however, it has been truncated here for formatting purposes.) The system is now being monitored, with the data collected for all executables present on the system. The data is stored in the /var/lib/oprofile/samples/ directory. The files in this directory follow a somewhat unusual naming convention. Here is an example: The naming convention uses the absolute path of each file containing executable code, with the slash ( / ) characters replaced by right curly brackets ( } ), and ending with a pound sign ( # ) followed by a number (in this case, 0 .) Therefore, the file used in this example represents data collected while /usr/bin/less was running. Once data has been collected, use one of the analysis tools to display it. One nice feature of OProfile is that it is not necessary to stop data collection before performing a data analysis. However, you must wait for at least one set of samples to be written to disk, or use the opcontrol --dump command to force the samples to disk. In the following example, op_time is used to display (in reverse order -- from highest number of samples to lowest) the samples that have been collected: Using less is a good idea when producing a report interactively, as the reports can be hundreds of lines long. The example given here has been truncated for that reason. The format for this particular report is that one line is produced for each executable file for which samples were taken. Each line follows this format: Where: <sample-count> represents the number of samples collected <sample-percent> represents the percentage of all samples collected for this specific executable <unused-field> is a field that is not used <executable-name> represents the name of the file containing executable code for which samples were collected. This report (produced on a mostly-idle system) shows that nearly half of all samples were taken while the CPU was running code within the kernel itself. Next in line was the OProfile data collection daemon, followed by a variety of libraries and the X Window System server, XFree86 . 
It is worth noting that for the system running this sample session, the counter value of 6000 used represents the minimum value recommended by opcontrol --list-events . This means that -- at least for this particular system -- OProfile overhead at its highest consumes roughly 11% of the CPU.
[ "opcontrol \\ --vmlinux=/boot/vmlinux-`uname -r` \\ --ctr0-event=CPU_CLK_UNHALTED \\ --ctr0-count=6000", "Module Size Used by Not tainted oprofile 75616 1 ...", "0 buffer buffer_watershed cpu_type enable stats 1 buffer_size cpu_buffer_size dump kernel_only", "CTR_EVENT[0]=CPU_CLK_UNHALTED CTR_COUNT[0]=6000 CTR_KERNEL[0]=1 CTR_USER[0]=1 CTR_UM[0]=0 CTR_EVENT_VAL[0]=121 CTR_EVENT[1]= CTR_COUNT[1]= CTR_KERNEL[1]=1 CTR_USER[1]=1 CTR_UM[1]=0 CTR_EVENT_VAL[1]= one_enabled=1 SEPARATE_LIB_SAMPLES=0 SEPARATE_KERNEL_SAMPLES=0 VMLINUX=/boot/vmlinux-2.4.21-1.1931.2.349.2.2.entsmp", "Using log file /var/lib/oprofile/oprofiled.log Daemon started. Profiler running.", "32019 ? S 0:00 /usr/bin/oprofiled --separate-lib-samples=0 ... 32021 pts/0 S 0:00 grep -i oprofiled", "}usr}bin}less#0", "3321080 48.8021 0.0000 /boot/vmlinux-2.4.21-1.1931.2.349.2.2.entsmp 761776 11.1940 0.0000 /usr/bin/oprofiled 368933 5.4213 0.0000 /lib/tls/libc-2.3.2.so 293570 4.3139 0.0000 /usr/lib/libgobject-2.0.so.0.200.2 205231 3.0158 0.0000 /usr/lib/libgdk-x11-2.0.so.0.200.2 167575 2.4625 0.0000 /usr/lib/libglib-2.0.so.0.200.2 123095 1.8088 0.0000 /lib/libcrypto.so.0.9.7a 105677 1.5529 0.0000 /usr/X11R6/bin/XFree86 ...", "<sample-count> <sample-percent> <unused-field> <executable-name>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s3-resource-tools-oprofile-example
Chapter 13. Configuring Authentication
Chapter 13. Configuring Authentication Authentication is the way that a user is identified and verified to a system. The authentication process requires presenting some sort of identity and credentials, like a user name and password. The credentials are then compared to information stored in some data store on the system. In Red Hat Enterprise Linux, the Authentication Configuration Tool helps configure what kind of data store to use for user credentials, such as LDAP. For convenience and potentially part of single sign-on, Red Hat Enterprise Linux can use a central daemon to store user credentials for a number of different data stores. The System Security Services Daemon (SSSD) can interact with LDAP, Kerberos, and external applications to verify user credentials. The Authentication Configuration Tool can configure SSSD along with NIS, Winbind, and LDAP, so that authentication processing and caching can be combined. 13.1. Configuring System Authentication When a user logs into a Red Hat Enterprise Linux system, that user presents some sort of credential to establish the user identity. The system then checks those credentials against the configured authentication service. If the credentials match and the user account is active, then the user is authenticated . (Once a user is authenticated, then the information is passed to the access control service to determine what the user is permitted to do. Those are the resources the user is authorized to access.) The information to verify the user can be located on the local system or the local system can reference a user database on a remote system, such as LDAP or Kerberos. The system must have a configured list of valid account databases for it to check for user authentication. On Red Hat Enterprise Linux, the Authentication Configuration Tool has both GUI and command-line options to configure any user data stores. A local system can use a variety of different data stores for user information, including Lightweight Directory Access Protocol (LDAP), Network Information Service (NIS), and Winbind. Additionally, both LDAP and NIS data stores can use Kerberos to authenticate users. Important If a medium or high security level is set during installation or with the Security Level Configuration Tool, then the firewall prevents NIS authentication. For more information about firewalls, see the "Firewalls" section of the Security Guide . 13.1.1. Launching the Authentication Configuration Tool UI Log into the system as root. Open the System . Select the Administration menu. Select the Authentication item. Alternatively, run the system-config-authentication command. Important Any changes take effect immediately when the Authentication Configuration Tool UI is closed. There are two configuration tabs in the Authentication dialog box: Identity & Authentication , which configures the resource used as the identity store (the data repository where the user IDs and corresponding credentials are stored). Advanced Options , which allows authentication methods other than passwords or certificates, like smart cards and fingerprint. 13.1.2. Selecting the Identity Store for Authentication The Identity & Authentication tab sets how users should be authenticated. The default is to use local system authentication, meaning the users and their passwords are checked against local system accounts. A Red Hat Enterprise Linux machine can also use external resources which contain the users and credentials, including LDAP, NIS, and Winbind. Figure 13.1. 
Local Authentication 13.1.2.1. Configuring LDAP Authentication Either the openldap-clients package or the sssd package is used to configure an LDAP server for the user database. Both packages are installed by default. Open the Authentication Configuration Tool, as in Section 13.1.1, "Launching the Authentication Configuration Tool UI" . Select LDAP in the User Account Database drop-down menu. Set the information that is required to connect to the LDAP server. LDAP Search Base DN gives the root suffix or distinguished name (DN) for the user directory. All of the user entries used for identity/authentication will exist below this parent entry. For example, ou=people,dc=example,dc=com . This field is optional. If it is not specified, then the System Security Services Daemon (SSSD) attempts to detect the search base using the namingContexts and defaultNamingContext attributes in the LDAP server's configuration entry. LDAP Server gives the URL of the LDAP server. This usually requires both the host name and port number of the LDAP server, such as ldap://ldap.example.com:389 . Entering the secure protocol in the URL, ldaps:// , enables the Download CA Certificate button. Use TLS to encrypt connections sets whether to use Start TLS to encrypt the connections to the LDAP server. This enables a secure connection over a standard port. Selecting TLS enables the Download CA Certificate button, which retrieves the issuing CA certificate for the LDAP server from whatever certificate authority issued it. The CA certificate must be in the privacy enhanced mail (PEM) format. Important Do not select Use TLS to encrypt connections if the server URL uses a secure protocol ( ldaps ). This option uses Start TLS, which initiates a secure connection over a standard port; if a secure port is specified, then a protocol like SSL must be used instead of Start TLS. Select the authentication method. LDAP allows simple password authentication or Kerberos authentication. Using Kerberos is described in Section 13.1.2.4, "Using Kerberos with LDAP or NIS Authentication" . The LDAP password option uses PAM applications to use LDAP authentication. This option requires either a secure ( ldaps:// ) URL or the TLS option to connect to the LDAP server. 13.1.2.2. Configuring NIS Authentication Install the ypbind package. This is required for NIS services, but is not installed by default. When the ypbind service is installed, the portmap and ypbind services are started and enabled to start at boot time. Open the Authentication Configuration Tool, as in Section 13.1.1, "Launching the Authentication Configuration Tool UI" . Select NIS in the User Account Database drop-down menu. Set the information to connect to the NIS server, meaning the NIS domain name and the server host name. If the NIS server is not specified, the authconfig daemon scans for the NIS server. Select the authentication method. NIS allows simple password authentication or Kerberos authentication. Using Kerberos is described in Section 13.1.2.4, "Using Kerberos with LDAP or NIS Authentication" . For more information about NIS, see the "Securing NIS" section of the Security Guide . 13.1.2.3. Configuring Winbind Authentication Install the samba-winbind package. This is required for Windows integration features in Samba services, but is not installed by default. Open the Authentication Configuration Tool, as in Section 13.1.1, "Launching the Authentication Configuration Tool UI" . Select Winbind in the User Account Database drop-down menu. 
Set the information that is required to connect to the Microsoft Active Directory domain controller. Winbind Domain gives the Windows domain to connect to. This should be in the Windows 2000 format, such as DOMAIN . Security Model sets the security model to use for Samba clients. authconfig supports four types of security models: ads configures Samba to act as a domain member in an Active Directory Server realm. To operate in this mode, the krb5-server package must be installed and Kerberos must be configured properly. Also, when joining to the Active Directory Server using the command line, the following command must be used: net ads join domain has Samba validate the user name/password by authenticating it through a Windows primary or backup domain controller, much like a Windows server. server has a local Samba server validate the user name/password by authenticating it through another server, such as a Windows server. If the server authentication attempt fails, the system then attempts to authenticate using user mode. user requires a client to log in with a valid user name and password. This mode does support encrypted passwords. The user name format must be domain\user , such as EXAMPLE\jsmith . Note When verifying that a given user exists in the Windows domain, always use Windows 2000-style formats and escape the backslash (\) character. For example: This is the default option. Winbind ADS Realm gives the Active Directory realm that the Samba server will join. This is only used with the ads security model. Winbind Domain Controllers gives the domain controller to use. For more information about domain controllers, see Section 21.1.6.3, "Domain Controller" . Template Shell sets which login shell to use for Windows user account settings. Allow offline login allows authentication information to be stored in a local cache. The cache is referenced when a user attempts to authenticate to system resources while the system is offline. For more information about the Winbind service, see Section 21.1.2, "Samba Daemons and Related Services" . For additional information about configuring Winbind and troubleshooting tips, see the Knowledgebase on the Red Hat Customer Portal . Also, the Red Hat Access Labs page includes the Winbind Mapper utility that generates a part of the smb.conf file to help you connect a Red Hat Enterprise Linux to an Active Directory. 13.1.2.4. Using Kerberos with LDAP or NIS Authentication Both LDAP and NIS authentication stores support Kerberos authentication methods. Using Kerberos has a couple of benefits: It uses a security layer for communication while still allowing connections over standard ports. It automatically uses credentials caching with SSSD, which allows offline logins. Using Kerberos authentication requires the krb5-libs and krb5-workstation packages. The Kerberos password option from the Authentication Method drop-down menu automatically opens the fields required to connect to the Kerberos realm. Figure 13.2. Kerberos Fields Realm gives the name for the realm for the Kerberos server. The realm is the network that uses Kerberos, composed of one or more key distribution centers (KDC) and a potentially large number of clients. KDCs gives a comma-separated list of servers that issue Kerberos tickets. Admin Servers gives a list of administration servers running the kadmind process in the realm. Optionally, use DNS to resolve server host name and to find additional KDCs within the realm. 
For more information about Kerberos, see section "Using Kerberos" of the Red Hat Enterprise Linux 6 Managing Single Sign-On and Smart Cards guide. 13.1.3. Configuring Alternative Authentication Features The Authentication Configuration Tool also configures settings related to authentication behavior, apart from the identity store. This includes entirely different authentication methods (fingerprint scans and smart cards) or local authentication rules. These alternative authentication options are configured in the Advanced Options tab. Figure 13.3. Advanced Options 13.1.3.1. Using Fingerprint Authentication When there is appropriate hardware available, the Enable fingerprint reader support option allows fingerprint scans to be used to authenticate local users in addition to other credentials. 13.1.3.2. Setting Local Authentication Parameters There are two options in the Local Authentication Options area which define authentication behavior on the local system: Enable local access control instructs the /etc/security/access.conf file to check for local user authorization rules. Password Hashing Algorithm sets the hashing algorithm to use to encrypt locally-stored passwords. 13.1.3.3. Enabling Smart Card Authentication When there are appropriate smart card readers available, a system can accept smart cards (or tokens ) instead of other user credentials to authenticate. Once the Enable smart card support option is selected, then the behaviors of smart card authentication can be defined: Card Removal Action tells the system how to respond when the card is removed from the card reader during an active session. A system can either ignore the removal and allow the user to access resources as normal, or a system can immediately lock until the smart card is supplied. Require smart card login sets whether a smart card is required for logins or allowed for logins. When this option is selected, all other methods of authentication are immediately blocked. Warning Do not select this option until you have successfully authenticated to the system using a smart card. Using smart cards requires the pam_pkcs11 package. 13.1.3.4. Creating User Home Directories There is an option ( Create home directories on the first login ) to create a home directory automatically the first time that a user logs in. This option is beneficial with accounts that are managed centrally, such as with LDAP. However, this option should not be selected if a system like automount is used to manage user home directories. 13.1.4. Configuring Authentication from the Command Line The authconfig command-line tool updates all of the configuration files and services required for system authentication, according to the settings passed to the script. Along with allowing all of the identity and authentication configuration options that can be set through the UI, the authconfig tool can also be used to create backup and kickstart files. For a complete list of authconfig options, check the help output and the man page. 13.1.4.1. Tips for Using authconfig There are some things to remember when running authconfig : With every command, use either the --update or --test option. One of those options is required for the command to run successfully. Using --update writes the configuration changes. --test prints the changes to stdout but does not apply the changes to the configuration. Each enable option has a corresponding disable option. 13.1.4.2. Configuring LDAP User Stores To use an LDAP identity store, use the --enableldap . 
To use LDAP as the authentication source, use --enableldapauth and then the requisite connection information, like the LDAP server name, base DN for the user suffix, and (optionally) whether to use TLS. The authconfig command also has options to enable or disable RFC 2307bis schema for user entries, which is not possible through the Authentication Configuration UI. Be sure to use the full LDAP URL, including the protocol ( ldap or ldaps ) and the port number. Do not use a secure LDAP URL ( ldaps ) with the --enableldaptls option. Instead of using --ldapauth for LDAP password authentication, it is possible to use Kerberos with the LDAP user store. These options are described in Section 13.1.4.5, "Configuring Kerberos Authentication" . 13.1.4.3. Configuring NIS User Stores To use a NIS identity store, use the --enablenis . This automatically uses NIS authentication, unless the Kerberos parameters are explicitly set, so it uses Kerberos authentication ( Section 13.1.4.5, "Configuring Kerberos Authentication" ). The only parameters are to identify the NIS server and NIS domain; if these are not used, then the authconfig service scans the network for NIS servers. 13.1.4.4. Configuring Winbind User Stores Windows domains have several different security models, and the security model used in the domain determines the authentication configuration for the local system. For user and server security models, the Winbind configuration requires only the domain (or workgroup) name and the domain controller host names. Note The user name format must be domain\user , such as EXAMPLE\jsmith . When verifying that a given user exists in the Windows domain, always use Windows 2000-style formats and escape the backslash (\) character. For example: For ads and domain security models, the Winbind configuration allows additional configuration for the template shell and realm (ads only). For example: There are a lot of other options for configuring Windows-based authentication and the information for Windows user accounts, such as name formats, whether to require the domain name with the user name, and UID ranges. These options are listed in the authconfig help. 13.1.4.5. Configuring Kerberos Authentication Both LDAP and NIS allow Kerberos authentication to be used in place of their native authentication mechanisms. At a minimum, using Kerberos authentication requires specifying the realm, the KDC, and the administrative server. There are also options to use DNS to resolve client names and to find additional admin servers. 13.1.4.6. Configuring Local Authentication Settings The Authentication Configuration Tool can also control some user settings that relate to security, such as creating home directories, setting password hash algorithms, and authorization. These settings are done independently of identity/user store settings. For example, to create user home directories: To set or change the hash algorithm used to encrypt user passwords: 13.1.4.7. Configuring Fingerprint Authentication There is one option to enable support for fingerprint readers. This option can be used alone or in conjunction with other authconfig settings, like LDAP user stores. 13.1.4.8. Configuring Smart Card Authentication All that is required to use smart cards with a system is to set the --enablesmartcard option: There are other configuration options for smart cards, such as changing the default smart card module, setting the behavior of the system when the smart card is removed, and requiring smart cards for login. 
For example, this command instructs the system to lock out a user immediately if the smart card is removed (a setting of 1 ignores it if the smart card is removed): Once smart card authentication has been successfully configured and tested, then the system can be configured to require smart card authentication for users rather than simple password-based authentication. Warning Do not use the --enablerequiresmartcard option until you have successfully authenticated to the system using a smart card. Otherwise, users may be unable to log into the system. 13.1.4.9. Managing Kickstart and Configuration Files The --update option updates all of the configuration files with the configuration changes. There are a couple of alternative options with slightly different behavior: --kickstart writes the updated configuration to a kickstart file. --test prints the full configuration, with changes, to stdout but does not edit any configuration files. Additionally, authconfig can be used to back up and restore configurations. All archives are saved to a unique subdirectory in the /var/lib/authconfig/ directory. For example, the --savebackup option gives the backup directory as 2011-07-01 : This backs up all of the authentication configuration files beneath the /var/lib/authconfig/backup-2011-07-01 directory. Any of the saved backups can be used to restore the configuration using the --restorebackup option, giving the name of the manually-saved configuration: Additionally, authconfig automatically makes a backup of the configuration before it applies any changes (with the --update option). The configuration can be restored from the most recent automatic backup, without having to specify the exact backup, using the --restorelastbackup option. 13.1.5. Using Custom Home Directories If LDAP users have home directories that are not in /home and the system is configured to create home directories the first time users log in, then these directories are created with the wrong permissions. Apply the correct SELinux context and permissions from the /home directory to the home directory that is created on the local system. For example: Install the oddjob-mkhomedir package on the system. This package provides the pam_oddjob_mkhomedir.so library, which the Authentication Configuration Tool uses to create home directories. The pam_oddjob_mkhomedir.so library, unlike the default pam_mkhomedir.so library, can create SELinux labels. The Authentication Configuration Tool automatically uses the pam_oddjob_mkhomedir.so library if it is available. Otherwise, it will default to using pam_mkhomedir.so . Make sure the oddjobd service is running. Re-run the Authentication Configuration Tool and enable home directories, as in Section 13.1.3, "Configuring Alternative Authentication Features" . If home directories were created before the home directory configuration was changed, then correct the permissions and SELinux contexts. For example:
[ "~]# yum install ypbind", "~]# yum install samba-winbind", "~]# getent passwd domain\\\\user DOMAIN\\user:*:16777216:16777216:Name Surname:/home/DOMAIN/user:/bin/bash", "authconfig --enableldap --enableldapauth --ldapserver=ldap://ldap.example.com:389,ldap://ldap2.example.com:389 --ldapbasedn=\"ou=people,dc=example,dc=com\" --enableldaptls --ldaploadcacert=https://ca.server.example.com/caCert.crt --update", "authconfig --enablenis --nisdomain=EXAMPLE --nisserver=nis.example.com --update", "authconfig --enablewinbind --enablewinbindauth --smbsecurity=user|server --enablewinbindoffline --smbservers=ad.example.com --smbworkgroup=EXAMPLE --update", "~]# getent passwd domain\\\\user DOMAIN\\user:*:16777216:16777216:Name Surname:/home/DOMAIN/user:/bin/bash", "authconfig --enablewinbind --enablewinbindauth --smbsecurity ads --enablewinbindoffline --smbservers=ad.example.com --smbworkgroup=EXAMPLE --smbrealm EXAMPLE.COM --winbindtemplateshell=/bin/sh --update", "authconfig NIS or LDAP options --enablekrb5 --krb5realm EXAMPLE --krb5kdc kdc.example.com:88,server.example.com:88 --krb5adminserver server.example.com:749 --enablekrb5kdcdns --enablekrb5realmdns --update", "authconfig --enablemkhomedir --update", "authconfig --passalgo=sha512 --update", "~]# authconfig --enablefingerprint --update", "~]# authconfig --enablesmartcard --update", "~]# authconfig --enablesmartcard --smartcardaction=0 --update", "~]# authconfig --enablerequiresmartcard --update", "~]# authconfig --savebackup=2011-07-01", "~]# authconfig --restorebackup=2011-07-01", "~]# semanage fcontext -a -e /home /home/locale", "~]# semanage fcontext -a -e /home /home/locale restorecon -R -v /home/locale" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/ch-configuring_authentication
Chapter 6. Mirroring Ceph block devices
Chapter 6. Mirroring Ceph block devices As a storage administrator, you can add another layer of redundancy to Ceph block devices by mirroring data images between Red Hat Ceph Storage clusters. Understanding and using Ceph block device mirroring can provide you protection against data loss, such as a site failure. There are two configurations for mirroring Ceph block devices, one-way mirroring or two-way mirroring, and you can configure mirroring on pools and individual images. Prerequisites A minimum of two healthy running Red Hat Ceph Storage clusters. Network connectivity between the two storage clusters. Access to a Ceph client node for each Red Hat Ceph Storage cluster. A CephX user with administrator-level capabilities. 6.1. Ceph block device mirroring RADOS Block Device (RBD) mirroring is a process of asynchronous replication of Ceph block device images between two or more Ceph storage clusters. By locating a Ceph storage cluster in different geographic locations, RBD Mirroring can help you recover from a site disaster. Journal-based Ceph block device mirroring ensures point-in-time consistent replicas of all changes to an image, including reads and writes, block device resizing, snapshots, clones and flattening. RBD mirroring uses exclusive locks and the journaling feature to record all modifications to an image in the order in which they occur. This ensures that a crash-consistent mirror of an image is available. Important The CRUSH hierarchies supporting primary and secondary pools that mirror block device images must have the same capacity and performance characteristics, and must have adequate bandwidth to ensure mirroring without excess latency. For example, if you have X MB/s average write throughput to images in the primary storage cluster, the network must support N * X throughput in the network connection to the secondary site plus a safety factor of Y% to mirror N images. The rbd-mirror daemon is responsible for synchronizing images from one Ceph storage cluster to another Ceph storage cluster by pulling changes from the remote primary image and writes those changes to the local, non-primary image. The rbd-mirror daemon can run either on a single Ceph storage cluster for one-way mirroring or on two Ceph storage clusters for two-way mirroring that participate in the mirroring relationship. For RBD mirroring to work, either using one-way or two-way replication, a couple of assumptions are made: A pool with the same name exists on both storage clusters. A pool contains journal-enabled images you want to mirror. Important In one-way or two-way replication, each instance of rbd-mirror must be able to connect to the other Ceph storage cluster simultaneously. Additionally, the network must have sufficient bandwidth between the two data center sites to handle mirroring. One-way Replication One-way mirroring implies that a primary image or pool of images in one storage cluster gets replicated to a secondary storage cluster. One-way mirroring also supports replicating to multiple secondary storage clusters. On the secondary storage cluster, the image is the non-primary replicate; that is, Ceph clients cannot write to the image. When data is mirrored from a primary storage cluster to a secondary storage cluster, the rbd-mirror runs ONLY on the secondary storage cluster. For one-way mirroring to work, a couple of assumptions are made: You have two Ceph storage clusters and you want to replicate images from a primary storage cluster to a secondary storage cluster. 
The secondary storage cluster has a Ceph client node attached to it running the rbd-mirror daemon. The rbd-mirror daemon will connect to the primary storage cluster to sync images to the secondary storage cluster. Figure 6.1. One-way mirroring Two-way Replication Two-way replication adds an rbd-mirror daemon on the primary cluster so images can be demoted on it and promoted on the secondary cluster. Changes can then be made to the images on the secondary cluster and they will be replicated in the reverse direction, from secondary to primary. Both clusters must have rbd-mirror running to allow promoting and demoting images on either cluster. Currently, two-way replication is only supported between two sites. For two-way mirroring to work, a couple of assumptions are made: You have two storage clusters and you want to be able to replicate images between them in either direction. Both storage clusters have a client node attached to them running the rbd-mirror daemon. The rbd-mirror daemon running on the secondary storage cluster will connect to the primary storage cluster to synchronize images to secondary, and the rbd-mirror daemon running on the primary storage cluster will connect to the secondary storage cluster to synchronize images to primary. Figure 6.2. Two-way mirroring Mirroring Modes Mirroring is configured on a per-pool basis with mirror peering storage clusters. Ceph supports two mirroring modes, depending on the type of images in the pool. Pool Mode All images in a pool with the journaling feature enabled are mirrored. Image Mode Only a specific subset of images within a pool are mirrored. You must enable mirroring for each image separately. Image States Whether or not an image can be modified depends on its state: Images in the primary state can be modified. Images in the non-primary state cannot be modified. Images are automatically promoted to primary when mirroring is first enabled on an image. The promotion can happen: Implicitly by enabling mirroring in pool mode. Explicitly by enabling mirroring of a specific image. It is possible to demote primary images and promote non-primary images. Additional Resources See the Enabling mirroring on a pool section of the Red Hat Ceph Storage Block Device Guide for more details. See the Enabling image mirroring section of the Red Hat Ceph Storage Block Device Guide for more details. See the Image promotion and demotion section of the Red Hat Ceph Storage Block Device Guide for more details. 6.1.1. An overview of journal-based and snapshot-based mirroring RADOS Block Device (RBD) images can be asynchronously mirrored between two Red Hat Ceph Storage clusters through two modes: Journal-based mirroring This mode uses the RBD journaling image feature to ensure point-in-time and crash consistent replication between two Red Hat Ceph Storage clusters. The actual image is not modified until every write to the RBD image is first recorded to the associated journal. The remote cluster reads from this journal and replays the updates to its local copy of the image. Because each write to the RBD images results in two writes to the Ceph cluster, write latencies nearly double with the usage of the RBD journaling image feature. Snapshot-based mirroring This mode uses periodic scheduled or manually created RBD image mirror snapshots to replicate crash consistent RBD images between two Red Hat Ceph Storage clusters. 
The remote cluster determines any data or metadata updates between two mirror snapshots and copies the deltas to its local copy of the image. The RBD fast-diff image feature enables the quick determination of updated data blocks without the need to scan the full RBD image. The complete delta between two snapshots needs to be synchronized prior to use during a failover scenario. Any partially applied set of deltas are rolled back at moment of failover. 6.2. Configuring one-way mirroring using the command-line interface This procedure configures one-way replication of a pool from the primary storage cluster to a secondary storage cluster. Note When using one-way replication you can mirror to multiple secondary storage clusters. Note Examples in this section will distinguish between two storage clusters by referring to the primary storage cluster with the primary images as site-a , and the secondary storage cluster you are replicating the images to, as site-b . The pool name used in these examples is called data . Prerequisites A minimum of two healthy and running Red Hat Ceph Storage clusters. Root-level access to a Ceph client node for each storage cluster. A CephX user with administrator-level capabilities. Procedure Log into the cephadm shell on both the sites: Example On site-b , schedule the deployment of mirror daemon on the secondary cluster: Syntax Example Note The nodename is the host where you want to configure mirroring in the secondary cluster. Enable journaling features on an image on site-a . For new images , use the --image-feature option: Syntax Example Note If exclusive-lock is already enabled, use journaling as the only argument, else it returns the following error: For existing images , use the rbd feature enable command: Syntax Example To enable journaling on all new images by default, set the configuration parameter using ceph config set command: Example Choose the mirroring mode, either pool or image mode, on both the storage clusters. Enabling pool mode : Syntax Example This example enables mirroring of the whole pool named data . Enabling image mode : Syntax Example This example enables image mode mirroring on the pool named data . Note To enable mirroring on specific images in a pool, see the Enabling image mirroring section in the Red Hat Ceph Storage Block Device Guide for more details. Verify that mirroring has been successfully enabled at both the sites: Syntax Example On a Ceph client node, bootstrap the storage cluster peers. Create Ceph user accounts, and register the storage cluster peer to the pool: Syntax Example Note This example bootstrap command creates the client.rbd-mirror.site-a and the client.rbd-mirror-peer Ceph users. Copy the bootstrap token file to the site-b storage cluster. Import the bootstrap token on the site-b storage cluster: Syntax Example Note For one-way RBD mirroring, you must use the --direction rx-only argument, as two-way mirroring is the default when bootstrapping peers. To verify the mirroring status, run the following command from a Ceph Monitor node on the primary and secondary sites: Syntax Example Here, up means the rbd-mirror daemon is running, and stopped means this image is not the target for replication from another storage cluster. This is because the image is primary on this storage cluster. Example Additional Resources See the Ceph block device mirroring section in the Red Hat Ceph Storage Block Device Guide for more details. 
See the User Management section in the Red Hat Ceph Storage Administration Guide for more details on Ceph users. 6.3. Configuring two-way mirroring using the command-line interface This procedure configures two-way replication of a pool between the primary storage cluster, and a secondary storage cluster. Note When using two-way replication you can only mirror between two storage clusters. Note Examples in this section will distinguish between two storage clusters by referring to the primary storage cluster with the primary images as site-a , and the secondary storage cluster you are replicating the images to, as site-b . The pool name used in these examples is called data . Prerequisites A minimum of two healthy and running Red Hat Ceph Storage clusters. Root-level access to a Ceph client node for each storage cluster. A CephX user with administrator-level capabilities. Procedure Log into the cephadm shell on both the sites: Example On the site-a primary cluster, run the following command: Example Note The nodename is the host where you want to configure mirroring. On site-b , schedule the deployment of mirror daemon on the secondary cluster: Syntax Example Note The nodename is the host where you want to configure mirroring in the secondary cluster. Enable journaling features on an image on site-a . For new images , use the --image-feature option: Syntax Example Note If exclusive-lock is already enabled, use journaling as the only argument, else it returns the following error: For existing images , use the rbd feature enable command: Syntax Example To enable journaling on all new images by default, set the configuration parameter using ceph config set command: Example Choose the mirroring mode, either pool or image mode, on both the storage clusters. Enabling pool mode : Syntax Example This example enables mirroring of the whole pool named data . Enabling image mode : Syntax Example This example enables image mode mirroring on the pool named data . Note To enable mirroring on specific images in a pool, see the Enabling image mirroring section in the Red Hat Ceph Storage Block Device Guide for more details. Verify that mirroring has been successfully enabled at both the sites: Syntax Example On a Ceph client node, bootstrap the storage cluster peers. Create Ceph user accounts, and register the storage cluster peer to the pool: Syntax Example Note This example bootstrap command creates the client.rbd-mirror.site-a and the client.rbd-mirror-peer Ceph users. Copy the bootstrap token file to the site-b storage cluster. Import the bootstrap token on the site-b storage cluster: Syntax Example Note The --direction argument is optional, as two-way mirroring is the default when bootstrapping peers. To verify the mirroring status, run the following command from a Ceph Monitor node on the primary and secondary sites: Syntax Example Here, up means the rbd-mirror daemon is running, and stopped means this image is not the target for replication from another storage cluster. This is because the image is primary on this storage cluster. Example If images are in the state up+replaying , then mirroring is functioning properly. Here, up means the rbd-mirror daemon is running, and replaying means this image is the target for replication from another storage cluster. Note Depending on the connection between the sites, mirroring can take a long time to sync the images. Additional Resources See the Ceph block device mirroring section in the Red Hat Ceph Storage Block Device Guide for more details. 
See the User Management section in the Red Hat Ceph Storage Administration Guide for more details on Ceph users. 6.4. Administration for mirroring Ceph block devices As a storage administrator, you can do various tasks to help you manage the Ceph block device mirroring environment. You can do the following tasks: Viewing information about storage cluster peers. Add or remove a storage cluster peer. Getting mirroring status for a pool or image. Enabling mirroring on a pool or image. Disabling mirroring on a pool or image. Delaying block device replication. Promoting and demoting an image. Prerequisites A minimum of two healthy running Red Hat Ceph Storage cluster. Root-level access to the Ceph client nodes. A one-way or two-way Ceph block device mirroring relationship. A CephX user with administrator-level capabilities. 6.4.1. Viewing information about peers View information about storage cluster peers. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure To view information about the peers: Syntax Example 6.4.2. Enabling mirroring on a pool Enable mirroring on a pool by running the following commands on both peer clusters. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure To enable mirroring on a pool: Syntax Example This example enables mirroring of the whole pool named data . Example This example enables image mode mirroring on the pool named data . Additional Resources See the Mirroring Ceph block devices section in the Red Hat Ceph Storage Block Device Guide for details. 6.4.3. Disabling mirroring on a pool Before disabling mirroring, remove the peer clusters. Note When you disable mirroring on a pool, you also disable it on any images within the pool for which mirroring was enabled separately in image mode. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure To disable mirroring on a pool: Syntax Example This example disables mirroring of a pool named data . 6.4.4. Enabling image mirroring Enable mirroring on the whole pool in image mode on both peer storage clusters. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Enable mirroring for a specific image within the pool: Syntax Example This example enables mirroring for the image2 image in the data pool. Additional Resources See the Enabling mirroring on a pool section in the Red Hat Ceph Storage Block Device Guide for details. 6.4.5. Disabling image mirroring You can disable Ceph Block Device mirroring on images. Prerequisites A running Red Hat Ceph Storage cluster with snapshot-based mirroring configured. Root-level access to the node. Procedure To disable mirroring for a specific image: Syntax Example This example disables mirroring of the image2 image in the data pool. Additional Resources See the Configuring Ansible inventory location section in the Red Hat Ceph Storage Installation Guide for more details on adding clients to the cephadm-ansible inventory. 6.4.6. Image promotion and demotion You can promote or demote an image in a pool. Note Do not force promote non-primary images that are still syncing, because the images will not be valid after the promotion. Prerequisites A running Red Hat Ceph Storage cluster with snapshot-based mirroring configured. Root-level access to the node. Procedure To demote an image to non-primary: Syntax Example This example demotes the image2 image in the data pool. 
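A short sketch of the per-pool and per-image administration commands described above, using the data pool and image2 placeholders from the examples in this section:

rbd mirror pool info data            # view configured peers for the pool
rbd mirror pool enable data image    # enable image-mode mirroring on the pool
rbd mirror pool disable data         # disable mirroring on the pool (remove peers first)
rbd mirror image enable data/image2  # enable mirroring for one image
rbd mirror image disable data/image2 # disable mirroring for one image
rbd mirror image demote data/image2  # demote the image to non-primary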
To promote an image to primary: Syntax Example This example promotes image2 in the data pool. Depending on which type of mirroring you are using, see either Recover from a disaster with one-way mirroring or Recover from a disaster with two-way mirroring for details. Syntax Example Use forced promotion when the demotion cannot be propagated to the peer Ceph storage cluster. For example, because of cluster failure or communication outage. Additional Resources See the Failover after a non-orderly shutdown section in the Red Hat Ceph Storage Block Device Guide for details. 6.4.7. Image resynchronization You can resynchronize an image to restore consistent state. The rbd-mirror daemon does not attempt to mirror an image if there is inconsistent state between peer clusters. Resynchronization recreates an image using a complete copy of the primary image from the cluster. Warning Resynchronization removes preexisting mirror-snapshot schedules for an image. Prerequisites A running Red Hat Ceph Storage cluster with snapshot-based mirroring configured. Root-level access to the node. Procedure To request a resynchronization to the primary image: Syntax Example This example requests resynchronization of image2 in the data pool. Additional Resources To recover from an inconsistent state because of a disaster, see either Recover from a disaster with one-way mirroring or Recover from a disaster with two-way mirroring for details. 6.4.8. Adding a storage cluster peer Add a storage cluster peer for the rbd-mirror daemon to discover its peer storage cluster. For example, to add the site-a storage cluster as a peer to the site-b storage cluster, then follow this procedure from the client node in the site-b storage cluster. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Register the peer to the pool: Syntax Example 6.4.9. Removing a storage cluster peer Remove a storage cluster peer by specifying the peer UUID. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Specify the pool name and the peer Universally Unique Identifier (UUID). Syntax Example Tip To view the peer UUID, use the rbd mirror pool info command. 6.4.10. Getting mirroring status for a pool You can get the mirror status for a pool on the storage clusters. Prerequisites A running Red Hat Ceph Storage cluster with snapshot-based mirroring configured. Root-level access to the node. Procedure To get the mirroring pool summary: Syntax Example Tip To output status details for every mirroring image in a pool, use the --verbose option. 6.4.11. Getting mirroring status for a single image You can get the mirror status for an image by running the mirror image status command. Prerequisites A running Red Hat Ceph Storage cluster with snapshot-based mirroring configured. Root-level access to the node. Procedure To get the status of a mirrored image: Syntax Example This example gets the status of the image2 image in the data pool. 6.4.12. Delaying block device replication Whether you are using one- or two-way replication, you can delay replication between RADOS Block Device (RBD) mirroring images. You might want to implement delayed replication if you want a window of cushion time in case an unwanted change to the primary image needs to be reverted before being replicated to the secondary image. Note Delaying block device replication is only applicable with journal-based mirroring. 
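Before the delay configuration itself, a consolidated sketch of the promotion, resynchronization, peer, and status commands covered above; the pool, image, site, and client names follow this section's examples, and the peer UUID shown is a placeholder:

rbd mirror image promote data/image2          # promote the image to primary
rbd mirror image promote --force data/image2  # force promotion when the peer is unreachable
rbd mirror image resync data/image2           # request a full resynchronization from the primary
rbd --cluster site-b mirror pool peer add data client.site-a@site-a -n client.site-b
rbd mirror pool peer remove data 7e90b4ce-e36d-4f07-8cbc-42050896825d
rbd mirror pool status data --verbose         # pool-level mirroring status, per-image detail
rbd mirror image status data/image2           # status for a single mirrored image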
To implement delayed replication, the rbd-mirror daemon within the destination storage cluster should set the rbd_mirroring_replay_delay = MINIMUM_DELAY_IN_SECONDS configuration option. This setting can either be applied globally within the ceph.conf file utilized by the rbd-mirror daemons, or on an individual image basis. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure To use delayed replication for a specific image, on the primary image, run the following rbd CLI command: Syntax Example This example sets a 10 minute minimum replication delay on image vm-1 in the vms pool. 6.4.13. Converting journal-based mirroring to snapshot-based mirroring You can convert journal-based mirroring to snapshot-based mirroring by disabling mirroring on the image and then enabling it again in snapshot mode. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Log into the Cephadm shell: Example Disable mirroring for a specific image within the pool: Syntax Example Enable snapshot-based mirroring for the image: Syntax Example This example enables snapshot-based mirroring for the mirror_image image in the mirror_pool pool. 6.4.14. Creating an image mirror-snapshot When using snapshot-based mirroring, create an image mirror-snapshot whenever the changed contents of an RBD image need to be mirrored. Prerequisites A minimum of two healthy running Red Hat Ceph Storage clusters. Root-level access to the Ceph client nodes for the Red Hat Ceph Storage clusters. A CephX user with administrator-level capabilities. Access to the Red Hat Ceph Storage cluster where a snapshot mirror will be created. Important By default, a maximum of 5 image mirror-snapshots are retained. The most recent image mirror-snapshot is automatically removed if the limit is reached. If required, the limit can be overridden through the rbd_mirroring_max_mirroring_snapshots configuration. Image mirror-snapshots are automatically deleted when the image is removed or when mirroring is disabled. Procedure To create an image mirror-snapshot: Syntax Example Additional Resources See the Mirroring Ceph block devices section in the Red Hat Ceph Storage Block Device Guide for details. 6.4.15. Scheduling mirror-snapshots Mirror-snapshots can be automatically created when mirror-snapshot schedules are defined. A mirror-snapshot can be scheduled at the global, per-pool, or per-image level. Multiple mirror-snapshot schedules can be defined at any level, but only the most specific snapshot schedules that match an individual mirrored image will run. 6.4.15.1. Creating a mirror-snapshot schedule You can create a mirror-snapshot schedule using the snapshot schedule command. Prerequisites A minimum of two healthy running Red Hat Ceph Storage clusters. Root-level access to the Ceph client nodes for the Red Hat Ceph Storage clusters. A CephX user with administrator-level capabilities. Access to the Red Hat Ceph Storage cluster where the mirror image snapshot needs to be scheduled. Procedure To create a mirror-snapshot schedule: Syntax The CLUSTER_NAME should be used only when the cluster name is different from the default name ceph . The interval can be specified in days, hours, or minutes using the d, h, or m suffix respectively. The optional START_TIME can be specified using the ISO 8601 time format. Example Example Additional Resources See the Mirroring Ceph block devices section in the Red Hat Ceph Storage Block Device Guide for details. 6.4.15.2.
Listing all snapshot schedules at a specific level You can list all snapshot schedules at a specific level. Prerequisites A minimum of two healthy running Red Hat Ceph Storage clusters. Root-level access to the Ceph client nodes for the Red Hat Ceph Storage clusters. A CephX user with administrator-level capabilities. Access to the Red Hat Ceph Storage cluster where the mirror image snapshot needs to be scheduled. Procedure To list all snapshot schedules for a specific global, pool or image level, with an optional pool or image name: Syntax Additionally, the --recursive option can be specified to list all schedules at the specified level as shown below: Example Additional Resources See the Mirroring Ceph block devices section in the Red Hat Ceph Storage Block Device Guide for details. 6.4.15.3. Removing a mirror-snapshot schedule You can remove a mirror-snapshot schedule using the snapshot schedule remove command. Prerequisites A minimum of two healthy running Red Hat Ceph Storage clusters. Root-level access to the Ceph client nodes for the Red Hat Ceph Storage clusters. A CephX user with administrator-level capabilities. Access to the Red Hat Ceph Storage cluster where the mirror image snapshot needs to be scheduled. Procedure To remove a mirror-snapshot schedule: Syntax The interval can be specified in days, hours, or minutes using d, h, m suffix respectively. The optional START_TIME can be specified using the ISO 8601 time format. Example Example Additional Resources See the Mirroring Ceph block devices section in the Red Hat Ceph Storage Block Device Guide for details. 6.4.15.4. Viewing the status for the snapshots to be created You can view the status for the snapshots to be created for snapshot-based mirroring RBD images. Prerequisites A minimum of two healthy running Red Hat Ceph Storage clusters. Root-level access to the Ceph client nodes for the Red Hat Ceph Storage clusters. A CephX user with administrator-level capabilities. Access to the Red Hat Ceph Storage cluster where the mirror image snapshot needs to be scheduled. Procedure To view the status for the snapshots to be created: Syntax Example Additional Resources See the Mirroring Ceph block devices section in the Red Hat Ceph Storage Block Device Guide for details. 6.5. Recover from a disaster As a storage administrator, you can be prepared for eventual hardware failure by knowing how to recover the data from another storage cluster where mirroring was configured. In the examples, the primary storage cluster is known as the site-a , and the secondary storage cluster is known as the site-b . Additionally, the storage clusters both have a data pool with two images, image1 and image2 . Prerequisites A running Red Hat Ceph Storage cluster. One-way or two-way mirroring was configured. 6.5.1. Disaster recovery Asynchronous replication of block data between two or more Red Hat Ceph Storage clusters reduces downtime and prevents data loss in the event of a significant data center failure. These failures have a widespread impact, also referred as a large blast radius , and can be caused by impacts to the power grid and natural disasters. Customer data needs to be protected during these scenarios. Volumes must be replicated with consistency and efficiency and also within Recovery Point Objective (RPO) and Recovery Time Objective (RTO) targets. This solution is called a Wide Area Network- Disaster Recovery (WAN-DR). In such scenarios it is hard to restore the primary system and the data center. 
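Stepping back to the mirror-snapshot and schedule subsections above, a consolidated sketch of those commands, using the data pool and image1 placeholders from the examples:

rbd mirror image snapshot data/image1                                            # create a mirror-snapshot on demand
rbd mirror snapshot schedule add --pool data --image image1 6h                   # snapshot every 6 hours
rbd mirror snapshot schedule add --pool data --image image1 24h 14:00:00-05:00   # daily, with a start time
rbd mirror snapshot schedule ls --pool data --recursive                          # list schedules at all levels
rbd mirror snapshot schedule remove --pool data --image image1 6h                # remove a schedule
rbd mirror snapshot schedule status                                              # when the next snapshots are due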
The quickest way to recover is to failover the applications to an alternate Red Hat Ceph Storage cluster (disaster recovery site) and make the cluster operational with the latest copy of the data available. The solutions that are used to recover from these failure scenarios are guided by the application: Recovery Point Objective (RPO) : The amount of data loss, an application tolerate in the worst case. Recovery Time Objective (RTO) : The time taken to get the application back on line with the latest copy of the data available. Additional Resources See the Mirroring Ceph block devices Chapter in the Red Hat Ceph Storage Block Device Guide for details. See the Encryption in transit section in the Red Hat Ceph Storage Data Security and Hardening Guide to know more about data transmission over the wire in an encrypted state. 6.5.2. Recover from a disaster with one-way mirroring To recover from a disaster when using one-way mirroring use the following procedures. They show how to fail over to the secondary cluster after the primary cluster terminates, and how to fail back. The shutdown can be orderly or non-orderly. Important One-way mirroring supports multiple secondary sites. If you are using additional secondary clusters, choose one of the secondary clusters to fail over to. Synchronize from the same cluster during fail back. 6.5.3. Recover from a disaster with two-way mirroring To recover from a disaster when using two-way mirroring use the following procedures. They show how to fail over to the mirrored data on the secondary cluster after the primary cluster terminates, and how to failback. The shutdown can be orderly or non-orderly. 6.5.4. Failover after an orderly shutdown Failover to the secondary storage cluster after an orderly shutdown. Prerequisites Minimum of two running Red Hat Ceph Storage clusters. Root-level access to the node. Pool mirroring or image mirroring configured with one-way mirroring. Procedure Stop all clients that use the primary image. This step depends on which clients use the image. For example, detach volumes from any OpenStack instances that use the image. Demote the primary images located on the site-a cluster by running the following commands on a monitor node in the site-a cluster: Syntax Example Promote the non-primary images located on the site-b cluster by running the following commands on a monitor node in the site-b cluster: Syntax Example After some time, check the status of the images from a monitor node in the site-b cluster. They should show a state of up+stopped and be listed as primary: Resume the access to the images. This step depends on which clients use the image. Additional Resources See the Block Storage and Volumes chapter in the Red Hat OpenStack Platform Storage Guide . 6.5.5. Failover after a non-orderly shutdown Failover to secondary storage cluster after a non-orderly shutdown. Prerequisites Minimum of two running Red Hat Ceph Storage clusters. Root-level access to the node. Pool mirroring or image mirroring configured with one-way mirroring. Procedure Verify that the primary storage cluster is down. Stop all clients that use the primary image. This step depends on which clients use the image. For example, detach volumes from any OpenStack instances that use the image. Promote the non-primary images from a Ceph Monitor node in the site-b storage cluster. 
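For the orderly-shutdown failover described above, a minimal sketch using the image1 and image2 placeholders from this chapter; run the demotions on site-a and the promotions on site-b:

# On a site-a monitor node: demote the primary images
rbd mirror image demote data/image1
rbd mirror image demote data/image2

# On a site-b monitor node: promote the non-primary images
rbd mirror image promote data/image1
rbd mirror image promote data/image2

# On site-b: confirm the images report up+stopped and local image is primary
rbd mirror image status data/image1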
When promoting, use the --force option, because the demotion cannot be propagated to the site-a storage cluster: Syntax Example Check the status of the images from a Ceph Monitor node in the site-b storage cluster. They should show a state of up+stopping_replay . The description should say force promoted , meaning the image is in an intermediate state. Wait until the state changes to up+stopped to validate that the site is successfully promoted. Example Additional Resources See the Block Storage and Volumes chapter in the Red Hat OpenStack Platform Storage Guide . 6.5.6. Prepare for fail back If the two storage clusters were originally configured only for one-way mirroring, then to fail back, configure the primary storage cluster for mirroring so that it can replicate the images in the opposite direction. In a failback scenario, the existing peer that is inaccessible must be removed before adding a new peer to the existing cluster. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the client node. Procedure Log into the Cephadm shell: Example On the site-a storage cluster , run the following command: Example Remove any inaccessible peers. Important This step must be run on the peer site which is up and running. Note Multiple peers are supported only for one-way mirroring. Get the peer UUID: Syntax Example Remove the inaccessible peer: Syntax Example Create a block device pool with the same name as its peer mirror pool. To create an rbd pool, execute the following: Syntax Example On a Ceph client node, bootstrap the storage cluster peers. Create Ceph user accounts, and register the storage cluster peer to the pool: Syntax Example Note This example bootstrap command creates the client.rbd-mirror.site-a and the client.rbd-mirror-peer Ceph users. Copy the bootstrap token file to the site-b storage cluster. Import the bootstrap token on the site-b storage cluster: Syntax Example Note For one-way RBD mirroring, you must use the --direction rx-only argument, as two-way mirroring is the default when bootstrapping peers. From a monitor node in the site-a storage cluster, verify the site-b storage cluster was successfully added as a peer: Example Additional Resources For detailed information, see the User Management chapter in the Red Hat Ceph Storage Administration Guide . 6.5.6.1. Fail back to the primary storage cluster When the formerly primary storage cluster recovers, fail back to the primary storage cluster. Note If you have scheduled snapshots at the image level, then you need to re-add the schedule, as an image resync operation changes the RBD image ID and the schedule becomes obsolete. Prerequisites Minimum of two running Red Hat Ceph Storage clusters. Root-level access to the node. Pool mirroring or image mirroring configured with one-way mirroring. Procedure Check the status of the images from a monitor node in the site-b cluster again. They should show a state of up+stopped and the description should say local image is primary : Example From a Ceph Monitor node on the site-a storage cluster, determine whether the images are still primary: Syntax Example In the output from the commands, look for mirroring primary: true or mirroring primary: false , to determine the state. Demote any images that are listed as primary by running a command like the following from a Ceph Monitor node in the site-a storage cluster: Syntax Example If there was a non-orderly shutdown, resynchronize the images.
Images inconsistent with other nodes in the cluster are recreated using complete copies of primary images from the cluster. Run the following commands on a monitor node in the site-a storage cluster to resynchronize the images from site-b to site-a : Syntax Example After some time, ensure resynchronization of the images is complete by verifying they are in the up+replaying state. Check their state by running the following commands on a monitor node in the site-a storage cluster: Syntax Example Demote the images on the site-b storage cluster by running the following commands on a Ceph Monitor node in the site-b storage cluster: Syntax Example Note If there are multiple secondary storage clusters, this only needs to be done from the secondary storage cluster where it was promoted. Promote the formerly primary images located on the site-a storage cluster by running the following commands on a Ceph Monitor node in the site-a storage cluster: Syntax Example Check the status of the images from a Ceph Monitor node in the site-a storage cluster. They should show a status of up+stopped and the description should say local image is primary : Syntax Example 6.5.7. Remove two-way mirroring After fail back is complete, you can remove two-way mirroring and disable the Ceph block device mirroring service. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Remove the site-b storage cluster as a peer from the site-a storage cluster: Example Stop and disable the rbd-mirror daemon on the site-a client: Syntax Example
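A sketch of this final cleanup, using the site-a and site-b placeholders from this chapter; the client ID in the systemctl unit name depends on how the rbd-mirror daemon was deployed on your client:

# On site-a: remove site-b as a peer of the data pool
rbd --cluster site-a mirror pool peer remove data client.site-b@site-b -n client.site-a

# On the site-a client: stop and disable the rbd-mirror daemon
systemctl stop ceph-rbd-mirror@site-a
systemctl disable ceph-rbd-mirror@site-a
systemctl disable ceph-rbd-mirror.target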
[ "cephadm shell cephadm shell", "ceph orch apply rbd-mirror --placement= NODENAME", "ceph orch apply rbd-mirror --placement=host04", "rbd create IMAGE_NAME --size MEGABYTES --pool POOL_NAME --image-feature FEATURE FEATURE", "rbd create image1 --size 1024 --pool data --image-feature exclusive-lock,journaling", "one or more requested features are already enabled (22) Invalid argument", "rbd feature enable POOL_NAME / IMAGE_NAME FEATURE , FEATURE", "rbd feature enable data/image1 exclusive-lock, journaling", "ceph config set global rbd_default_features 125 ceph config show mon.host01 rbd_default_features", "rbd mirror pool enable POOL_NAME MODE", "rbd mirror pool enable data pool rbd mirror pool enable data pool", "rbd mirror pool enable POOL_NAME MODE", "rbd mirror pool enable data image rbd mirror pool enable data image", "rbd mirror pool info POOL_NAME", "rbd mirror pool info data Mode: pool Site Name: c13d8065-b33d-4cb5-b35f-127a02768e7f Peer Sites: none rbd mirror pool info data Mode: pool Site Name: a4c667e2-b635-47ad-b462-6faeeee78df7 Peer Sites: none", "rbd mirror pool peer bootstrap create --site-name PRIMARY_LOCAL_SITE_NAME POOL_NAME > PATH_TO_BOOTSTRAP_TOKEN", "rbd mirror pool peer bootstrap create --site-name site-a data > /root/bootstrap_token_site-a", "rbd mirror pool peer bootstrap import --site-name SECONDARY_LOCAL_SITE_NAME --direction rx-only POOL_NAME PATH_TO_BOOTSTRAP_TOKEN", "rbd mirror pool peer bootstrap import --site-name site-b --direction rx-only data /root/bootstrap_token_site-a", "rbd mirror image status POOL_NAME / IMAGE_NAME", "rbd mirror image status data/image1 image1: global_id: c13d8065-b33d-4cb5-b35f-127a02768e7f state: up+stopped description: remote image is non-primary service: host03.yuoosv on host03 last_update: 2021-10-06 09:13:58", "rbd mirror image status data/image1 image1: global_id: c13d8065-b33d-4cb5-b35f-127a02768e7f", "cephadm shell cephadm shell", "ceph orch apply rbd-mirror --placement=host01", "ceph orch apply rbd-mirror --placement= NODENAME", "ceph orch apply rbd-mirror --placement=host04", "rbd create IMAGE_NAME --size MEGABYTES --pool POOL_NAME --image-feature FEATURE FEATURE", "rbd create image1 --size 1024 --pool data --image-feature exclusive-lock,journaling", "one or more requested features are already enabled (22) Invalid argument", "rbd feature enable POOL_NAME / IMAGE_NAME FEATURE , FEATURE", "rbd feature enable data/image1 exclusive-lock, journaling", "ceph config set global rbd_default_features 125 ceph config show mon.host01 rbd_default_features", "rbd mirror pool enable POOL_NAME MODE", "rbd mirror pool enable data pool rbd mirror pool enable data pool", "rbd mirror pool enable POOL_NAME MODE", "rbd mirror pool enable data image rbd mirror pool enable data image", "rbd mirror pool info POOL_NAME", "rbd mirror pool info data Mode: pool Site Name: c13d8065-b33d-4cb5-b35f-127a02768e7f Peer Sites: none rbd mirror pool info data Mode: pool Site Name: a4c667e2-b635-47ad-b462-6faeeee78df7 Peer Sites: none", "rbd mirror pool peer bootstrap create --site-name PRIMARY_LOCAL_SITE_NAME POOL_NAME > PATH_TO_BOOTSTRAP_TOKEN", "rbd mirror pool peer bootstrap create --site-name site-a data > /root/bootstrap_token_site-a", "rbd mirror pool peer bootstrap import --site-name SECONDARY_LOCAL_SITE_NAME --direction rx-tx POOL_NAME PATH_TO_BOOTSTRAP_TOKEN", "rbd mirror pool peer bootstrap import --site-name site-b --direction rx-tx data /root/bootstrap_token_site-a", "rbd mirror image status POOL_NAME / IMAGE_NAME", "rbd mirror image status 
data/image1 image1: global_id: a4c667e2-b635-47ad-b462-6faeeee78df7 state: up+stopped description: local image is primary service: host03.glsdbv on host03.ceph.redhat.com last_update: 2021-09-16 10:55:58 peer_sites: name: a state: up+stopped description: replaying, {\"bytes_per_second\":0.0,\"entries_behind_primary\":0,\"entries_per_second\":0.0,\"non_primary_position\":{\"entry_tid\":3,\"object_number\":3,\"tag_tid\":1},\"primary_position\":{\"entry_tid\":3,\"object_number\":3,\"tag_tid\":1}} last_update: 2021-09-16 10:55:50", "rbd mirror image status data/image1 image1: global_id: a4c667e2-b635-47ad-b462-6faeeee78df7 state: up+replaying description: replaying, {\"bytes_per_second\":0.0,\"entries_behind_primary\":0,\"entries_per_second\":0.0,\"non_primary_position\":{\"entry_tid\":3,\"object_number\":3,\"tag_tid\":1},\"primary_position\":{\"entry_tid\":3,\"object_number\":3,\"tag_tid\":1}} service: host05.dtisty on host05 last_update: 2021-09-16 10:57:20 peer_sites: name: b state: up+stopped description: local image is primary last_update: 2021-09-16 10:57:28", "rbd mirror pool info POOL_NAME", "rbd mirror pool info data Mode: pool Site Name: a Peer Sites: UUID: 950ddadf-f995-47b7-9416-b9bb233f66e3 Name: b Mirror UUID: 4696cd9d-1466-4f98-a97a-3748b6b722b3 Direction: rx-tx Client: client.rbd-mirror-peer", "rbd mirror pool enable POOL_NAME MODE", "rbd mirror pool enable data pool", "rbd mirror pool enable data image", "rbd mirror pool disable POOL_NAME", "rbd mirror pool disable data", "rbd mirror image enable POOL_NAME / IMAGE_NAME", "rbd mirror image enable data/image2", "rbd mirror image disable POOL_NAME / IMAGE_NAME", "rbd mirror image disable data/image2", "rbd mirror image demote POOL_NAME / IMAGE_NAME", "rbd mirror image demote data/image2", "rbd mirror image promote POOL_NAME / IMAGE_NAME", "rbd mirror image promote data/image2", "rbd mirror image promote --force POOL_NAME / IMAGE_NAME", "rbd mirror image promote --force data/image2", "rbd mirror image resync POOL_NAME / IMAGE_NAME", "rbd mirror image resync data/image2", "rbd --cluster CLUSTER_NAME mirror pool peer add POOL_NAME PEER_CLIENT_NAME @ PEER_CLUSTER_NAME -n CLIENT_NAME", "rbd --cluster site-b mirror pool peer add data client.site-a@site-a -n client.site-b", "rbd mirror pool peer remove POOL_NAME PEER_UUID", "rbd mirror pool peer remove data 7e90b4ce-e36d-4f07-8cbc-42050896825d", "rbd mirror pool status POOL_NAME", "rbd mirror pool status data health: OK daemon health: OK image health: OK images: 1 total 1 replaying", "rbd mirror image status POOL_NAME / IMAGE_NAME", "rbd mirror image status data/image2 image2: global_id: 1e3422a2-433e-4316-9e43-1827f8dbe0ef state: up+unknown description: remote image is non-primary service: pluto008.yuoosv on pluto008 last_update: 2021-10-06 09:37:58", "rbd image-meta set POOL_NAME / IMAGE_NAME conf_rbd_mirroring_replay_delay MINIMUM_DELAY_IN_SECONDS", "rbd image-meta set vms/vm-1 conf_rbd_mirroring_replay_delay 600", "cephadm shell", "rbd mirror image disable POOL_NAME / IMAGE_NAME", "rbd mirror image disable mirror_pool/mirror_image Mirroring disabled", "rbd mirror image enable POOL_NAME / IMAGE_NAME snapshot", "rbd mirror image enable mirror_pool/mirror_image snapshot Mirroring enabled", "rbd --cluster CLUSTER_NAME mirror image snapshot POOL_NAME / IMAGE_NAME", "rbd mirror image snapshot data/image1", "rbd --cluster CLUSTER_NAME mirror snapshot schedule add --pool POOL_NAME --image IMAGE_NAME INTERVAL [ START_TIME ]", "rbd mirror snapshot schedule add --pool data --image image1 6h", 
"rbd mirror snapshot schedule add --pool data --image image1 24h 14:00:00-05:00", "rbd --cluster site-a mirror snapshot schedule ls --pool POOL_NAME --recursive", "rbd mirror snapshot schedule ls --pool data --recursive POOL NAMESPACE IMAGE SCHEDULE data - - every 1d starting at 14:00:00-05:00 data - image1 every 6h", "rbd --cluster CLUSTER_NAME mirror snapshot schedule remove --pool POOL_NAME --image IMAGE_NAME INTERVAL START_TIME", "rbd mirror snapshot schedule remove --pool data --image image1 6h", "rbd mirror snapshot schedule remove --pool data --image image1 24h 14:00:00-05:00", "rbd --cluster site-a mirror snapshot schedule status [--pool POOL_NAME ] [--image IMAGE_NAME ]", "rbd mirror snapshot schedule status SCHEDULE TIME IMAGE 2021-09-21 18:00:00 data/image1", "rbd mirror image demote POOL_NAME / IMAGE_NAME", "rbd mirror image demote data/image1 rbd mirror image demote data/image2", "rbd mirror image promote POOL_NAME / IMAGE_NAME", "rbd mirror image promote data/image1 rbd mirror image promote data/image2", "rbd mirror image status data/image1 image1: global_id: 08027096-d267-47f8-b52e-59de1353a034 state: up+stopped description: local image is primary last_update: 2019-04-17 16:04:37 rbd mirror image status data/image2 image2: global_id: 596f41bc-874b-4cd4-aefe-4929578cc834 state: up+stopped description: local image is primary last_update: 2019-04-17 16:04:37", "rbd mirror image promote --force POOL_NAME / IMAGE_NAME", "rbd mirror image promote --force data/image1 rbd mirror image promote --force data/image2", "rbd mirror image status data/image1 image1: global_id: 08027096-d267-47f8-b52e-59de1353a034 state: up+stopping_replay description: force promoted last_update: 2023-04-17 13:25:06 rbd mirror image status data/image1 image1: global_id: 08027096-d267-47f8-b52e-59de1353a034 state: up+stopped description: force promoted last_update: 2023-04-17 13:25:06", "cephadm shell", "ceph orch apply rbd-mirror --placement=host01", "rbd mirror pool info POOL_NAME", "rbd mirror pool info pool_failback", "rbd mirror pool peer remove POOL_NAME PEER_UUID", "rbd mirror pool peer remove pool_failback f055bb88-6253-4041-923d-08c4ecbe799a", "ceph osd pool create POOL_NAME PG_NUM ceph osd pool application enable POOL_NAME rbd rbd pool init -p POOL_NAME", "ceph osd pool create pool1 ceph osd pool application enable pool1 rbd rbd pool init -p pool1", "rbd mirror pool peer bootstrap create --site-name LOCAL_SITE_NAME POOL_NAME > PATH_TO_BOOTSTRAP_TOKEN", "rbd mirror pool peer bootstrap create --site-name site-a data > /root/bootstrap_token_site-a", "rbd mirror pool peer bootstrap import --site-name LOCAL_SITE_NAME --direction rx-only POOL_NAME PATH_TO_BOOTSTRAP_TOKEN", "rbd mirror pool peer bootstrap import --site-name site-b --direction rx-only data /root/bootstrap_token_site-a", "rbd mirror pool info -p data Mode: image Peers: UUID NAME CLIENT d2ae0594-a43b-4c67-a167-a36c646e8643 site-b client.site-b", "rbd mirror image status data/image1 image1: global_id: 08027096-d267-47f8-b52e-59de1353a034 state: up+stopped description: local image is primary last_update: 2019-04-22 17:37:48 rbd mirror image status data/image2 image2: global_id: 08027096-d267-47f8-b52e-59de1353a034 state: up+stopped description: local image is primary last_update: 2019-04-22 17:38:18", "rbd mirror pool info POOL_NAME / IMAGE_NAME", "rbd info data/image1 rbd info data/image2", "rbd mirror image demote POOL_NAME / IMAGE_NAME", "rbd mirror image demote data/image1", "rbd mirror image resync POOL_NAME / IMAGE_NAME", "rbd mirror 
image resync data/image1 Flagged image for resync from primary rbd mirror image resync data/image2 Flagged image for resync from primary", "rbd mirror image status POOL_NAME / IMAGE_NAME", "rbd mirror image status data/image1 rbd mirror image status data/image2", "rbd mirror image demote POOL_NAME / IMAGE_NAME", "rbd mirror image demote data/image1 rbd mirror image demote data/image2", "rbd mirror image promote POOL_NAME / IMAGE_NAME", "rbd mirror image promote data/image1 rbd mirror image promote data/image2", "rbd mirror image status POOL_NAME / IMAGE_NAME", "rbd mirror image status data/image1 image1: global_id: 08027096-d267-47f8-b52e-59de1353a034 state: up+stopped description: local image is primary last_update: 2019-04-22 11:14:51 rbd mirror image status data/image2 image2: global_id: 596f41bc-874b-4cd4-aefe-4929578cc834 state: up+stopped description: local image is primary last_update: 2019-04-22 11:14:51", "rbd mirror pool peer remove data client.remote@remote --cluster local rbd --cluster site-a mirror pool peer remove data client.site-b@site-b -n client.site-a", "systemctl stop ceph-rbd-mirror@ CLIENT_ID systemctl disable ceph-rbd-mirror@ CLIENT_ID systemctl disable ceph-rbd-mirror.target", "systemctl stop ceph-rbd-mirror@site-a systemctl disable ceph-rbd-mirror@site-a systemctl disable ceph-rbd-mirror.target" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/block_device_guide/mirroring-ceph-block-devices
Chapter 3. System requirements for tuning
Chapter 3. System requirements for tuning You can find the hardware and software requirements in Preparing your Environment for Installation in Installing Satellite Server in a connected network environment .
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/tuning_performance_of_red_hat_satellite/system_requirements_for_tuning_performance-tuning
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/patching_and_upgrading_guide/making-open-source-more-inclusive
A.3. The VDSM Hook Environment
A.3. The VDSM Hook Environment Most hook scripts are run as the vdsm user and inherit the environment of the VDSM process. The exceptions are hook scripts triggered by the before_vdsm_start and after_vdsm_stop events. Hook scripts triggered by these events run as the root user and do not inherit the environment of the VDSM process.
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/administration_guide/vdsm_hooks_environment
Chapter 11. Securing builds by strategy
Chapter 11. Securing builds by strategy Builds in OpenShift Container Platform are run in privileged containers. Depending on the build strategy used, users with privileges can run builds to escalate their permissions on the cluster and host nodes. As a security measure, limit who can run builds and which build strategies they can use. Custom builds are inherently less safe than source builds, because they can execute any code within a privileged container, and are disabled by default. Grant docker build permissions with caution, because a vulnerability in the Dockerfile processing logic could result in privileges being granted on the host node. By default, all users that can create builds are granted permission to use the docker and Source-to-image (S2I) build strategies. Users with cluster administrator privileges can enable the custom build strategy, as referenced in the restricting build strategies to a user globally section. You can control who can build and which build strategies they can use by using an authorization policy. Each build strategy has a corresponding build subresource. A user must have permission to create a build and permission to create on the build strategy subresource to create builds using that strategy. Default roles are provided that grant the create permission on the build strategy subresource. Table 11.1. Build Strategy Subresources and Roles Strategy Subresource Role Docker builds/docker system:build-strategy-docker Source-to-Image builds/source system:build-strategy-source Custom builds/custom system:build-strategy-custom JenkinsPipeline builds/jenkinspipeline system:build-strategy-jenkinspipeline 11.1. Disabling access to a build strategy globally To prevent access to a particular build strategy globally, log in as a user with cluster administrator privileges, remove the corresponding role from the system:authenticated group, and apply the annotation rbac.authorization.kubernetes.io/autoupdate: "false" to protect it from changes between API restarts. The following example shows disabling the docker build strategy. Procedure Apply the rbac.authorization.kubernetes.io/autoupdate annotation by entering the following command: $ oc annotate clusterrolebinding.rbac system:build-strategy-docker-binding 'rbac.authorization.kubernetes.io/autoupdate=false' --overwrite Remove the role by entering the following command: $ oc adm policy remove-cluster-role-from-group system:build-strategy-docker system:authenticated Ensure the build strategy subresources are also removed from the admin and edit user roles: $ oc get clusterrole admin -o yaml | grep "builds/docker" $ oc get clusterrole edit -o yaml | grep "builds/docker" 11.2. Restricting build strategies to users globally You can allow a set of specific users to create builds with a particular strategy. Prerequisites Disable global access to the build strategy. Procedure Assign the role that corresponds to the build strategy to a specific user. For example, to add the system:build-strategy-docker cluster role to the user devuser : $ oc adm policy add-cluster-role-to-user system:build-strategy-docker devuser Warning Granting a user access at the cluster level to the builds/docker subresource means that the user can create builds with the docker strategy in any project in which they can create builds. 11.3.
Restricting build strategies to a user within a project Similar to granting the build strategy role to a user globally, you can allow a set of specific users within a project to create builds with a particular strategy. Procedure Assign the role that corresponds to the build strategy to a specific user within a project. For example, to add the system:build-strategy-docker role within the project devproject to the user devuser : $ oc adm policy add-role-to-user system:build-strategy-docker devuser -n devproject
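One way to verify the assignment is to impersonate the user and check access to the strategy subresource; a sketch, assuming the devuser and devproject names from the example above:

oc auth can-i create builds/docker -n devproject --as=devuser

The command prints yes once the role binding is in place, and no otherwise.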
[ "oc annotate clusterrolebinding.rbac system:build-strategy-docker-binding 'rbac.authorization.kubernetes.io/autoupdate=false' --overwrite", "oc adm policy remove-cluster-role-from-group system:build-strategy-docker system:authenticated", "oc get clusterrole admin -o yaml | grep \"builds/docker\"", "oc get clusterrole edit -o yaml | grep \"builds/docker\"", "oc adm policy add-cluster-role-to-user system:build-strategy-docker devuser", "oc adm policy add-role-to-user system:build-strategy-docker devuser -n devproject" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/builds_using_buildconfig/securing-builds-by-strategy
Appendix A. Fuse Integration Perspective
Appendix A. Fuse Integration Perspective Use the Fuse Integration perspective to design, monitor, test, and publish your integration application. You can open the Fuse Integration perspective in the following ways: When you create a new Fuse Integration project (see Chapter 1, Creating a New Fuse Integration Project ), the tooling switches to the Fuse Integration perspective. Click on the tool bar. If the icon is not available on the tool bar, click and then select Fuse Integration from the list of available perspectives. Select Window Perspective Open Perspective Fuse Integration . The Fuse Integration perspective consists of nine main areas: Project Explorer view Displays all projects known to the tooling. You can view all artifacts that make up each project. The Project Explorer view also displays all routing context .xml files for a project under its Camel Contexts node. This enables you to find and open a routing context file included in a project. Under each routing context .xml file, the Project Explorer view displays all routes defined within the context. For multiroute contexts, this lets you focus on a specific route on the canvas. The route editor Provides the main design-time tooling and consists of the following tabs: Design - Displays a large grid area on which routes are constructed and a palette from which Enterprise Integration Patterns (EIPs) and Camel components are selected and then connected on the canvas to form routes. The canvas is the route editor's workbench and where you do most of your work. It displays a graphical representation of one or more routes, which are made up of connected EIPs and Camel components (called nodes once they are placed on the canvas). Selecting a node on the canvas populates the Properties view with the properties that apply to the selected node, so you can edit them. The Palette contains all of the patterns and Camel components needed to construct a route and groups them according to function - Components , Routing , Control Flow , Transformation , and Miscellaneous . Source - Displays the contents of the .xml file for the routes constructed on the route editor's canvas. You can edit the routing context in the Source tab as well as in the Design tab. The Source tab is useful for editing and adding any configuration, comments, or beans to the routing context file. The content assist feature helps you when working with configuration files. In the Source tab, press Ctrl + Space to see a list of possible values that can be inserted into your project. Configurations - Provides an easy way to add shared configuration (global endpoints, data formats, beans) to a multi-route routing context. For details see Section 2.6, "Adding global endpoints, data formats, or beans" . REST - Provides a graphical representation of Rest DSL components. Properties view Displays the properties of the node selected on the canvas. JMX Navigator view Lists the JMX servers and the infrastructure they monitor. It enables you to browse JMX servers and the processes they are monitoring. It also identifies instances of Red Hat processes. The JMX Navigator view drives all monitoring and testing activities in the Fuse Integration perspective. It determines which routes are displayed in the Diagram View , the Properties view, and the Messages View . It also provides menu commands for activating route tracing, adding and deleting JMS destinations, and starting and suspending routes. It is also the target for dragging and dropping messages onto a route.
By default, the JMX Navigator view shows all Java processes that are running on your local machine. You can add JMX servers as needed to view infrastructure on other machines. Diagram View Displays a graphical tree representing the node selected in the JMX Navigator view. When you select a process, server, endpoint, or other node, the Diagram View shows the selected node as the root with branches down to its children and grandchildren. When you select a broker, the Diagram View displays up to three children: connections, topics, and queues. It also shows configured connections and destinations as grandchildren. When you select a route, the Diagram View displays all nodes in the route and shows the different paths that messages can take through the route. It also displays timing metrics for each processing step in the route when route tracing is enabled. Messages View Lists the messages that have passed through the selected JMS destination or through Apache Camel endpoints when route tracing is enabled. When a JMS destination is selected in the JMX Navigator view, the view lists all messages that are at the destination. When route tracing is enabled, the Messages View lists all messages that passed through the nodes in the route since tracing started. You can configure the Messages View to display only the data in which you are interested and in your preferred sequence. When a message trace in the Messages View is selected, its details (message body and all message headers) appear in the Properties view. In the Diagram View , the step in the route associated with the selected message trace is highlighted. Servers view Displays a list of servers managed by the tooling. It displays their runtime status and provides controls for adding, starting and stopping them and for publishing projects to them. Terminal view Displays the command console of the connected container. You can control the container by entering commands in the Terminal view. Console view Displays the console output for recently executed actions.
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/tooling_user_guide/integrationperspective
Chapter 8. Ceph performance benchmark
Chapter 8. Ceph performance benchmark As a storage administrator, you can benchmark performance of the Red Hat Ceph Storage cluster. The purpose of this section is to give Ceph administrators a basic understanding of Ceph's native benchmarking tools. These tools will provide some insight into how the Ceph storage cluster is performing. This is not the definitive guide to Ceph performance benchmarking, nor is it a guide on how to tune Ceph accordingly. 8.1. Performance baseline The OSD, including the journal, disks and the network throughput should each have a performance baseline to compare against. You can identify potential tuning opportunities by comparing the baseline performance data with the data from Ceph's native tools. Red Hat Enterprise Linux has many built-in tools, along with a plethora of open source community tools, available to help accomplish these tasks. Additional Resources For more details about some of the available tools, see this Knowledgebase article . 8.2. Benchmarking Ceph performance Ceph includes the rados bench command to do performance benchmarking on a RADOS storage cluster. The command will execute a write test and two types of read tests. The --no-cleanup option is important to use when testing both read and write performance. By default the rados bench command will delete the objects it has written to the storage pool. Leaving behind these objects allows the two read tests to measure sequential and random read performance. Note Before running these performance tests, drop all the file system caches by running the following: Example Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Create a new storage pool: Example Execute a write test for 10 seconds to the newly created storage pool: Example Execute a sequential read test for 10 seconds to the storage pool: Example Execute a random read test for 10 seconds to the storage pool: Example To increase the number of concurrent reads and writes, use the -t option, which the default is 16 threads. Also, the -b parameter can adjust the size of the object being written. The default object size is 4 MB. A safe maximum object size is 16 MB. Red Hat recommends running multiple copies of these benchmark tests to different pools. Doing this shows the changes in performance from multiple clients. Add the --run-name LABEL option to control the names of the objects that get written during the benchmark test. Multiple rados bench commands might be ran simultaneously by changing the --run-name label for each running command instance. This prevents potential I/O errors that can occur when multiple clients are trying to access the same object and allows for different clients to access different objects. The --run-name option is also useful when trying to simulate a real world workload. Example Remove the data created by the rados bench command: Example 8.3. Benchmarking Ceph block performance Ceph includes the rbd bench-write command to test sequential writes to the block device measuring throughput and latency. The default byte size is 4096, the default number of I/O threads is 16, and the default total number of bytes to write is 1 GB. These defaults can be modified by the --io-size , --io-threads and --io-total options respectively. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. 
Procedure Run the write performance test against the block device Example Additional Resources See the Ceph block devices chapter in the Red Hat Ceph Storage Block Device Guide for more information on the rbd command. 8.4. Benchmarking CephFS performance You can use the FIO tool to benchmark Ceph File System (CephFS) performance. This tool can also be used to benchmark Ceph Block Device. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. FIO tool installed on the nodes. See the KCS How to install the Flexible I/O Tester (fio) performance benchmarking tool for more details. Block Device or the Ceph File System mounted on the node. Procedure Navigate to the node or the application where the Block Device or the CephFS is mounted: Example Run FIO command. Start the bs value from 4k and repeat in power of 2 increments (4k, 8k, 16k, 32k ... 128k... 512k, 1m, 2m, 4m ) and with different iodepth settings. You should also run tests at your expected workload operation size. Example for 4K tests with different iodepth values Example for 8K tests with different iodepth values Note For more information on the usage of fio command, see the fio man page. 8.5. Benchmarking Ceph Object Gateway performance You can use the s3cmd tool to benchmark Ceph Object Gateway performance. Use get and put requests to determine the performance. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. s3cmd installed on the nodes. Procedure Upload a file and measure the speed. The time command measures the duration of upload. Syntax Example Replace /path-to-local-file with the file you want to upload and s3://bucket-name/remote/file with the destination in your S3 bucket. Download a file and measure the speed. The time command measures the duration of download. Syntax Example Replace s3://bucket-name/remote/file with the S3 object you want to download and /path-to-local-destination with the local directory where you want to save the file. List all the objects in the specified bucket and measure response time. Syntax Example Analyze the output to calculate upload/download speed and measure response time based on the duration reported by the time command.
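A condensed sketch of the benchmark commands from this chapter, in the order they are typically run; the pool, image, bucket, and path names follow the examples above and should be replaced with your own:

# Drop file system caches before the RADOS tests
echo 3 | sudo tee /proc/sys/vm/drop_caches && sudo sync

# RADOS bench: write, then sequential and random reads against a test pool
ceph osd pool create testbench 100 100
rados bench -p testbench 10 write --no-cleanup
rados bench -p testbench 10 seq
rados bench -p testbench 10 rand
rados -p testbench cleanup

# RBD block write benchmark
rbd bench --io-type write image01 --pool=testbench

# FIO on a mounted Ceph Block Device or CephFS (repeat with larger bs and iodepth values)
fio --name=randwrite --rw=randwrite --direct=1 --ioengine=libaio --bs=4k --iodepth=32 --size=5G --runtime=60 --group_reporting=1

# Ceph Object Gateway: time S3 put, get, and list operations
time s3cmd put /path-to-local-file s3://bucket-name/remote/file
time s3cmd get s3://bucket-name/remote/file /path-to-local-destination
time s3cmd ls s3://bucket-name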
[ "echo 3 | sudo tee /proc/sys/vm/drop_caches && sudo sync", "ceph osd pool create testbench 100 100", "rados bench -p testbench 10 write --no-cleanup Maintaining 16 concurrent writes of 4194304 bytes for up to 10 seconds or 0 objects Object prefix: benchmark_data_cephn1.home.network_10510 sec Cur ops started finished avg MB/s cur MB/s last lat avg lat 0 0 0 0 0 0 - 0 1 16 16 0 0 0 - 0 2 16 16 0 0 0 - 0 3 16 16 0 0 0 - 0 4 16 17 1 0.998879 1 3.19824 3.19824 5 16 18 2 1.59849 4 4.56163 3.87993 6 16 18 2 1.33222 0 - 3.87993 7 16 19 3 1.71239 2 6.90712 4.889 8 16 25 9 4.49551 24 7.75362 6.71216 9 16 25 9 3.99636 0 - 6.71216 10 16 27 11 4.39632 4 9.65085 7.18999 11 16 27 11 3.99685 0 - 7.18999 12 16 27 11 3.66397 0 - 7.18999 13 16 28 12 3.68975 1.33333 12.8124 7.65853 14 16 28 12 3.42617 0 - 7.65853 15 16 28 12 3.19785 0 - 7.65853 16 11 28 17 4.24726 6.66667 12.5302 9.27548 17 11 28 17 3.99751 0 - 9.27548 18 11 28 17 3.77546 0 - 9.27548 19 11 28 17 3.57683 0 - 9.27548 Total time run: 19.505620 Total writes made: 28 Write size: 4194304 Bandwidth (MB/sec): 5.742 Stddev Bandwidth: 5.4617 Max bandwidth (MB/sec): 24 Min bandwidth (MB/sec): 0 Average Latency: 10.4064 Stddev Latency: 3.80038 Max latency: 19.503 Min latency: 3.19824", "rados bench -p testbench 10 seq sec Cur ops started finished avg MB/s cur MB/s last lat avg lat 0 0 0 0 0 0 - 0 Total time run: 0.804869 Total reads made: 28 Read size: 4194304 Bandwidth (MB/sec): 139.153 Average Latency: 0.420841 Max latency: 0.706133 Min latency: 0.0816332", "rados bench -p testbench 10 rand sec Cur ops started finished avg MB/s cur MB/s last lat avg lat 0 0 0 0 0 0 - 0 1 16 46 30 119.801 120 0.440184 0.388125 2 16 81 65 129.408 140 0.577359 0.417461 3 16 120 104 138.175 156 0.597435 0.409318 4 15 157 142 141.485 152 0.683111 0.419964 5 16 206 190 151.553 192 0.310578 0.408343 6 16 253 237 157.608 188 0.0745175 0.387207 7 16 287 271 154.412 136 0.792774 0.39043 8 16 325 309 154.044 152 0.314254 0.39876 9 16 362 346 153.245 148 0.355576 0.406032 10 16 405 389 155.092 172 0.64734 0.398372 Total time run: 10.302229 Total reads made: 405 Read size: 4194304 Bandwidth (MB/sec): 157.248 Average Latency: 0.405976 Max latency: 1.00869 Min latency: 0.0378431", "rados bench -p testbench 10 write -t 4 --run-name client1 Maintaining 4 concurrent writes of 4194304 bytes for up to 10 seconds or 0 objects Object prefix: benchmark_data_node1_12631 sec Cur ops started finished avg MB/s cur MB/s last lat avg lat 0 0 0 0 0 0 - 0 1 4 4 0 0 0 - 0 2 4 6 2 3.99099 4 1.94755 1.93361 3 4 8 4 5.32498 8 2.978 2.44034 4 4 8 4 3.99504 0 - 2.44034 5 4 10 6 4.79504 4 2.92419 2.4629 6 3 10 7 4.64471 4 3.02498 2.5432 7 4 12 8 4.55287 4 3.12204 2.61555 8 4 14 10 4.9821 8 2.55901 2.68396 9 4 16 12 5.31621 8 2.68769 2.68081 10 4 17 13 5.18488 4 2.11937 2.63763 11 4 17 13 4.71431 0 - 2.63763 12 4 18 14 4.65486 2 2.4836 2.62662 13 4 18 14 4.29757 0 - 2.62662 Total time run: 13.123548 Total writes made: 18 Write size: 4194304 Bandwidth (MB/sec): 5.486 Stddev Bandwidth: 3.0991 Max bandwidth (MB/sec): 8 Min bandwidth (MB/sec): 0 Average Latency: 2.91578 Stddev Latency: 0.956993 Max latency: 5.72685 Min latency: 1.91967", "rados -p testbench cleanup", "rbd bench --io-type write image01 --pool=testbench bench-write io_size 4096 io_threads 16 bytes 1073741824 pattern seq SEC OPS OPS/SEC BYTES/SEC 2 11127 5479.59 22444382.79 3 11692 3901.91 15982220.33 4 12372 2953.34 12096895.42 5 12580 2300.05 9421008.60 6 13141 2101.80 8608975.15 7 13195 356.07 1458459.94 8 13820 390.35 1598876.60 9 14124 
325.46 1333066.62 ..", "cd /mnt/ceph-block-device cd /mnt/ceph-file-system", "fio --name=randwrite --rw=randwrite --direct=1 --ioengine=libaio --bs=4k --iodepth=32 --size=5G --runtime=60 --group_reporting=1", "fio --name=randwrite --rw=randwrite --direct=1 --ioengine=libaio --bs=8k --iodepth=32 --size=5G --runtime=60 --group_reporting=1", "time s3cmd put PATH_OF_SOURCE_FILE PATH_OF_DESTINATION_FILE", "time s3cmd put /path-to-local-file s3://bucket-name/remote/file", "time s3cmd get PATH_OF_DESTINATION_FILE DESTINATION_PATH", "time s3cmd get s3://bucket-name/remote/file /path-to-local-destination", "time s3cmd ls s3:// BUCKET_NAME", "time s3cmd ls s3://bucket-name" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/administration_guide/ceph-performance-benchmark
Chapter 34. Jira Transition Issue Sink
Chapter 34. Jira Transition Issue Sink Sets a new status (transition to) of an existing issue in Jira. The Kamelet expects the following headers to be set: issueKey / ce-issueKey : as the issue unique code. issueTransitionId / ce-issueTransitionId : as the new status (transition) code. You should carefully check the project workflow as each transition may have conditions to check before the transition is made. The comment of the transition is set in the body of the message. 34.1. Configuration Options The following table summarizes the configuration options available for the jira-transition-issue-sink Kamelet: Property Name Description Type Default Example jiraUrl * Jira URL The URL of your instance of Jira string "http://my_jira.com:8081" password * Password The password or the API Token to access Jira string username * Username The username to access Jira string Note Fields marked with an asterisk (*) are mandatory. 34.2. Dependencies At runtime, the jira-transition-issue-sink Kamelet relies upon the presence of the following dependencies: camel:core camel:jackson camel:jira camel:kamelet mvn:com.fasterxml.jackson.datatype:jackson-datatype-joda:2.12.4.redhat-00001 34.3. Usage This section describes how you can use the jira-transition-issue-sink . 34.3.1. Knative Sink You can use the jira-transition-issue-sink Kamelet as a Knative sink by binding it to a Knative object. jira-transition-issue-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jira-transition-issue-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: "issueTransitionId" value: 701 - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: "issueKey" value: "MYP-162" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel properties: jiraUrl: "jira server url" username: "username" password: "password" 34.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 34.3.1.2. Procedure for using the cluster CLI Save the jira-transition-issue-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f jira-transition-issue-sink-binding.yaml 34.3.1.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind --name jira-transition-issue-sink-binding timer-source?message="The new comment 123"\&period=60000 --step insert-header-action -p step-0.name=issueKey -p step-0.value=MYP-170 --step insert-header-action -p step-1.name=issueTransitionId -p step-1.value=5 jira-transition-issue-sink?jiraUrl="jira url"\&username="username"\&password="password" This command creates the KameletBinding in the current namespace on the cluster. 34.3.2. Kafka Sink You can use the jira-transition-issue-sink Kamelet as a Kafka sink by binding it to a Kafka topic. 
jira-transition-issue-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jira-transition-issue-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: "issueTransitionId" value: 701 - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: "issueKey" value: "MYP-162" sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: jira-transition-issue-sink properties: jiraUrl: "jira server url" username: "username" password: "password" 34.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 34.3.2.2. Procedure for using the cluster CLI Save the jira-transition-issue-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f jira-transition-issue-sink-binding.yaml 34.3.2.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind --name jira-transition-issue-sink-binding timer-source?message="The new comment 123"\&period=60000 --step insert-header-action -p step-0.name=issueKey -p step-0.value=MYP-170 --step insert-header-action -p step-1.name=issueTransitionId -p step-1.value=5 jira-transition-issue-sink?jiraUrl="jira url"\&username="username"\&password="password" This command creates the KameletBinding in the current namespace on the cluster. 34.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/jira-transition-issue-sink.kamelet.yaml
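In addition to the KameletBinding examples above, a plain Camel route can call the same Kamelet through the kamelet: endpoint URI. The following Java DSL sketch is illustrative only and is not taken from the product documentation: it assumes a local Kafka broker, reuses the example issue key (MYP-162) and transition ID (701) from the bindings above, and uses placeholder Jira credentials. It shows the two required headers being set and the message body being used as the transition comment.

import org.apache.camel.builder.RouteBuilder;

public class JiraTransitionIssueRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Consume transition requests from a Kafka topic (broker address is a placeholder)
        from("kafka:my-topic?brokers=localhost:9092")
            // Required headers: the issue key and the ID of the transition to apply
            .setHeader("issueKey", constant("MYP-162"))
            .setHeader("issueTransitionId", constant(701))
            // The message body becomes the comment attached to the transition
            .setBody(constant("Transitioned automatically by the integration"))
            // Invoke the Kamelet as an endpoint; jiraUrl, username, and password are placeholders
            .to("kamelet:jira-transition-issue-sink"
                + "?jiraUrl=RAW(http://my_jira.com:8081)"
                + "&username=RAW(username)"
                + "&password=RAW(password)");
    }
}

With this approach, each record consumed from the topic triggers one transition on the referenced issue; the insert-header-action steps used in the bindings are replaced by setHeader calls in the route itself.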
[ "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jira-transition-issue-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: \"issueTransitionId\" value: 701 - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: \"issueKey\" value: \"MYP-162\" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel properties: jiraUrl: \"jira server url\" username: \"username\" password: \"password\"", "apply -f jira-transition-issue-sink-binding.yaml", "kamel bind --name jira-transition-issue-sink-binding timer-source?message=\"The new comment 123\"\\&period=60000 --step insert-header-action -p step-0.name=issueKey -p step-0.value=MYP-170 --step insert-header-action -p step-1.name=issueTransitionId -p step-1.value=5 jira-transition-issue-sink?jiraUrl=\"jira url\"\\&username=\"username\"\\&password=\"password\"", "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jira-transition-issue-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: \"issueTransitionId\" value: 701 - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: \"issueKey\" value: \"MYP-162\" sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: jira-transition-issue-sink properties: jiraUrl: \"jira server url\" username: \"username\" password: \"password\"", "apply -f jira-transition-issue-sink-binding.yaml", "kamel bind --name jira-transition-issue-sink-binding timer-source?message=\"The new comment 123\"\\&period=60000 --step insert-header-action -p step-0.name=issueKey -p step-0.value=MYP-170 --step insert-header-action -p step-1.name=issueTransitionId -p step-1.value=5 jira-transition-issue-sink?jiraUrl=\"jira url\"\\&username=\"username\"\\&password=\"password\"" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.5/html/kamelets_reference/jira-transition-issue-sink
Configuring and managing networking
Configuring and managing networking Red Hat Enterprise Linux 8 Managing network interfaces, firewalls, and advanced networking features Red Hat Customer Content Services
[ "NamePolicy=kernel database onboard slot path", "ip link show 2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000 link/ether 00:00:5e:00:53:1a brd ff:ff:ff:ff:ff:ff", "udevadm info --query=property --property=ID_NET_NAMING_SCHEME /sys/class/net/eno1' ID_NET_NAMING_SCHEME=rhel-8.0", "grubby --update-kernel=ALL --args=net.naming-scheme= rhel-8.4", "reboot", "ip link show 2: eno1np0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000 link/ether 00:00:5e:00:53:1a brd ff:ff:ff:ff:ff:ff", "nmcli -f device,name connection show DEVICE NAME eno1 example_profile", "nmcli connection modify example_profile connection.interface-name \"eno1np0\"", "nmcli connection up example_profile", "udevadm info --query=property --property=ID_NET_NAMING_SCHEME /sys/class/net/eno1np0' ID_NET_NAMING_SCHEME=_rhel-8.4", "ip link show 2: enP5165p0s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000", "systool -c net -p Class = \"net\" Class Device = \"enP5165p0s0\" Class Device path = \"/sys/devices/pci142d:00/142d:00:00.0/net/enP5165p0s0\" Device = \"142d:00:00.0\" Device path = \"/sys/devices/pci142d:00/142d:00:00.0\"", "cat /sys/devices/pci142d:00/142d:00:00.0/uid_id_unique", "cat /sys/devices/pci142d:00/142d:00:00.0/uid", "cat /sys/devices/pci142d:00/142d:00:00.0/function_id", "printf \"%d\\n\" 0x00001402 5122", "ip link show 2: net0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000 link/ether 00:00:5e:00:53:1a brd ff:ff:ff:ff:ff:ff", "ip link show enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000 link/ether 00:00:5e:00:53:1a brd ff:ff:ff:ff:ff:ff", "cat /sys/class/net/enp1s0/type 1", "SUBSYSTEM==\"net\",ACTION==\"add\",ATTR{address}==\" <MAC_address> \",ATTR{type}==\" <device_type_id> \",NAME=\" <new_interface_name> \"", "SUBSYSTEM==\"net\",ACTION==\"add\",ATTR{address}==\" 00:00:5e:00:53:1a \",ATTR{type}==\" 1 \",NAME=\" provider0 \"", "dracut -f", "nmcli -f device,name connection show DEVICE NAME enp1s0 example_profile", "nmcli connection modify example_profile connection.interface-name \"\"", "nmcli connection modify example_profile match.interface-name \"provider0 enp1s0\"", "reboot", "ip link show provider0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000 link/ether 00:00:5e:00:53:1a brd ff:ff:ff:ff:ff:ff", "nmcli connection modify example_profile match.interface-name \"provider0\"", "nmcli connection up example_profile", "ip link show enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000 link/ether 00:00:5e:00:53:1a brd ff:ff:ff:ff:ff:ff", "mkdir -p /etc/systemd/network/", "[Match] MACAddress= <MAC_address> [Link] Name= <new_interface_name>", "[Match] MACAddress=00:00:5e:00:53:1a [Link] Name=provider0", "dracut -f", "nmcli -f device,name connection show DEVICE NAME enp1s0 example_profile", "nmcli connection modify example_profile connection.interface-name \"\"", "nmcli connection modify example_profile match.interface-name \"provider0 enp1s0\"", "reboot", "ip link show provider0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000 link/ether 00:00:5e:00:53:1a brd ff:ff:ff:ff:ff:ff", "nmcli connection modify example_profile match.interface-name \"provider0\"", "nmcli connection up 
example_profile", "ip link show enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000 link/ether 00:00:5e:00:53:1a brd ff:ff:ff:ff:ff:ff", "mkdir -p /etc/systemd/network/", "cp /usr/lib/systemd/network/99-default.link /etc/systemd/network/98-lan.link", "[Match] MACAddress= <MAC_address> [Link] AlternativeName= <alternative_interface_name_1> AlternativeName= <alternative_interface_name_2> AlternativeName= <alternative_interface_name_n>", "[Match] MACAddress=00:00:5e:00:53:1a [Link] NamePolicy=kernel database onboard slot path AlternativeNamesPolicy=database onboard slot path MACAddressPolicy=persistent AlternativeName=provider", "dracut -f", "reboot", "ip address show provider 2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 00:00:5e:00:53:1a brd ff:ff:ff:ff:ff:ff altname provider", "nmcli connection show NAME UUID TYPE DEVICE Wired connection 1 a5eb6490-cc20-3668-81f8-0314a27f3f75 ethernet enp1s0", "nmcli connection add con-name <connection-name> ifname <device-name> type ethernet", "nmcli connection modify \"Wired connection 1\" connection.id \"Internal-LAN\"", "nmcli connection show Internal-LAN connection.interface-name: enp1s0 connection.autoconnect: yes ipv4.method: auto ipv6.method: auto", "nmcli connection modify Internal-LAN ipv4.method auto", "nmcli connection modify Internal-LAN ipv4.method manual ipv4.addresses 192.0.2.1/24 ipv4.gateway 192.0.2.254 ipv4.dns 192.0.2.200 ipv4.dns-search example.com", "nmcli connection modify Internal-LAN ipv6.method auto", "nmcli connection modify Internal-LAN ipv6.method manual ipv6.addresses 2001:db8:1::fffe/64 ipv6.gateway 2001:db8:1::fffe ipv6.dns 2001:db8:1::ffbb ipv6.dns-search example.com", "nmcli connection modify <connection-name> <setting> <value>", "nmcli connection up Internal-LAN", "ip address show enp1s0 2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 52:54:00:17:b8:b6 brd ff:ff:ff:ff:ff:ff inet 192.0.2.1/24 brd 192.0.2.255 scope global noprefixroute enp1s0 valid_lft forever preferred_lft forever inet6 2001:db8:1::fffe/64 scope global noprefixroute valid_lft forever preferred_lft forever", "ip route show default default via 192.0.2.254 dev enp1s0 proto static metric 102", "ip -6 route show default default via 2001:db8:1::ffee dev enp1s0 proto static metric 102 pref medium", "cat /etc/resolv.conf search example.com nameserver 192.0.2.200 nameserver 2001:db8:1::ffbb", "ping <host-name-or-IP-address>", "nmcli connection show NAME UUID TYPE DEVICE Wired connection 1 a5eb6490-cc20-3668-81f8-0314a27f3f75 ethernet enp1s0", "nmcli connection edit type ethernet con-name \" <connection-name> \"", "nmcli connection edit con-name \" <connection-name> \"", "nmcli> set connection.id Internal-LAN", "nmcli> print connection.interface-name: enp1s0 connection.autoconnect: yes ipv4.method: auto ipv6.method: auto", "nmcli> set connection.interface-name enp1s0", "nmcli> set ipv4.method auto", "nmcli> ipv4.addresses 192.0.2.1/24 Do you also want to set 'ipv4.method' to 'manual'? [yes]: yes nmcli> ipv4.gateway 192.0.2.254 nmcli> ipv4.dns 192.0.2.200 nmcli> ipv4.dns-search example.com", "nmcli> set ipv6.method auto", "nmcli> ipv6.addresses 2001:db8:1::fffe/64 Do you also want to set 'ipv6.method' to 'manual'? 
[yes]: yes nmcli> ipv6.gateway 2001:db8:1::fffe nmcli> ipv6.dns 2001:db8:1::ffbb nmcli> ipv6.dns-search example.com", "nmcli> save persistent", "nmcli> quit", "ip address show enp1s0 2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 52:54:00:17:b8:b6 brd ff:ff:ff:ff:ff:ff inet 192.0.2.1/24 brd 192.0.2.255 scope global noprefixroute enp1s0 valid_lft forever preferred_lft forever inet6 2001:db8:1::fffe/64 scope global noprefixroute valid_lft forever preferred_lft forever", "ip route show default default via 192.0.2.254 dev enp1s0 proto static metric 102", "ip -6 route show default default via 2001:db8:1::ffee dev enp1s0 proto static metric 102 pref medium", "cat /etc/resolv.conf search example.com nameserver 192.0.2.200 nameserver 2001:db8:1::ffbb", "ping <host-name-or-IP-address>", "nmcli device status DEVICE TYPE STATE CONNECTION enp1s0 ethernet unavailable --", "nmtui", "ip address show enp1s0 2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 52:54:00:17:b8:b6 brd ff:ff:ff:ff:ff:ff inet 192.0.2.1/24 brd 192.0.2.255 scope global noprefixroute enp1s0 valid_lft forever preferred_lft forever inet6 2001:db8:1::fffe/64 scope global noprefixroute valid_lft forever preferred_lft forever", "ip route show default default via 192.0.2.254 dev enp1s0 proto static metric 102", "ip -6 route show default default via 2001:db8:1::ffee dev enp1s0 proto static metric 102 pref medium", "cat /etc/resolv.conf search example.com nameserver 192.0.2.200 nameserver 2001:db8:1::ffbb", "ping <host-name-or-IP-address>", "ip address show enp1s0 2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 52:54:00:17:b8:b6 brd ff:ff:ff:ff:ff:ff inet 192.0.2.1/24 brd 192.0.2.255 scope global noprefixroute enp1s0 valid_lft forever preferred_lft forever inet6 2001:db8:1::fffe/64 scope global noprefixroute valid_lft forever preferred_lft forever", "ip route show default default via 192.0.2.254 dev enp1s0 proto static metric 102", "ip -6 route show default default via 2001:db8:1::ffee dev enp1s0 proto static metric 102 pref medium", "cat /etc/resolv.conf search example.com nameserver 192.0.2.200 nameserver 2001:db8:1::ffbb", "ping <host-name-or-IP-address>", "nm-connection-editor", "ip address show enp1s0 2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 52:54:00:17:b8:b6 brd ff:ff:ff:ff:ff:ff inet 192.0.2.1/24 brd 192.0.2.255 scope global noprefixroute enp1s0 valid_lft forever preferred_lft forever inet6 2001:db8:1::fffe/64 scope global noprefixroute valid_lft forever preferred_lft forever", "ip route show default default via 192.0.2.254 dev enp1s0 proto static metric 102", "ip -6 route show default default via 2001:db8:1::ffee dev enp1s0 proto static metric 102 pref medium", "cat /etc/resolv.conf search example.com nameserver 192.0.2.200 nameserver 2001:db8:1::ffbb", "ping <host-name-or-IP-address>", "--- interfaces: - name: enp1s0 type: ethernet state: up ipv4: enabled: true address: - ip: 192.0.2.1 prefix-length: 24 dhcp: false ipv6: enabled: true address: - ip: 2001:db8:1::1 prefix-length: 64 autoconf: false dhcp: false routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.0.2.254 next-hop-interface: enp1s0 - destination: ::/0 next-hop-address: 2001:db8:1::fffe next-hop-interface: enp1s0 dns-resolver: config: search: - example.com server: - 192.0.2.200 - 2001:db8:1::ffbb", 
"nmstatectl apply ~/create-ethernet-profile.yml", "nmstatectl show enp1s0", "ip address show enp1s0 2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 52:54:00:17:b8:b6 brd ff:ff:ff:ff:ff:ff inet 192.0.2.1/24 brd 192.0.2.255 scope global noprefixroute enp1s0 valid_lft forever preferred_lft forever inet6 2001:db8:1::fffe/64 scope global noprefixroute valid_lft forever preferred_lft forever", "ip route show default default via 192.0.2.254 dev enp1s0 proto static metric 102", "ip -6 route show default default via 2001:db8:1::ffee dev enp1s0 proto static metric 102 pref medium", "cat /etc/resolv.conf search example.com nameserver 192.0.2.200 nameserver 2001:db8:1::ffbb", "ping <host-name-or-IP-address>", "managed-node-01.example.com interface=enp1s0 ip_v4=192.0.2.1/24 ip_v6=2001:db8:1::1/64 gateway_v4=192.0.2.254 gateway_v6=2001:db8:1::fffe managed-node-02.example.com interface=enp1s0 ip_v4=192.0.2.2/24 ip_v6=2001:db8:1::2/64 gateway_v4=192.0.2.254 gateway_v6=2001:db8:1::fffe", "--- - name: Configure the network hosts: managed-node-01.example.com,managed-node-02.example.com tasks: - name: Ethernet connection profile with static IP address settings ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: \"{{ interface }}\" interface_name: \"{{ interface }}\" type: ethernet autoconnect: yes ip: address: - \"{{ ip_v4 }}\" - \"{{ ip_v6 }}\" gateway4: \"{{ gateway_v4 }}\" gateway6: \"{{ gateway_v6 }}\" dns: - 192.0.2.200 - 2001:db8:1::ffbb dns_search: - example.com state: up", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m ansible.builtin.setup \"ansible_default_ipv4\": { \"address\": \"192.0.2.1\", \"alias\": \"enp1s0\", \"broadcast\": \"192.0.2.255\", \"gateway\": \"192.0.2.254\", \"interface\": \"enp1s0\", \"macaddress\": \"52:54:00:17:b8:b6\", \"mtu\": 1500, \"netmask\": \"255.255.255.0\", \"network\": \"192.0.2.0\", \"prefix\": \"24\", \"type\": \"ether\" }, \"ansible_default_ipv6\": { \"address\": \"2001:db8:1::1\", \"gateway\": \"2001:db8:1::fffe\", \"interface\": \"enp1s0\", \"macaddress\": \"52:54:00:17:b8:b6\", \"mtu\": 1500, \"prefix\": \"64\", \"scope\": \"global\", \"type\": \"ether\" }, \"ansible_dns\": { \"nameservers\": [ \"192.0.2.1\", \"2001:db8:1::ffbb\" ], \"search\": [ \"example.com\" ] },", "--- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with static IP address settings ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: example match: path: - pci-0000:00:0[1-3].0 - &!pci-0000:00:02.0 type: ethernet autoconnect: yes ip: address: - 192.0.2.1/24 - 2001:db8:1::1/64 gateway4: 192.0.2.254 gateway6: 2001:db8:1::fffe dns: - 192.0.2.200 - 2001:db8:1::ffbb dns_search: - example.com state: up", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m ansible.builtin.setup \"ansible_default_ipv4\": { \"address\": \"192.0.2.1\", \"alias\": \"enp1s0\", \"broadcast\": \"192.0.2.255\", \"gateway\": \"192.0.2.254\", \"interface\": \"enp1s0\", \"macaddress\": \"52:54:00:17:b8:b6\", \"mtu\": 1500, \"netmask\": \"255.255.255.0\", \"network\": \"192.0.2.0\", \"prefix\": \"24\", \"type\": \"ether\" }, \"ansible_default_ipv6\": { \"address\": \"2001:db8:1::1\", \"gateway\": \"2001:db8:1::fffe\", \"interface\": \"enp1s0\", \"macaddress\": 
\"52:54:00:17:b8:b6\", \"mtu\": 1500, \"prefix\": \"64\", \"scope\": \"global\", \"type\": \"ether\" }, \"ansible_dns\": { \"nameservers\": [ \"192.0.2.1\", \"2001:db8:1::ffbb\" ], \"search\": [ \"example.com\" ] },", "--- interfaces: - name: enp1s0 type: ethernet state: up ipv4: enabled: true auto-dns: true auto-gateway: true auto-routes: true dhcp: true ipv6: enabled: true auto-dns: true auto-gateway: true auto-routes: true autoconf: true dhcp: true", "nmstatectl apply ~/create-ethernet-profile.yml", "nmstatectl show enp1s0", "ip address show enp1s0 2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 52:54:00:17:b8:b6 brd ff:ff:ff:ff:ff:ff inet 192.0.2.1/24 brd 192.0.2.255 scope global noprefixroute enp1s0 valid_lft forever preferred_lft forever inet6 2001:db8:1::fffe/64 scope global noprefixroute valid_lft forever preferred_lft forever", "ip route show default default via 192.0.2.254 dev enp1s0 proto static metric 102", "ip -6 route show default default via 2001:db8:1::ffee dev enp1s0 proto static metric 102 pref medium", "cat /etc/resolv.conf search example.com nameserver 192.0.2.200 nameserver 2001:db8:1::ffbb", "ping <host-name-or-IP-address>", "--- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with dynamic IP address settings ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: enp1s0 interface_name: enp1s0 type: ethernet autoconnect: yes ip: dhcp4: yes auto6: yes state: up", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m ansible.builtin.setup \"ansible_default_ipv4\": { \"address\": \"192.0.2.1\", \"alias\": \"enp1s0\", \"broadcast\": \"192.0.2.255\", \"gateway\": \"192.0.2.254\", \"interface\": \"enp1s0\", \"macaddress\": \"52:54:00:17:b8:b6\", \"mtu\": 1500, \"netmask\": \"255.255.255.0\", \"network\": \"192.0.2.0\", \"prefix\": \"24\", \"type\": \"ether\" }, \"ansible_default_ipv6\": { \"address\": \"2001:db8:1::1\", \"gateway\": \"2001:db8:1::fffe\", \"interface\": \"enp1s0\", \"macaddress\": \"52:54:00:17:b8:b6\", \"mtu\": 1500, \"prefix\": \"64\", \"scope\": \"global\", \"type\": \"ether\" }, \"ansible_dns\": { \"nameservers\": [ \"192.0.2.1\", \"2001:db8:1::ffbb\" ], \"search\": [ \"example.com\" ] },", "--- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with dynamic IP address settings ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: example match: path: - pci-0000:00:0[1-3].0 - &!pci-0000:00:02.0 type: ethernet autoconnect: yes ip: dhcp4: yes auto6: yes state: up", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m ansible.builtin.setup \"ansible_default_ipv4\": { \"address\": \"192.0.2.1\", \"alias\": \"enp1s0\", \"broadcast\": \"192.0.2.255\", \"gateway\": \"192.0.2.254\", \"interface\": \"enp1s0\", \"macaddress\": \"52:54:00:17:b8:b6\", \"mtu\": 1500, \"netmask\": \"255.255.255.0\", \"network\": \"192.0.2.0\", \"prefix\": \"24\", \"type\": \"ether\" }, \"ansible_default_ipv6\": { \"address\": \"2001:db8:1::1\", \"gateway\": \"2001:db8:1::fffe\", \"interface\": \"enp1s0\", \"macaddress\": \"52:54:00:17:b8:b6\", \"mtu\": 1500, \"prefix\": \"64\", \"scope\": \"global\", \"type\": \"ether\" }, \"ansible_dns\": { \"nameservers\": [ \"192.0.2.1\", 
\"2001:db8:1::ffbb\" ], \"search\": [ \"example.com\" ] },", "nmcli connection add con-name \"Wired connection 1\" connection.multi-connect multiple match.interface-name enp* type ethernet", "nmcli connection show \"Wired connection 1\" connection.id: Wired connection 1 connection.multi-connect: 3 (multiple) match.interface-name: enp*", "nmcli connection show NAME UUID TYPE DEVICE Wired connection 1 6f22402e-c0cc-49cf-b702-eaf0cd5ea7d1 ethernet enp7s0 Wired connection 1 6f22402e-c0cc-49cf-b702-eaf0cd5ea7d1 ethernet enp8s0 Wired connection 1 6f22402e-c0cc-49cf-b702-eaf0cd5ea7d1 ethernet enp9s0", "udevadm info /sys/class/net/enp* | grep ID_PATH= E: ID_PATH=pci-0000:07:00.0 E: ID_PATH=pci-0000:08:00.0", "nmcli connection add type ethernet connection.multi-connect multiple match.path \"pci-0000:07:00.0 pci-0000:08:00.0\" con-name \"Wired connection 1\"", "nmcli connection show NAME UUID TYPE DEVICE Wired connection 1 9cee0958-512f-4203-9d3d-b57af1d88466 ethernet enp7s0 Wired connection 1 9cee0958-512f-4203-9d3d-b57af1d88466 ethernet enp8s0", "nmcli connection show \"Wired connection 1\" connection.id: Wired connection 1 connection.multi-connect: 3 (multiple) match.path: pci-0000:07:00.0,pci-0000:08:00.0", "nmcli connection add type bond con-name bond0 ifname bond0 bond.options \"mode=active-backup\"", "nmcli connection add type bond con-name bond0 ifname bond0 bond.options \"mode=active-backup,miimon=1000\"", "nmcli device status DEVICE TYPE STATE CONNECTION enp7s0 ethernet disconnected -- enp8s0 ethernet disconnected -- bridge0 bridge connected bridge0 bridge1 bridge connected bridge1", "nmcli connection add type ethernet slave-type bond con-name bond0-port1 ifname enp7s0 master bond0 nmcli connection add type ethernet slave-type bond con-name bond0-port2 ifname enp8s0 master bond0", "nmcli connection modify bridge0 master bond0 nmcli connection modify bridge1 master bond0", "nmcli connection up bridge0 nmcli connection up bridge1", "nmcli connection modify bond0 ipv4.method disabled", "nmcli connection modify bond0 ipv4.addresses '192.0.2.1/24' ipv4.gateway '192.0.2.254' ipv4.dns '192.0.2.253' ipv4.dns-search 'example.com' ipv4.method manual", "nmcli connection modify bond0 ipv6.method disabled", "nmcli connection modify bond0 ipv6.addresses '2001:db8:1::1/64' ipv6.gateway '2001:db8:1::fffe' ipv6.dns '2001:db8:1::fffd' ipv6.dns-search 'example.com' ipv6.method manual", "nmcli connection modify bond0-port1 bond-port. 
<parameter> <value>", "nmcli connection up bond0", "nmcli device DEVICE TYPE STATE CONNECTION enp7s0 ethernet connected bond0-port1 enp8s0 ethernet connected bond0-port2", "nmcli connection modify bond0 connection.autoconnect-slaves 1", "nmcli connection up bond0", "cat /proc/net/bonding/bond0", "cat /proc/net/bonding/bond0", "nmcli device status DEVICE TYPE STATE CONNECTION enp7s0 ethernet unavailable -- enp8s0 ethernet unavailable --", "nmtui", "cat /proc/net/bonding/bond0", "nm-connection-editor", "cat /proc/net/bonding/ bond0", "--- interfaces: - name: bond0 type: bond state: up ipv4: enabled: true address: - ip: 192.0.2.1 prefix-length: 24 dhcp: false ipv6: enabled: true address: - ip: 2001:db8:1::1 prefix-length: 64 autoconf: false dhcp: false link-aggregation: mode: active-backup port: - enp1s0 - enp7s0 - name: enp1s0 type: ethernet state: up - name: enp7s0 type: ethernet state: up routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.0.2.254 next-hop-interface: bond0 - destination: ::/0 next-hop-address: 2001:db8:1::fffe next-hop-interface: bond0 dns-resolver: config: search: - example.com server: - 192.0.2.200 - 2001:db8:1::ffbb", "nmstatectl apply ~/create-bond.yml", "nmcli device status DEVICE TYPE STATE CONNECTION bond0 bond connected bond0", "nmcli connection show bond0 connection.id: bond0 connection.uuid: 79cbc3bd-302e-4b1f-ad89-f12533b818ee connection.stable-id: -- connection.type: bond connection.interface-name: bond0", "nmstatectl show bond0", "--- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Bond connection profile with two Ethernet ports ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: # Bond profile - name: bond0 type: bond interface_name: bond0 ip: dhcp4: yes auto6: yes bond: mode: active-backup state: up # Port profile for the 1st Ethernet device - name: bond0-port1 interface_name: enp7s0 type: ethernet controller: bond0 state: up # Port profile for the 2nd Ethernet device - name: bond0-port2 interface_name: enp8s0 type: ethernet controller: bond0 state: up", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "nmcli connection add type bond con-name bond0 ifname bond0 bond.options \"mode=active-backup\"", "nmcli connection modify bond0 ipv4.addresses ' 192.0.2.1/24 ' nmcli connection modify bond0 ipv4.gateway ' 192.0.2.254 ' nmcli connection modify bond0 ipv4.dns ' 192.0.2.253 ' nmcli connection modify bond0 ipv4.dns-search ' example.com ' nmcli connection modify bond0 ipv4.method manual", "nmcli connection modify bond0 ipv6.addresses ' 2001:db8:1::1/64 ' nmcli connection modify bond0 ipv6.gateway ' 2001:db8:1::fffe ' nmcli connection modify bond0 ipv6.dns ' 2001:db8:1::fffd ' nmcli connection modify bond0 ipv6.dns-search ' example.com ' nmcli connection modify bond0 ipv6.method manual", "nmcli connection show NAME UUID TYPE DEVICE Docking_station 256dd073-fecc-339d-91ae-9834a00407f9 ethernet enp11s0u1 Wi-Fi 1f1531c7-8737-4c60-91af-2d21164417e8 wifi wlp1s0", "nmcli connection modify Docking_station master bond0", "nmcli connection modify Wi-Fi master bond0", "nmcli connection modify bond0 +bond.options fail_over_mac=1", "nmcli con modify bond0 +bond.options \"primary=enp11s0u1\"", "nmcli connection modify bond0 connection.autoconnect-slaves 1", "nmcli connection up bond0", "cat /proc/net/bonding/bond0 Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011) Bonding Mode: fault-tolerance (active-backup) (fail_over_mac active) Primary Slave: enp11s0u1 
(primary_reselect always) Currently Active Slave: enp11s0u1 MII Status: up MII Polling Interval (ms): 1 Up Delay (ms): 0 Down Delay (ms): 0 Peer Notification Delay (ms): 0 Slave Interface: enp11s0u1 MII Status: up Speed: 1000 Mbps Duplex: full Link Failure Count: 0 Permanent HW addr: 00:53:00:59:da:b7 Slave queue ID: 0 Slave Interface: wlp1s0 MII Status: up Speed: Unknown Duplex: Unknown Link Failure Count: 2 Permanent HW addr: 00:53:00:b3:22:ba Slave queue ID: 0", "nmcli connection add type team con-name team0 ifname team0 team.runner activebackup", "nmcli connection modify team0 team.link-watchers \"name= ethtool \"", "nmcli connection modify team0 team.link-watchers \"name= ethtool delay-up= 2500 \"", "nmcli connection modify team0 team.link-watchers \"name= ethtool delay-up= 2 , name= arp_ping source-host= 192.0.2.1 target-host= 192.0.2.2 \"", "nmcli device status DEVICE TYPE STATE CONNECTION enp7s0 ethernet disconnected -- enp8s0 ethernet disconnected -- bond0 bond connected bond0 bond1 bond connected bond1", "nmcli connection add type ethernet slave-type team con-name team0-port1 ifname enp7s0 master team0 nmcli connection add type ethernet slave--type team con-name team0-port2 ifname enp8s0 master team0", "nmcli connection modify bond0 master team0 nmcli connection modify bond1 master team0", "nmcli connection up bond0 nmcli connection up bond1", "nmcli connection modify team0 ipv4.method disabled", "nmcli connection modify team0 ipv4.addresses ' 192.0.2.1/24 ' ipv4.gateway ' 192.0.2.254 ' ipv4.dns ' 192.0.2.253 ' ipv4.dns-search ' example.com ' ipv4.method manual", "nmcli connection modify team0 ipv6.method disabled", "nmcli connection modify team0 ipv6.addresses ' 2001:db8:1::1/64 ' ipv6.gateway ' 2001:db8:1::fffe ' ipv6.dns ' 2001:db8:1::fffd ' ipv6.dns-search ' example.com ' ipv6.method manual", "nmcli connection up team0", "teamdctl team0 state setup: runner: activebackup ports: enp7s0 link watches: link summary: up instance[link_watch_0]: name: ethtool link: up down count: 0 enp8s0 link watches: link summary: up instance[link_watch_0]: name: ethtool link: up down count: 0 runner: active port: enp7s0", "teamdctl team0 state setup: runner: activebackup ports: enp7s0 link watches: link summary: up instance[link_watch_0]: name: ethtool link: up down count: 0 enp8s0 link watches: link summary: up instance[link_watch_0]: name: ethtool link: up down count: 0 runner: active port: enp7s0", "nm-connection-editor", "teamdctl team0 state setup: runner: activebackup ports: enp7s0 link watches: link summary: up instance[link_watch_0]: name: ethtool link: up down count: 0 enp8s0 link watches: link summary: up instance[link_watch_0]: name: ethtool link: up down count: 0 runner: active port: enp7s0", "nmcli device status DEVICE TYPE STATE CONNECTION enp1s0 ethernet disconnected enp1s0 bridge0 bridge connected bridge0 bond0 bond connected bond0", "nmcli connection add type vlan con-name vlan10 ifname vlan10 vlan.parent enp1s0 vlan.id 10", "nmcli connection modify vlan10 ethernet.mtu 2000", "nmcli connection modify vlan10 ipv4.method disabled", "nmcli connection modify vlan10 ipv4.addresses '192.0.2.1/24' ipv4.gateway '192.0.2.254' ipv4.dns '192.0.2.253' ipv4.method manual", "nmcli connection modify vlan10 ipv6.method disabled", "nmcli connection modify vlan10 ipv6.addresses '2001:db8:1::1/32' ipv6.gateway '2001:db8:1::fffe' ipv6.dns '2001:db8:1::fffd' ipv6.method manual", "nmcli connection up vlan10", "ip -d addr show vlan10 4: vlan10@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc 
noqueue state UP group default qlen 1000 link/ether 52:54:00:72:2f:6e brd ff:ff:ff:ff:ff:ff promiscuity 0 vlan protocol 802.1Q id 10 <REORDER_HDR> numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 inet 192.0.2.1/24 brd 192.0.2.255 scope global noprefixroute vlan10 valid_lft forever preferred_lft forever inet6 2001:db8:1::1/32 scope global noprefixroute valid_lft forever preferred_lft forever inet6 fe80::8dd7:9030:6f8e:89e6/64 scope link noprefixroute valid_lft forever preferred_lft forever", "nmcli device status DEVICE TYPE STATE CONNECTION enp1s0 ethernet unavailable --", "nmtui", "ip -d addr show vlan10 4: vlan10@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000 link/ether 52:54:00:72:2f:6e brd ff:ff:ff:ff:ff:ff promiscuity 0 vlan protocol 802.1Q id 10 <REORDER_HDR> numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 inet 192.0.2.1/24 brd 192.0.2.255 scope global noprefixroute vlan10 valid_lft forever preferred_lft forever inet6 2001:db8:1::1/32 scope global noprefixroute valid_lft forever preferred_lft forever inet6 fe80::8dd7:9030:6f8e:89e6/64 scope link noprefixroute valid_lft forever preferred_lft forever", "nm-connection-editor", "ip -d addr show vlan10 4: vlan10@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000 link/ether 52:54:00:d5:e0:fb brd ff:ff:ff:ff:ff:ff promiscuity 0 vlan protocol 802.1Q id 10 <REORDER_HDR> numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 inet 192.0.2.1/24 brd 192.0.2.255 scope global noprefixroute vlan10 valid_lft forever preferred_lft forever inet6 2001:db8:1::1/32 scope global noprefixroute valid_lft forever preferred_lft forever inet6 fe80::8dd7:9030:6f8e:89e6/64 scope link noprefixroute valid_lft forever preferred_lft forever", "--- interfaces: - name: vlan10 type: vlan state: up ipv4: enabled: true address: - ip: 192.0.2.1 prefix-length: 24 dhcp: false ipv6: enabled: true address: - ip: 2001:db8:1::1 prefix-length: 64 autoconf: false dhcp: false vlan: base-iface: enp1s0 id: 10 - name: enp1s0 type: ethernet state: up routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.0.2.254 next-hop-interface: vlan10 - destination: ::/0 next-hop-address: 2001:db8:1::fffe next-hop-interface: vlan10 dns-resolver: config: search: - example.com server: - 192.0.2.200 - 2001:db8:1::ffbb", "nmstatectl apply ~/create-vlan.yml", "nmcli device status DEVICE TYPE STATE CONNECTION vlan10 vlan connected vlan10", "nmcli connection show vlan10 connection.id: vlan10 connection.uuid: 1722970f-788e-4f81-bd7d-a86bf21c9df5 connection.stable-id: -- connection.type: vlan connection.interface-name: vlan10", "nmstatectl show vlan0", "--- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: VLAN connection profile with Ethernet port ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: # Ethernet profile - name: enp1s0 type: ethernet interface_name: enp1s0 autoconnect: yes state: up ip: dhcp4: no auto6: no # VLAN profile - name: enp1s0.10 type: vlan vlan: id: 10 ip: dhcp4: yes auto6: yes parent: enp1s0 state: up", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m command -a 'ip -d addr show enp1s0.10' managed-node-01.example.com | CHANGED | rc=0 >> 4: vlan10@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000 link/ether 52:54:00:72:2f:6e brd ff:ff:ff:ff:ff:ff 
promiscuity 0 vlan protocol 802.1Q id 10 <REORDER_HDR> numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535", "nmcli connection add type bridge con-name bridge0 ifname bridge0", "nmcli device status DEVICE TYPE STATE CONNECTION enp7s0 ethernet disconnected -- enp8s0 ethernet disconnected -- bond0 bond connected bond0 bond1 bond connected bond1", "nmcli connection add type ethernet slave-type bridge con-name bridge0-port1 ifname enp7s0 master bridge0 nmcli connection add type ethernet slave-type bridge con-name bridge0-port2 ifname enp8s0 master bridge0", "nmcli connection modify bond0 master bridge0 nmcli connection modify bond1 master bridge0", "nmcli connection up bond0 nmcli connection up bond1", "nmcli connection modify bridge0 ipv4.method disabled", "nmcli connection modify bridge0 ipv4.addresses '192.0.2.1/24' ipv4.gateway '192.0.2.254' ipv4.dns '192.0.2.253' ipv4.dns-search 'example.com' ipv4.method manual", "nmcli connection modify bridge0 ipv6.method disabled", "nmcli connection modify bridge0 ipv6.addresses '2001:db8:1::1/64' ipv6.gateway '2001:db8:1::fffe' ipv6.dns '2001:db8:1::fffd' ipv6.dns-search 'example.com' ipv6.method manual", "nmcli connection modify bridge0 bridge.priority '16384'", "nmcli connection up bridge0", "nmcli device DEVICE TYPE STATE CONNECTION enp7s0 ethernet connected bridge0-port1 enp8s0 ethernet connected bridge0-port2", "nmcli connection modify bridge0 connection.autoconnect-slaves 1", "nmcli connection up bridge0", "ip link show master bridge0 3: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bridge0 state UP mode DEFAULT group default qlen 1000 link/ether 52:54:00:62:61:0e brd ff:ff:ff:ff:ff:ff 4: enp8s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bridge0 state UP mode DEFAULT group default qlen 1000 link/ether 52:54:00:9e:f1:ce brd ff:ff:ff:ff:ff:ff", "bridge link show 3: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master bridge0 state forwarding priority 32 cost 100 4: enp8s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master bridge0 state listening priority 32 cost 100 5: enp9s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master bridge1 state forwarding priority 32 cost 100 6: enp11s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master bridge1 state blocking priority 32 cost 100", "nmcli device status DEVICE TYPE STATE CONNECTION enp7s0 ethernet unavailable -- enp8s0 ethernet unavailable --", "nmtui", "ip link show master bridge0 3: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bridge0 state UP mode DEFAULT group default qlen 1000 link/ether 52:54:00:62:61:0e brd ff:ff:ff:ff:ff:ff 4: enp8s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bridge0 state UP mode DEFAULT group default qlen 1000 link/ether 52:54:00:9e:f1:ce brd ff:ff:ff:ff:ff:ff", "bridge link show 3: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master bridge0 state forwarding priority 32 cost 100 4: enp8s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master bridge0 state listening priority 32 cost 100", "nm-connection-editor", "ip link show master bridge0 3: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bridge0 state UP mode DEFAULT group default qlen 1000 link/ether 52:54:00:62:61:0e brd ff:ff:ff:ff:ff:ff 4: enp8s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bridge0 state UP mode DEFAULT group default qlen 1000 link/ether 52:54:00:9e:f1:ce brd ff:ff:ff:ff:ff:ff", "bridge link show 3: enp7s0: 
<BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master bridge0 state forwarding priority 32 cost 100 4: enp8s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master bridge0 state listening priority 32 cost 100 5: enp9s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master bridge1 state forwarding priority 32 cost 100 6: enp11s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master bridge1 state blocking priority 32 cost 100", "--- interfaces: - name: bridge0 type: linux-bridge state: up ipv4: enabled: true address: - ip: 192.0.2.1 prefix-length: 24 dhcp: false ipv6: enabled: true address: - ip: 2001:db8:1::1 prefix-length: 64 autoconf: false dhcp: false bridge: options: stp: enabled: true port: - name: enp1s0 - name: enp7s0 - name: enp1s0 type: ethernet state: up - name: enp7s0 type: ethernet state: up routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.0.2.254 next-hop-interface: bridge0 - destination: ::/0 next-hop-address: 2001:db8:1::fffe next-hop-interface: bridge0 dns-resolver: config: search: - example.com server: - 192.0.2.200 - 2001:db8:1::ffbb", "nmstatectl apply ~/create-bridge.yml", "nmcli device status DEVICE TYPE STATE CONNECTION bridge0 bridge connected bridge0", "nmcli connection show bridge0 connection.id: bridge0_ connection.uuid: e2cc9206-75a2-4622-89cf-1252926060a9 connection.stable-id: -- connection.type: bridge connection.interface-name: bridge0", "nmstatectl show bridge0", "--- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Bridge connection profile with two Ethernet ports ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: # Bridge profile - name: bridge0 type: bridge interface_name: bridge0 ip: dhcp4: yes auto6: yes state: up # Port profile for the 1st Ethernet device - name: bridge0-port1 interface_name: enp7s0 type: ethernet controller: bridge0 port_type: bridge state: up # Port profile for the 2nd Ethernet device - name: bridge0-port2 interface_name: enp8s0 type: ethernet controller: bridge0 port_type: bridge state: up", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m command -a 'ip link show master bridge0' managed-node-01.example.com | CHANGED | rc=0 >> 3: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bridge0 state UP mode DEFAULT group default qlen 1000 link/ether 52:54:00:62:61:0e brd ff:ff:ff:ff:ff:ff 4: enp8s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bridge0 state UP mode DEFAULT group default qlen 1000 link/ether 52:54:00:9e:f1:ce brd ff:ff:ff:ff:ff:ff", "ansible managed-node-01.example.com -m command -a 'bridge link show' managed-node-01.example.com | CHANGED | rc=0 >> 3: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master bridge0 state forwarding priority 32 cost 100 4: enp8s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master bridge0 state listening priority 32 cost 100", "@west @east : PPKS \"user1\" \"thestringismeanttobearandomstr\"", "yum install libreswan", "systemctl stop ipsec rm /etc/ipsec.d/*db ipsec initnss", "systemctl enable ipsec --now", "firewall-cmd --add-service=\"ipsec\" firewall-cmd --runtime-to-permanent", "ipsec newhostkey", "ipsec showhostkey --left --ckaid 2d3ea57b61c9419dfd6cf43a1eb6cb306c0e857d", "ipsec showhostkey --right --ckaid a9e1f6ce9ecd3608c24e8f701318383f41798f03", "conn mytunnel leftid=@west left=192.1.2.23 leftrsasigkey=0sAQOrlo+hOafUZDlCQmXFrje/oZm [...] 
W2n417C/4urYHQkCvuIQ== rightid=@east right=192.1.2.45 rightrsasigkey=0sAQO3fwC6nSSGgt64DWiYZzuHbc4 [...] D/v8t5YTQ== authby=rsasig", "systemctl restart ipsec", "ipsec auto --add mytunnel", "ipsec auto --up mytunnel", "auto=start", "cp /etc/ipsec.d/ my_host-to-host.conf /etc/ipsec.d/ my_site-to-site .conf", "conn mysubnet also=mytunnel leftsubnet=192.0.1.0/24 rightsubnet=192.0.2.0/24 auto=start conn mysubnet6 also=mytunnel leftsubnet=2001:db8:0:1::/64 rightsubnet=2001:db8:0:2::/64 auto=start the following part of the configuration file is the same for both host-to-host and site-to-site connections: conn mytunnel leftid=@west left=192.1.2.23 leftrsasigkey=0sAQOrlo+hOafUZDlCQmXFrje/oZm [...] W2n417C/4urYHQkCvuIQ== rightid=@east right=192.1.2.45 rightrsasigkey=0sAQO3fwC6nSSGgt64DWiYZzuHbc4 [...] D/v8t5YTQ== authby=rsasig", "conn roadwarriors ikev2=insist # support (roaming) MOBIKE clients (RFC 4555) mobike=yes fragmentation=yes left=1.2.3.4 # if access to the LAN is given, enable this, otherwise use 0.0.0.0/0 # leftsubnet=10.10.0.0/16 leftsubnet=0.0.0.0/0 leftcert=gw.example.com leftid=%fromcert leftxauthserver=yes leftmodecfgserver=yes right=%any # trust our own Certificate Agency rightca=%same # pick an IP address pool to assign to remote users # 100.64.0.0/16 prevents RFC1918 clashes when remote users are behind NAT rightaddresspool=100.64.13.100-100.64.13.254 # if you want remote clients to use some local DNS zones and servers modecfgdns=\"1.2.3.4, 5.6.7.8\" modecfgdomains=\"internal.company.com, corp\" rightxauthclient=yes rightmodecfgclient=yes authby=rsasig # optionally, run the client X.509 ID through pam to allow or deny client # pam-authorize=yes # load connection, do not initiate auto=add # kill vanished roadwarriors dpddelay=1m dpdtimeout=5m dpdaction=clear", "conn to-vpn-server ikev2=insist # pick up our dynamic IP left=%defaultroute leftsubnet=0.0.0.0/0 leftcert=myname.example.com leftid=%fromcert leftmodecfgclient=yes # right can also be a DNS hostname right=1.2.3.4 # if access to the remote LAN is required, enable this, otherwise use 0.0.0.0/0 # rightsubnet=10.10.0.0/16 rightsubnet=0.0.0.0/0 fragmentation=yes # trust our own Certificate Agency rightca=%same authby=rsasig # allow narrowing to the server's suggested assigned IP and remote subnet narrowing=yes # support (roaming) MOBIKE clients (RFC 4555) mobike=yes # initiate connection auto=start", "systemctl stop ipsec rm /etc/ipsec.d/*db", "ipsec initnss", "ipsec import nodeXXX.p12", "cat /etc/ipsec.d/mesh.conf conn clear auto=ondemand 1 type=passthrough authby=never left=%defaultroute right=%group conn private auto=ondemand type=transport authby=rsasig failureshunt=drop negotiationshunt=drop ikev2=insist left=%defaultroute leftcert= nodeXXXX leftid=%fromcert 2 rightid=%fromcert right=%opportunisticgroup conn private-or-clear auto=ondemand type=transport authby=rsasig failureshunt=passthrough negotiationshunt=passthrough # left left=%defaultroute leftcert= nodeXXXX 3 leftid=%fromcert leftrsasigkey=%cert # right rightrsasigkey=%cert rightid=%fromcert right=%opportunisticgroup", "echo \"10.15.0.0/16\" >> /etc/ipsec.d/policies/private", "echo \"10.15.34.0/24\" >> /etc/ipsec.d/policies/private-or-clear", "echo \"10.15.1.2/32\" >> /etc/ipsec.d/policies/clear", "systemctl restart ipsec", "ping <nodeYYY>", "certutil -L -d sql:/etc/ipsec.d Certificate Nickname Trust Attributes SSL,S/MIME,JAR/XPI west u,u,u ca CT,,", "ipsec trafficstatus 006 #2: \"private#10.15.0.0/16\"[1] ... 
<nodeYYY> , type=ESP, add_time=1691399301, inBytes=512, outBytes=512, maxBytes=2^63B, id='C=US, ST=NC, O=Example Organization, CN=east'", "yum install libreswan", "systemctl stop ipsec rm /etc/ipsec.d/*db", "systemctl enable ipsec --now", "firewall-cmd --add-service=\"ipsec\" firewall-cmd --runtime-to-permanent", "fips-mode-setup --enable", "reboot", "ipsec whack --fipsstatus 000 FIPS mode enabled", "journalctl -u ipsec Jan 22 11:26:50 localhost.localdomain pluto[3076]: FIPS Product: YES Jan 22 11:26:50 localhost.localdomain pluto[3076]: FIPS Kernel: YES Jan 22 11:26:50 localhost.localdomain pluto[3076]: FIPS Mode: YES", "ipsec pluto --selftest 2>&1 | head -11 FIPS Product: YES FIPS Kernel: YES FIPS Mode: YES NSS DB directory: sql:/etc/ipsec.d Initializing NSS Opening NSS database \"sql:/etc/ipsec.d\" read-only NSS initialized NSS crypto library initialized FIPS HMAC integrity support [enabled] FIPS mode enabled for pluto daemon NSS library is running in FIPS mode FIPS HMAC integrity verification self-test passed", "ipsec pluto --selftest 2>&1 | grep disabled Encryption algorithm CAMELLIA_CTR disabled; not FIPS compliant Encryption algorithm CAMELLIA_CBC disabled; not FIPS compliant Encryption algorithm SERPENT_CBC disabled; not FIPS compliant Encryption algorithm TWOFISH_CBC disabled; not FIPS compliant Encryption algorithm TWOFISH_SSH disabled; not FIPS compliant Encryption algorithm NULL disabled; not FIPS compliant Encryption algorithm CHACHA20_POLY1305 disabled; not FIPS compliant Hash algorithm MD5 disabled; not FIPS compliant PRF algorithm HMAC_MD5 disabled; not FIPS compliant PRF algorithm AES_XCBC disabled; not FIPS compliant Integrity algorithm HMAC_MD5_96 disabled; not FIPS compliant Integrity algorithm HMAC_SHA2_256_TRUNCBUG disabled; not FIPS compliant Integrity algorithm AES_XCBC_96 disabled; not FIPS compliant DH algorithm MODP1024 disabled; not FIPS compliant DH algorithm MODP1536 disabled; not FIPS compliant DH algorithm DH31 disabled; not FIPS compliant", "ipsec pluto --selftest 2>&1 | grep ESP | grep FIPS | sed \"s/^.*FIPS//\" {256,192,*128} aes_ccm, aes_ccm_c {256,192,*128} aes_ccm_b {256,192,*128} aes_ccm_a [*192] 3des {256,192,*128} aes_gcm, aes_gcm_c {256,192,*128} aes_gcm_b {256,192,*128} aes_gcm_a {256,192,*128} aesctr {256,192,*128} aes {256,192,*128} aes_gmac sha, sha1, sha1_96, hmac_sha1 sha512, sha2_512, sha2_512_256, hmac_sha2_512 sha384, sha2_384, sha2_384_192, hmac_sha2_384 sha2, sha256, sha2_256, sha2_256_128, hmac_sha2_256 aes_cmac null null, dh0 dh14 dh15 dh16 dh17 dh18 ecp_256, ecp256 ecp_384, ecp384 ecp_521, ecp521", "certutil -N -d sql:/etc/ipsec.d Enter Password or Pin for \"NSS Certificate DB\": Enter a password which will be used to encrypt your keys. The password should be at least 8 characters long, and should contain at least one non-alphabetic character. 
Enter new password:", "cat /etc/ipsec.d/nsspassword NSS Certificate DB:_<password>_", "<token_1> : <password1> <token_2> : <password2>", "systemctl restart ipsec", "systemctl status ipsec ● ipsec.service - Internet Key Exchange (IKE) Protocol Daemon for IPsec Loaded: loaded (/usr/lib/systemd/system/ipsec.service; enabled; vendor preset: disable> Active: active (running)", "journalctl -u ipsec pluto[6214]: Initializing NSS using read-write database \"sql:/etc/ipsec.d\" pluto[6214]: NSS Password from file \"/etc/ipsec.d/nsspassword\" for token \"NSS Certificate DB\" with length 20 passed to NSS pluto[6214]: NSS crypto library initialized", "listen-tcp=yes", "enable-tcp=fallback tcp-remoteport=4500", "enable-tcp=yes tcp-remoteport=4500", "systemctl restart ipsec", "ethtool -S enp1s0 | grep -E \"_ipsec\" tx_ipsec: 10 rx_ipsec: 10", "ping -c 5 remote_ip_address", "ethtool -S enp1s0 | grep -E \"_ipsec\" tx_ipsec: 15 rx_ipsec: 15", "nmcli connection modify bond0 ethtool.feature-esp-hw-offload on", "nmcli connection up bond0", "conn example nic-offload=yes", "systemctl restart ipsec", "grep \"Currently Active Slave\" /proc/net/bonding/ bond0 Currently Active Slave: enp1s0", "ethtool -S enp1s0 | grep -E \"_ipsec\" tx_ipsec: 10 rx_ipsec: 10", "ping -c 5 remote_ip_address", "ethtool -S enp1s0 | grep -E \"_ipsec\" tx_ipsec: 15 rx_ipsec: 15", "- name: Host to host VPN hosts: managed-node-01.example.com, managed-node-02.example.com roles: - rhel-system-roles.vpn vars: vpn_connections: - hosts: managed-node-01.example.com: managed-node-02.example.com: vpn_manage_firewall: true vpn_manage_selinux: true", "vpn_connections: - hosts: managed-node-01.example.com: <external_node> : hostname: <IP_address_or_hostname>", "- name: Multiple VPN hosts: managed-node-01.example.com, managed-node-02.example.com roles: - rhel-system-roles.vpn vars: vpn_connections: - name: control_plane_vpn hosts: managed-node-01.example.com: hostname: 192.0.2.0 # IP for the control plane managed-node-02.example.com: hostname: 192.0.2.1 - name: data_plane_vpn hosts: managed-node-01.example.com: hostname: 10.0.0.1 # IP for the data plane managed-node-02.example.com: hostname: 10.0.0.2", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ipsec status | grep <connection_name>", "ipsec trafficstatus | grep <connection_name>", "ipsec auto --add <connection_name>", "- name: Mesh VPN hosts: managed-node-01.example.com, managed-node-02.example.com, managed-node-03.example.com roles: - rhel-system-roles.vpn vars: vpn_connections: - opportunistic: true auth_method: cert policies: - policy: private cidr: default - policy: private-or-clear cidr: 198.51.100.0/24 - policy: private cidr: 192.0.2.0/24 - policy: clear cidr: 192.0.2.7/32 vpn_manage_firewall: true vpn_manage_selinux: true", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "conn MyExample ikev2=never ike=aes-sha2,aes-sha1;modp2048 esp=aes_gcm,aes-sha2,aes-sha1", "include /etc/crypto-policies/back-ends/libreswan.config", "ipsec trafficstatus 006 #8: \"vpn.example.com\"[1] 192.0.2.1, type=ESP, add_time=1595296930, inBytes=5999, outBytes=3231, id='@vpn.example.com', lease=100.64.13.5/32", "ipsec auto --add vpn.example.com 002 added connection description \"vpn.example.com\"", "ipsec auto --up vpn.example.com", "ipsec auto --up vpn.example.com 181 \"vpn.example.com\"[1] 192.0.2.2 #15: initiating IKEv2 IKE SA 181 \"vpn.example.com\"[1] 192.0.2.2 #15: STATE_PARENT_I1: sent v2I1, expected v2R1 010 \"vpn.example.com\"[1] 
192.0.2.2 #15: STATE_PARENT_I1: retransmission; will wait 0.5 seconds for response 010 \"vpn.example.com\"[1] 192.0.2.2 #15: STATE_PARENT_I1: retransmission; will wait 1 seconds for response 010 \"vpn.example.com\"[1] 192.0.2.2 #15: STATE_PARENT_I1: retransmission; will wait 2 seconds for", "ipsec auto --up vpn.example.com 002 \"vpn.example.com\" #9: initiating Main Mode 102 \"vpn.example.com\" #9: STATE_MAIN_I1: sent MI1, expecting MR1 010 \"vpn.example.com\" #9: STATE_MAIN_I1: retransmission; will wait 0.5 seconds for response 010 \"vpn.example.com\" #9: STATE_MAIN_I1: retransmission; will wait 1 seconds for response 010 \"vpn.example.com\" #9: STATE_MAIN_I1: retransmission; will wait 2 seconds for response", "tcpdump -i eth0 -n -n esp or udp port 500 or udp port 4500 or tcp port 4500", "ipsec auto --up vpn.example.com 000 \"vpn.example.com\"[1] 192.0.2.2 #16: ERROR: asynchronous network error report on wlp2s0 (192.0.2.2:500), complainant 198.51.100.1: Connection refused [errno 111, origin ICMP type 3 code 3 (not authenticated)]", "ipsec auto --up vpn.example.com 003 \"vpn.example.com\"[1] 193.110.157.148 #3: dropping unexpected IKE_SA_INIT message containing NO_PROPOSAL_CHOSEN notification; message payloads: N; missing payloads: SA,KE,Ni", "ipsec auto --up vpn.example.com 182 \"vpn.example.com\"[1] 193.110.157.148 #5: STATE_PARENT_I2: sent v2I2, expected v2R2 {auth=IKEv2 cipher=AES_GCM_16_256 integ=n/a prf=HMAC_SHA2_256 group=MODP2048} 002 \"vpn.example.com\"[1] 193.110.157.148 #6: IKE_AUTH response contained the error notification NO_PROPOSAL_CHOSEN", "ipsec auto --up vpn.example.com 1v2 \"vpn.example.com\" #1: STATE_PARENT_I2: sent v2I2, expected v2R2 {auth=IKEv2 cipher=AES_GCM_16_256 integ=n/a prf=HMAC_SHA2_512 group=MODP2048} 002 \"vpn.example.com\" #2: IKE_AUTH response contained the error notification TS_UNACCEPTABLE", "ipsec auto --up vpn.example.com 031 \"vpn.example.com\" #2: STATE_QUICK_I1: 60 second timeout exceeded after 0 retransmits. 
<selinux context=\"system_u:system_r:NetworkManager_t:s0\"/> <selinux context=\"system_u:system_r:virtd_t:s0-s0:c0.c1023\"/> <user id=\"0\"/> </whitelist>", "<?xml version=\"1.0\" encoding=\"utf-8\"?> <whitelist> <command name=\"/usr/libexec/platform-python -s /bin/firewall-cmd*\"/> <selinux context=\"system_u:system_r:NetworkManager_t:s0\"/> <user id=\"815\"/> <user name=\"user\"/> </whitelist>", "echo \"net.ipv4.ip_forward=1\" > /etc/sysctl.d/95-IPv4-forwarding.conf sysctl -p /etc/sysctl.d/95-IPv4-forwarding.conf", "firewall-cmd --get-active-zones", "firewall-cmd --zone=internal --change-interface= interface_name --permanent", "firewall-cmd --zone=internal --add-interface=enp1s0 --add-interface=wlp0s20", "firewall-cmd --zone=internal --add-forward", "ncat -e /usr/bin/cat -l 12345", "ncat <other_host> 12345", "--- - name: Reset firewalld example hosts: managed-node-01.example.com tasks: - name: Reset firewalld ansible.builtin.include_role: name: rhel-system-roles.firewall vars: firewall: - previous: replaced", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m ansible.builtin.command -a 'firewall-cmd --list-all-zones'", "--- - name: Configure firewalld hosts: managed-node-01.example.com tasks: - name: Forward incoming traffic on port 8080 to 443 ansible.builtin.include_role: name: rhel-system-roles.firewall vars: firewall: - forward_port: 8080/tcp;443; state: enabled runtime: true permanent: true", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m ansible.builtin.command -a 'firewall-cmd --list-forward-ports' managed-node-01.example.com | CHANGED | rc=0 >> port=8080:proto=tcp:toport=443:toaddr=", "--- - name: Configure firewalld hosts: managed-node-01.example.com tasks: - name: Creating a DMZ with access to HTTPS port and masquerading for hosts in DMZ ansible.builtin.include_role: name: rhel-system-roles.firewall vars: firewall: - zone: dmz interface: enp1s0 service: https state: enabled runtime: true permanent: true", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m ansible.builtin.command -a 'firewall-cmd --zone=dmz --list-all' managed-node-01.example.com | CHANGED | rc=0 >> dmz (active) target: default icmp-block-inversion: no interfaces: enp1s0 sources: services: https ssh ports: protocols: forward: no masquerade: no forward-ports: source-ports: icmp-blocks:", "table <table_address_family> <table_name> { }", "nft add table <table_address_family> <table_name>", "table <table_address_family> <table_name> { chain <chain_name> { type <type> hook <hook> priority <priority> policy <policy> ; } }", "nft add chain <table_address_family> <table_name> <chain_name> { type <type> hook <hook> priority <priority> \\; policy <policy> \\; }", "table <table_address_family> <table_name> { chain <chain_name> { type <type> hook <hook> priority <priority> ; policy <policy> ; <rule> } }", "nft add rule <table_address_family> <table_name> <chain_name> <rule>", "nft add table inet nftables_svc", "nft add chain inet nftables_svc INPUT { type filter hook input priority filter \\; policy accept \\; }", "nft add rule inet nftables_svc INPUT tcp dport 22 accept nft add rule inet nftables_svc INPUT tcp dport 443 accept nft add rule inet nftables_svc INPUT reject with icmpx type port-unreachable", "nft -a list table inet nftables_svc table inet nftables_svc { # handle 13 chain INPUT { # handle 
1 type filter hook input priority filter ; policy accept ; tcp dport 22 accept # handle 2 tcp dport 443 accept # handle 3 reject # handle 4 } }", "nft insert rule inet nftables_svc INPUT position 3 tcp dport 636 accept", "nft add rule inet nftables_svc INPUT position 3 tcp dport 80 accept", "nft -a list table inet nftables_svc table inet nftables_svc { # handle 13 chain INPUT { # handle 1 type filter hook input priority filter ; policy accept ; tcp dport 22 accept # handle 2 tcp dport 636 accept # handle 5 tcp dport 443 accept # handle 3 tcp dport 80 accept # handle 6 reject # handle 4 } }", "nft delete rule inet nftables_svc INPUT handle 6", "nft -a list table inet nftables_svc table inet nftables_svc { # handle 13 chain INPUT { # handle 1 type filter hook input priority filter ; policy accept ; tcp dport 22 accept # handle 2 tcp dport 636 accept # handle 5 tcp dport 443 accept # handle 3 reject # handle 4 } }", "nft flush chain inet nftables_svc INPUT", "nft list table inet nftables_svc table inet nftables_svc { chain INPUT { type filter hook input priority filter; policy accept } }", "nft delete chain inet nftables_svc INPUT", "nft list table inet nftables_svc table inet nftables_svc { }", "nft delete table inet nftables_svc", "iptables-save >/root/iptables.dump ip6tables-save >/root/ip6tables.dump", "iptables-restore-translate -f /root/iptables.dump > /etc/nftables/ruleset-migrated-from-iptables.nft ip6tables-restore-translate -f /root/ip6tables.dump > /etc/nftables/ruleset-migrated-from-ip6tables.nft", "include \"/etc/nftables/ruleset-migrated-from-iptables.nft\" include \"/etc/nftables/ruleset-migrated-from-ip6tables.nft\"", "systemctl disable --now iptables", "systemctl enable --now nftables", "nft list ruleset", "iptables-translate -A INPUT -s 192.0.2.0/24 -j ACCEPT nft add rule ip filter INPUT ip saddr 192.0.2.0/24 counter accept", "iptables-translate -A INPUT -j CHECKSUM --checksum-fill nft # -A INPUT -j CHECKSUM --checksum-fill", "nft list table inet firewalld nft list table ip firewalld nft list table ip6 firewalld", "#!/usr/sbin/nft -f Flush the rule set flush ruleset table inet example_table { chain example_chain { # Chain for incoming packets that drops all packets that # are not explicitly allowed by any rule in this chain type filter hook input priority 0; policy drop; # Accept connections to port 22 (ssh) tcp dport ssh accept } }", "#!/usr/sbin/nft -f Flush the rule set flush ruleset Create a table add table inet example_table Create a chain for incoming packets that drops all packets that are not explicitly allowed by any rule in this chain add chain inet example_table example_chain { type filter hook input priority 0 ; policy drop ; } Add a rule that accepts connections to port 22 (ssh) add rule inet example_table example_chain tcp dport ssh accept", "nft -f /etc/nftables/<example_firewall_script>.nft", "#!/usr/sbin/nft -f", "chown root /etc/nftables/<example_firewall_script>.nft", "chmod u+x /etc/nftables/<example_firewall_script>.nft", "/etc/nftables/<example_firewall_script>.nft", "Flush the rule set flush ruleset add table inet example_table # Create a table", "define INET_DEV = enp1s0", "add rule inet example_table example_chain iifname USDINET_DEV tcp dport ssh accept", "define DNS_SERVERS = { 192.0.2.1 , 192.0.2.2 }", "add rule inet example_table example_chain ip daddr USDDNS_SERVERS accept", "include \"example.nft\"", "include \"/etc/nftables/rulesets/*.nft\"", "include \"/etc/nftables/_example_.nft\"", "systemctl start nftables", "systemctl enable nftables", 
"nft add table nat", "nft add chain nat postrouting { type nat hook postrouting priority 100 \\; }", "nft add rule nat postrouting oifname \" ens3 \" masquerade", "nft add table nat", "nft add chain nat postrouting { type nat hook postrouting priority 100 \\; }", "nft add rule nat postrouting oifname \" ens3 \" snat to 192.0.2.1", "nft add table nat", "nft -- add chain nat prerouting { type nat hook prerouting priority -100 \\; } nft add chain nat postrouting { type nat hook postrouting priority 100 \\; }", "nft add rule nat prerouting iifname ens3 tcp dport { 80, 443 } dnat to 192.0.2.1", "nft add rule nat postrouting oifname \"ens3\" masquerade", "nft add rule nat postrouting oifname \"ens3\" snat to 198.51.100.1", "echo \"net.ipv4.ip_forward=1\" > /etc/sysctl.d/95-IPv4-forwarding.conf sysctl -p /etc/sysctl.d/95-IPv4-forwarding.conf", "nft add table nat", "nft -- add chain nat prerouting { type nat hook prerouting priority -100 \\; }", "nft add rule nat prerouting tcp dport 22 redirect to 2222", "nft add table inet <example-table>", "nft add flowtable inet <example-table> <example-flowtable> { hook ingress priority filter \\; devices = { enp1s0, enp7s0 } \\; }", "nft add chain inet <example-table> <example-forwardchain> { type filter hook forward priority filter \\; }", "nft add rule inet <example-table> <example-forwardchain> ct state established flow add @ <example-flowtable>", "nft list table inet <example-table> table inet example-table { flowtable example-flowtable { hook ingress priority filter devices = { enp1s0, enp7s0 } } chain example-forwardchain { type filter hook forward priority filter; policy accept; ct state established flow add @example-flowtable } }", "nft add rule inet example_table example_chain tcp dport { 22, 80, 443 } accept", "nft list table inet example_table table inet example_table { chain example_chain { type filter hook input priority filter; policy accept; tcp dport { ssh, http, https } accept } }", "nft add set inet example_table example_set { type ipv4_addr \\; }", "nft add set inet example_table example_set { type ipv4_addr \\; flags interval \\; }", "nft add rule inet example_table example_chain ip saddr @ example_set drop", "nft add element inet example_table example_set { 192.0.2.1, 192.0.2.2 }", "nft add element inet example_table example_set { 192.0.2.0-192.0.2.255 }", "nft add table inet example_table", "nft add chain inet example_table tcp_packets", "nft add rule inet example_table tcp_packets counter", "nft add chain inet example_table udp_packets", "nft add rule inet example_table udp_packets counter", "nft add chain inet example_table incoming_traffic { type filter hook input priority 0 \\; }", "nft add rule inet example_table incoming_traffic ip protocol vmap { tcp : jump tcp_packets, udp : jump udp_packets }", "nft list table inet example_table table inet example_table { chain tcp_packets { counter packets 36379 bytes 2103816 } chain udp_packets { counter packets 10 bytes 1559 } chain incoming_traffic { type filter hook input priority filter; policy accept; ip protocol vmap { tcp : jump tcp_packets, udp : jump udp_packets } } }", "nft add table ip example_table", "nft add chain ip example_table example_chain { type filter hook input priority 0 \\; }", "nft add map ip example_table example_map { type ipv4_addr : verdict \\; }", "nft add rule example_table example_chain ip saddr vmap @ example_map", "nft add element ip example_table example_map { 192.0.2.1 : accept, 192.0.2.2 : drop }", "nft add element ip example_table example_map { 192.0.2.3 : 
accept }", "nft delete element ip example_table example_map { 192.0.2.1 }", "nft list ruleset table ip example_table { map example_map { type ipv4_addr : verdict elements = { 192.0.2.2 : drop, 192.0.2.3 : accept } } chain example_chain { type filter hook input priority filter; policy accept; ip saddr vmap @example_map } }", ":msg, startswith, \"nft drop\" -/var/log/nftables.log & stop", "systemctl restart rsyslog", "/var/log/nftables.log { size +10M maxage 30 sharedscripts postrotate /usr/bin/systemctl kill -s HUP rsyslog.service >/dev/null 2>&1 || true endscript }", "Remove all rules flush ruleset Table for both IPv4 and IPv6 rules table inet nftables_svc { # Define variables for the interface name define INET_DEV = enp1s0 define LAN_DEV = enp7s0 define DMZ_DEV = enp8s0 # Set with the IPv4 addresses of admin PCs set admin_pc_ipv4 { type ipv4_addr elements = { 10.0.0.100, 10.0.0.200 } } # Chain for incoming trafic. Default policy: drop chain INPUT { type filter hook input priority filter policy drop # Accept packets in established and related state, drop invalid packets ct state vmap { established:accept, related:accept, invalid:drop } # Accept incoming traffic on loopback interface iifname lo accept # Allow request from LAN and DMZ to local DNS server iifname { USDLAN_DEV, USDDMZ_DEV } meta l4proto { tcp, udp } th dport 53 accept # Allow admins PCs to access the router using SSH iifname USDLAN_DEV ip saddr @admin_pc_ipv4 tcp dport 22 accept # Last action: Log blocked packets # (packets that were not accepted in previous rules in this chain) log prefix \"nft drop IN : \" } # Chain for outgoing traffic. Default policy: drop chain OUTPUT { type filter hook output priority filter policy drop # Accept packets in established and related state, drop invalid packets ct state vmap { established:accept, related:accept, invalid:drop } # Accept outgoing traffic on loopback interface oifname lo accept # Allow local DNS server to recursively resolve queries oifname USDINET_DEV meta l4proto { tcp, udp } th dport 53 accept # Last action: Log blocked packets log prefix \"nft drop OUT: \" } # Chain for forwarding traffic. Default policy: drop chain FORWARD { type filter hook forward priority filter policy drop # Accept packets in established and related state, drop invalid packets ct state vmap { established:accept, related:accept, invalid:drop } # IPv4 access from LAN and internet to the HTTPS server in the DMZ iifname { USDLAN_DEV, USDINET_DEV } oifname USDDMZ_DEV ip daddr 198.51.100.5 tcp dport 443 accept # IPv6 access from internet to the HTTPS server in the DMZ iifname USDINET_DEV oifname USDDMZ_DEV ip6 daddr 2001:db8:b::5 tcp dport 443 accept # Access from LAN and DMZ to HTTPS servers on the internet iifname { USDLAN_DEV, USDDMZ_DEV } oifname USDINET_DEV tcp dport 443 accept # Last action: Log blocked packets log prefix \"nft drop FWD: \" } # Postrouting chain to handle SNAT chain postrouting { type nat hook postrouting priority srcnat; policy accept; # SNAT for IPv4 traffic from LAN to internet iifname USDLAN_DEV oifname USDINET_DEV snat ip to 203.0.113.1 } }", "include \"/etc/nftables/firewall.nft\"", "echo \"net.ipv4.ip_forward=1\" > /etc/sysctl.d/95-IPv4-forwarding.conf sysctl -p /etc/sysctl.d/95-IPv4-forwarding.conf", "systemctl enable --now nftables", "nft list ruleset", "ssh router.example.com ssh: connect to host router.example.com port 22 : Network is unreachable", "journalctl -k -g \"nft drop\" Oct 14 17:27:18 router kernel: nft drop IN : IN=enp8s0 OUT= MAC=... 
SRC=198.51.100.5 DST=198.51.100.1 ... PROTO=TCP SPT=40464 DPT=22 ... SYN", "Oct 14 17:27:18 router kernel: nft drop IN : IN=enp8s0 OUT= MAC=... SRC=198.51.100.5 DST=198.51.100.1 ... PROTO=TCP SPT=40464 DPT=22 ... SYN", "nft add table ip nat", "nft -- add chain ip nat prerouting { type nat hook prerouting priority -100 \\; }", "nft add rule ip nat prerouting tcp dport 8022 redirect to :22", "nft add table ip nat", "nft -- add chain ip nat prerouting { type nat hook prerouting priority -100 \\; } nft add chain ip nat postrouting { type nat hook postrouting priority 100 \\; }", "nft add rule ip nat prerouting tcp dport 443 dnat to 192.0.2.1", "nft add rule ip nat postrouting daddr 192.0.2.1 masquerade", "echo \"net.ipv4.ip_forward=1\" > /etc/sysctl.d/95-IPv4-forwarding.conf sysctl -p /etc/sysctl.d/95-IPv4-forwarding.conf", "nft add table inet filter", "nft add chain inet filter input { type filter hook input priority 0 \\; }", "nft add set inet filter limit-ssh { type ipv4_addr\\; flags dynamic \\;}", "nft add rule inet filter input tcp dport ssh ct state new add @limit-ssh { ip saddr ct count over 2 } counter reject", "nft list set inet filter limit-ssh table inet filter { set limit-ssh { type ipv4_addr size 65535 flags dynamic elements = { 192.0.2.1 ct count over 2 , 192.0.2.2 ct count over 2 } } }", "nft add table ip filter", "nft add chain ip filter input { type filter hook input priority 0 \\; }", "nft add rule ip filter input ip protocol tcp ct state new, untracked meter ratemeter { ip saddr timeout 5m limit rate over 10/minute } drop", "nft list meter ip filter ratemeter table ip filter { meter ratemeter { type ipv4_addr size 65535 flags dynamic,timeout elements = { 192.0.2.1 limit rate over 10/minute timeout 5m expires 4m58s224ms } } }", "nft add rule inet example_table example_chain tcp dport 22 counter accept", "nft list ruleset table inet example_table { chain example_chain { type filter hook input priority filter; policy accept; tcp dport ssh counter packets 6872 bytes 105448565 accept } }", "nft --handle list chain inet example_table example_chain table inet example_table { chain example_chain { # handle 1 type filter hook input priority filter; policy accept; tcp dport ssh accept # handle 4 } }", "nft replace rule inet example_table example_chain handle 4 tcp dport 22 counter accept", "nft list ruleset table inet example_table { chain example_chain { type filter hook input priority filter; policy accept; tcp dport ssh counter packets 6872 bytes 105448565 accept } }", "nft --handle list chain inet example_table example_chain table inet example_table { chain example_chain { # handle 1 type filter hook input priority filter; policy accept; tcp dport ssh accept # handle 4 } }", "nft replace rule inet example_table example_chain handle 4 tcp dport 22 meta nftrace set 1 accept", "nft monitor | grep \"inet example_table example_chain\" trace id 3c5eb15e inet example_table example_chain packet: iif \"enp1s0\" ether saddr 52:54:00:17:ff:e4 ether daddr 52:54:00:72:2f:6e ip saddr 192.0.2.1 ip daddr 192.0.2.2 ip dscp cs0 ip ecn not-ect ip ttl 64 ip id 49710 ip protocol tcp ip length 60 tcp sport 56728 tcp dport ssh tcp flags == syn tcp window 64240 trace id 3c5eb15e inet example_table example_chain rule tcp dport ssh nftrace set 1 accept (verdict accept)", "nft list ruleset > file .nft", "nft -j list ruleset > file .json", "nft -f file .nft", "nft -j -f file .json", "xdp-filter load enp1s0", "xdp-filter port 22", "xdp-filter ip 192.0.2.1 -m src", "xdp-filter ether 00:53:00:AA:07:BE -m src", 
"xdp-filter status", "xdp-filter load enp1s0 -p deny", "xdp-filter port 22", "xdp-filter ip 192.0.2.1", "xdp-filter ether 00:53:00:AA:07:BE", "xdp-filter status", "xdpdump -i enp1s0 -w /root/capture.pcap", "yum install bcc-tools", "ls -l /usr/share/bcc/tools/ -rwxr-xr-x. 1 root root 4198 Dec 14 17:53 dcsnoop -rwxr-xr-x. 1 root root 3931 Dec 14 17:53 dcstat -rwxr-xr-x. 1 root root 20040 Dec 14 17:53 deadlock_detector -rw-r--r--. 1 root root 7105 Dec 14 17:53 deadlock_detector.c drwxr-xr-x. 3 root root 8192 Mar 11 10:28 doc -rwxr-xr-x. 1 root root 7588 Dec 14 17:53 execsnoop -rwxr-xr-x. 1 root root 6373 Dec 14 17:53 ext4dist -rwxr-xr-x. 1 root root 10401 Dec 14 17:53 ext4slower", "/usr/share/bcc/tools/tcpaccept PID COMM IP RADDR RPORT LADDR LPORT 843 sshd 4 192.0.2.17 50598 192.0.2.1 22 1107 ns-slapd 4 198.51.100.6 38772 192.0.2.1 389 1107 ns-slapd 4 203.0.113.85 38774 192.0.2.1 389", "/usr/share/bcc/tools/tcpconnect PID COMM IP SADDR DADDR DPORT 31346 curl 4 192.0.2.1 198.51.100.16 80 31348 telnet 4 192.0.2.1 203.0.113.231 23 31361 isc-worker00 4 192.0.2.1 192.0.2.254 53", "/usr/share/bcc/tools/tcpconnlat PID COMM IP SADDR DADDR DPORT LAT(ms) 32151 isc-worker00 4 192.0.2.1 192.0.2.254 53 0.60 32155 ssh 4 192.0.2.1 203.0.113.190 22 26.34 32319 curl 4 192.0.2.1 198.51.100.59 443 188.96", "/usr/share/bcc/tools/tcpdrop TIME PID IP SADDR:SPORT > DADDR:DPORT STATE (FLAGS) 13:28:39 32253 4 192.0.2.85:51616 > 192.0.2.1:22 CLOSE_WAIT (FIN|ACK) b'tcp_drop+0x1' b'tcp_data_queue+0x2b9' 13:28:39 1 4 192.0.2.85:51616 > 192.0.2.1:22 CLOSE (ACK) b'tcp_drop+0x1' b'tcp_rcv_state_process+0xe2'", "/usr/share/bcc/tools/tcplife -L 22 PID COMM LADDR LPORT RADDR RPORT TX_KB RX_KB MS 19392 sshd 192.0.2.1 22 192.0.2.17 43892 53 52 6681.95 19431 sshd 192.0.2.1 22 192.0.2.245 43902 81 249381 7585.09 19487 sshd 192.0.2.1 22 192.0.2.121 43970 6998 7 16740.35", "/usr/share/bcc/tools/tcpretrans TIME PID IP LADDR:LPORT T> RADDR:RPORT STATE 00:23:02 0 4 192.0.2.1:22 R> 198.51.100.0:26788 ESTABLISHED 00:23:02 0 4 192.0.2.1:22 R> 198.51.100.0:26788 ESTABLISHED 00:45:43 0 4 192.0.2.1:22 R> 198.51.100.0:17634 ESTABLISHED", "/usr/share/bcc/tools/tcpstates SKADDR C-PID C-COMM LADDR LPORT RADDR RPORT OLDSTATE -> NEWSTATE MS ffff9cd377b3af80 0 swapper/1 0.0.0.0 22 0.0.0.0 0 LISTEN -> SYN_RECV 0.000 ffff9cd377b3af80 0 swapper/1 192.0.2.1 22 192.0.2.45 53152 SYN_RECV -> ESTABLISHED 0.067 ffff9cd377b3af80 818 sssd_nss 192.0.2.1 22 192.0.2.45 53152 ESTABLISHED -> CLOSE_WAIT 65636.773 ffff9cd377b3af80 1432 sshd 192.0.2.1 22 192.0.2.45 53152 CLOSE_WAIT -> LAST_ACK 24.409 ffff9cd377b3af80 1267 pulseaudio 192.0.2.1 22 192.0.2.45 53152 LAST_ACK -> CLOSE 0.376", "/usr/share/bcc/tools/tcpsubnet 192.0.2.0/24,198.51.100.0/24,0.0.0.0/0 Tracing... Output every 1 secs. Hit Ctrl-C to end [02/21/20 10:04:50] 192.0.2.0/24 856 198.51.100.0/24 7467 [02/21/20 10:04:51] 192.0.2.0/24 1200 198.51.100.0/24 8763 0.0.0.0/0 673", "/usr/share/bcc/tools/tcptop 13:46:29 loadavg: 0.10 0.03 0.01 1/215 3875 PID COMM LADDR RADDR RX_KB TX_KB 3853 3853 192.0.2.1:22 192.0.2.165:41838 32 102626 1285 sshd 192.0.2.1:22 192.0.2.45:39240 0 0", "/usr/share/bcc/tools/tcptracer Tracing TCP established connections. Ctrl-C to end. 
T PID COMM IP SADDR DADDR SPORT DPORT A 1088 ns-slapd 4 192.0.2.153 192.0.2.1 0 65535 A 845 sshd 4 192.0.2.1 192.0.2.67 22 42302 X 4502 sshd 4 192.0.2.1 192.0.2.67 22 42302", "/usr/share/bcc/tools/solisten PID COMM PROTO BACKLOG PORT ADDR 3643 nc TCPv4 1 4242 0.0.0.0 3659 nc TCPv6 1 4242 2001:db8:1::1 4221 redis-server TCPv6 128 6379 :: 4221 redis-server TCPv4 128 6379 0.0.0.0 .", "/usr/share/bcc/tools/softirqs Tracing soft irq event time... Hit Ctrl-C to end. ^C SOFTIRQ TOTAL_usecs tasklet 166 block 9152 net_rx 12829 rcu 53140 sched 182360 timer 306256", "/usr/share/bcc/tools/netqtop -n enp1s0 -i 2 Fri Jan 31 18:08:55 2023 TX QueueID avg_size [0, 64) [64, 512) [512, 2K) [2K, 16K) [16K, 64K) 0 0 0 0 0 0 0 Total 0 0 0 0 0 0 RX QueueID avg_size [0, 64) [64, 512) [512, 2K) [2K, 16K) [16K, 64K) 0 38.0 1 0 0 0 0 Total 38.0 1 0 0 0 0 ----------------------------------------------------------------------------- Fri Jan 31 18:08:57 2023 TX QueueID avg_size [0, 64) [64, 512) [512, 2K) [2K, 16K) [16K, 64K) 0 0 0 0 0 0 0 Total 0 0 0 0 0 0 RX QueueID avg_size [0, 64) [64, 512) [512, 2K) [2K, 16K) [16K, 64K) 0 38.0 1 0 0 0 0 Total 38.0 1 0 0 0 0 -----------------------------------------------------------------------------", "ip address show 1: enp1s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000 link/ether 98:fa:9b:a4:34:09 brd ff:ff:ff:ff:ff:ff", "ip link set enp1s0 promisc on", "ip link set enp1s0 promisc off", "ip link show enp1s0 1: enp1s0: <NO-CARRIER,BROADCAST,MULTICAST, PROMISC ,UP> mtu 1500 qdisc fq_codel state DOWN mode DEFAULT group default qlen 1000 link/ether 98:fa:9b:a4:34:09 brd ff:ff:ff:ff:ff:ff", "ip address show 1: enp1s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000 link/ether 98:fa:9b:a4:34:09 brd ff:ff:ff:ff:ff:ff", "nmcli connection modify enp1s0 ethernet.accept-all-mac-addresses yes", "nmcli connection modify enp1s0 ethernet.accept-all-mac-addresses no", "nmcli connection up enp1s0", "nmcli connection show enp1s0 802-3-ethernet.accept-all-mac-addresses:1 (true)", "--- interfaces: - name: enp1s0 type: ethernet state: up accept -all-mac-address: true", "nmstatectl apply ~/enp1s0.yml", "nmstatectl show enp1s0 interfaces: - name: enp1s0 type: ethernet state: up accept-all-mac-addresses: true", "nmcli connection add type ethernet ifname enp1s0 con-name enp1s0 autoconnect no", "nmcli connection modify enp1s0 +tc.qdisc \"root prio handle 10:\"", "nmcli connection modify enp1s0 +tc.qdisc \"ingress handle ffff:\"", "nmcli connection modify enp1s0 +tc.tfilter \"parent ffff: matchall action mirred egress mirror dev enp7s0 \" nmcli connection modify enp1s0 +tc.tfilter \"parent 10: matchall action mirred egress mirror dev enp7s0 \"", "nmcli connection up enp1s0", "yum install tcpdump", "tcpdump -i enp7s0", "interfaces: - name: enp1s0 type: ethernet lldp: enabled: true - name: enp2s0 type: ethernet lldp: enabled: true - name: enp3s0 type: ethernet lldp: enabled: true", "nmstatectl apply ~/enable-lldp.yml", "nmstate-autoconf -d enp1s0 , enp2s0 , enp3s0 --- interfaces: - name: prod-net type: vlan state: up vlan: base-iface: bond100 id: 100 - name: mgmt-net type: vlan state: up vlan: base-iface: enp3s0 id: 200 - name: bond100 type: bond state: up link-aggregation: mode: balance-rr port: - enp1s0 - enp2s0", "nmstate-autoconf enp1s0 , enp2s0 , enp3s0", "nmstatectl show <interface_name>", "nmcli connection show Example-connection 802-3-ethernet.speed: 0 802-3-ethernet.duplex: -- 
802-3-ethernet.auto-negotiate: no", "nmcli connection modify Example-connection 802-3-ethernet.auto-negotiate yes 802-3-ethernet.speed 10000 802-3-ethernet.duplex full", "nmcli connection up Example-connection", "ethtool enp1s0 Settings for enp1s0: Speed: 10000 Mb/s Duplex: Full Auto-negotiation: on Link detected: yes", "yum install dpdk", "tipc", "systemctl start systemd-modules-load", "lsmod | grep tipc tipc 311296 0", "tipc node set identity host_name", "tipc bearer enable media eth device enp1s0", "tipc link list broadcast-link: up 5254006b74be:enp1s0-525400df55d1:enp1s0: up", "tipc nametable show Type Lower Upper Scope Port Node 0 1795222054 1795222054 cluster 0 5254006b74be 0 3741353223 3741353223 cluster 0 525400df55d1 1 1 1 node 2399405586 5254006b74be 2 3741353223 3741353223 node 0 5254006b74be", "yum install NetworkManager-cloud-setup", "systemctl edit nm-cloud-setup.service", "[Service] Environment=NM_CLOUD_SETUP_EC2=yes", "systemctl daemon-reload", "systemctl enable --now nm-cloud-setup.service", "systemctl enable --now nm-cloud-setup.timer" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html-single/configuring_and_managing_networking/index
Chapter 1. Installing Fuse on JBoss EAP
Chapter 1. Installing Fuse on JBoss EAP To develop Fuse applications on JBoss EAP, install JBoss EAP, and then install Fuse. Prerequisites Red Hat Customer Portal login credentials. A supported version of a Java runtime is installed. Red Hat Fuse Supported Runtimes lists the Java runtimes that are supported on different operating systems. Procedure Install JBoss EAP 7.4.16: Go to the Enterprise Application Platform Software Downloads page on the Red Hat Customer Portal. When prompted, log in to your customer account. In the Version dropdown menu, select 7.4 . Click the Download link for the Red Hat JBoss Enterprise Application Platform 7.4 Installer package. Run the downloaded installer. In the following example, replace DOWNLOAD_LOCATION with the location of the JBoss EAP installer on your system: Accept the terms and conditions. Choose your preferred installation path, which is represented by EAP_HOME for the JBoss EAP runtime. Create an administrative user and be sure to note this administrative user's credentials. You need them to log in to the Fuse Management Console. Accept the default settings on the remaining screens. Check the Red Hat Fuse Supported Configurations page for any notes or advice on the compatibility of JBoss EAP patches with Red Hat Fuse. If relevant, install any additional JBoss EAP patches. Install Fuse 7.13 on JBoss EAP 7.4.16: Go to the Red Hat Fuse Software Downloads page on the Red Hat Customer Portal. When prompted, log in to your customer account. In the Version dropdown menu, select 7.13.0 . Click the Download link for the Red Hat Fuse 7.13.0 on EAP Installer package. Open a shell prompt (or a command prompt on Windows). Change to the EAP_HOME directory, which is the root directory of the fresh Red Hat JBoss Enterprise Application Platform installation. Run the downloaded installer. In the following example, replace DOWNLOAD_LOCATION with the location of the downloaded Fuse installer on your system: The installer runs without prompts and logs its activity to the screen. Next steps Start JBoss EAP, verify that Fuse is running, and then add Fuse users to JBoss EAP. The following chapters explain how to perform these tasks. Optional but recommended: set up Maven for local use with Fuse projects. This is explained in Chapter 7, Setting up Maven locally . Additional resources For more detailed information about installing JBoss EAP, see the JBoss EAP Installation Guide . For more detailed information about installing JBoss EAP patches, see the JBoss EAP Patching and Upgrading Guide .
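For orientation, the two installer invocations can be combined with the surrounding steps into one short shell session. This is a minimal sketch, not part of the official procedure: DOWNLOAD_LOCATION and EAP_HOME are placeholders for your own paths, and the final standalone.sh call is only an assumed quick check that the installation starts.

# Confirm that a supported Java runtime is on the PATH before installing.
java -version

# Run the JBoss EAP installer (interactive), then install Fuse from EAP_HOME.
java -jar DOWNLOAD_LOCATION/jboss-eap-7.4.0-installer.jar
cd EAP_HOME
java -jar DOWNLOAD_LOCATION/fuse-eap-installer-7.13.0.fuse-7_13_0-00011-redhat-00001.jar

# Start the server in the foreground to verify that it boots with Fuse installed.
./bin/standalone.sh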
[ "java -jar DOWNLOAD_LOCATION/jboss-eap-7.4.0-installer.jar", "java -jar DOWNLOAD_LOCATION/fuse-eap-installer-7.13.0.fuse-7_13_0-00011-redhat-00001.jar" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/installing_on_jboss_eap/installing-fuse-on-jboss-eap
7.291. ypbind
7.291. ypbind 7.291.1. RHBA-2013:0426 - ypbind bug fix update Updated ypbind packages that fix one bug are now available for Red Hat Enterprise Linux 6. The ypbind packages provide the ypbind daemon to bind NIS clients to an NIS domain. The ypbind daemon must be running on any machines that run NIS client programs. Bug Fix BZ# 647495 Prior to this update, ypbind started too late in the boot sequence, which caused problems in some environments, where it needed to be started before netfs. This update changes the priority of the ypbind service. Now, ypbind starts as expected. All users of ypbind are advised to upgrade to these updated packages, which fix this bug.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/ypbind
function::kernel_string_quoted_utf32
function::kernel_string_quoted_utf32 Name function::kernel_string_quoted_utf32 - Quote given UTF-32 kernel string. Synopsis Arguments addr The kernel address to retrieve the string from Description This function combines quoting as per string_quoted and UTF-32 decoding as per kernel_string_utf32 .
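A short SystemTap one-liner can make the calling convention concrete. This is a hedged sketch only: the probe point example_utf32_consumer and the target variable $utf32_ptr are hypothetical placeholders standing in for a real kernel function whose pointer argument addresses UTF-32 data.

# Print a quoted UTF-32 kernel string whose address is passed to a
# (hypothetical) kernel function as the argument utf32_ptr.
stap -e 'probe kernel.function("example_utf32_consumer") {
    printf("value=%s\n", kernel_string_quoted_utf32($utf32_ptr))
}'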
[ "kernel_string_quoted_utf32:string(addr:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-kernel-string-quoted-utf32
Chapter 6. Clustering
Chapter 6. Clustering New SNMP agent to query a Pacemaker cluster The new pcs_snmp_agent agent allows you to query a Pacemaker cluster for data by means of SNMP. This agent provides basic information about a cluster, its nodes, and its resources. For information on configuring this agent, see the pcs_snmp_agent (8) man page and the High Availability Add-On Reference. (BZ#1367808) Support for Red Hat Enterprise Linux High Availability clusters on Amazon Web Services Red Hat Enterprise Linux 7.5 supports High Availability clusters of virtual machines (VMs) on Amazon Web Services (AWS). For information on configuring a Red Hat Enterprise Linux High Availability Cluster on AWS, see https://access.redhat.com/articles/3354781 . (BZ#1451776) Support for Red Hat Enterprise Linux High Availability clusters on Microsoft Azure Red Hat Enterprise Linux 7.5 supports High Availability clusters of virtual machines (VMs) in Microsoft Azure. For information on configuring a Red Hat Enterprise Linux High Availability cluster on Microsoft Azure, see https://access.redhat.com/articles/3252491 . (BZ#1476009) Unfencing is done in resource cleanup only if relevant parameters changed Previously, in a cluster that included a fence device that supports unfencing, such as fence_scsi or fence_mpath , a general resource cleanup or a cleanup of any stonith resource would always result in unfencing, including a restart of all resources. Now, unfencing is only done if the parameters to the device that supports unfencing changed. (BZ#1427648) The pcsd port is now configurable The port on which pcsd is listening can now be changed in the pcsd configuration file, and pcs can now communicate with pcsd using a custom port. This feature is primarily for the use of pcsd inside containers. (BZ# 1415197 ) Fencing and resource agents are now supported by AWS Python libraries and a CLI client With this enhancement, Amazon Web Services Python libraries (python-boto3, python-botocore, and python-s3transfer) and a CLI client (awscli) have been added to support fencing and resource agents in high availability setups. (BZ#1512020) Fencing in HA setups is now supported by Azure Python libraries With this enhancement, Azure Python libraries (python-isodate, python-jwt, python-adal, python-msrest, python-msrestazure, and python-azure-sdk) have been added to support fencing in high availability setups. (BZ#1512021) New features added to the sbd binary. The sbd binary used as a command line tool now provides the following additional features: Easy verification of the functionality of a watchdog device Ability to query a list of available watchdog devices For information on the sbd command line tool, see the sbd (8) man page. (BZ#1462002) sbd rebased to version 1.3.1 The sbd package has been rebased to upstream version 1.3.1. This version brings the following changes: Adds commands to test and query watchdog devices Overhauls the command-line options and configuration file Properly handles off actions instead of reboot (BZ# 1499864 ) Cluster status now shows by default when a resource action is pending Pacemaker supports a record-pending option that previously defaulted to false , meaning that cluster status would only show the current status of a resource (started or stopped). Now, record-pending defaults to true , meaning that cluster status may also show when a resource is in the process of starting or stopping. 
(BZ#1461976) clufter rebased to version 0.77.0 The clufter packages have been upgraded to upstream version 0.77.0, which provides a number of bug fixes, new features, and user experience enhancements over the previous version. Among the notable updates are the following: When using clufter to translate an existing configuration with the pcs2pcscmd-needle command in the case where the corosync.conf equivalent omits the cluster_name option (which is not the case with standard `pcs`-initiated configurations), the contained pcs cluster setup invocation no longer causes cluster misconfiguration with the name of the first given node interpreted as the required cluster name specification. The same invocation will now include the --encryption 0|1 switch when available, in order to reflect the original configuration accurately. In any script-like output sequence such as that produced with the ccs2pcscmd and pcs2pcscmd families of clufter commands, the intended shell interpreter is now emitted in a valid form, so that the respective commented line can be honored by the operating system. (BZ#1381531) The clufter tool now also covers some additional recently added means of configuration as facilitated with pcs (heuristics for a quorum device, meta attributes for top-level bundle resource units) when producing the sequence of configuring pcs commands to reflect existing configurations when applicable. For information on the capabilities of clufter , see the clufter(1) man page or the output of the clufter -h command. For examples of clufter usage, see the following Red Hat Knowledgebase article: https://access.redhat.com/articles/2810031 . (BZ#1509381) Support for Sybase ASE failover The Red Hat High Availability Add-On now provides support for Sybase ASE failover through the ocf:heartbeat:sybaseASE resource. To display the parameters you can configure for this resource, run the pcs resource describe ocf:heartbeat:sybaseASE command. For more information on this agent, see the ocf_heartbeat_sybaseASE (7) man page. (BZ#1436189)
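To illustrate the new sbd command-line features noted above, the following is a minimal sketch. The query-watchdog and test-watchdog subcommands are the ones introduced around sbd 1.3.1 as described here, and /dev/watchdog is an assumed device path; confirm both against the sbd (8) man page on your system.

# List the watchdog devices sbd can detect on this node.
sbd query-watchdog

# Verify that a specific watchdog device actually works.
# WARNING: a working watchdog is expected to reset the machine during this test.
sbd -w /dev/watchdog test-watchdog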
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.5_release_notes/new_features_clustering
Chapter 33. Linux traffic control
Chapter 33. Linux traffic control Linux offers tools for managing and manipulating the transmission of packets. The Linux Traffic Control (TC) subsystem helps in policing, classifying, shaping, and scheduling network traffic. TC also mangles the packet content during classification by using filters and actions. The TC subsystem achieves this by using queuing disciplines ( qdisc ), a fundamental element of the TC architecture. The scheduling mechanism arranges or rearranges the packets before they enter or exit different queues. The most common scheduler is the First-In-First-Out (FIFO) scheduler. You can do the qdiscs operations temporarily using the tc utility or permanently using NetworkManager. In Red Hat Enterprise Linux, you can configure default queueing disciplines in various ways to manage the traffic on a network interface. 33.1. Overview of queuing disciplines Queuing disciplines ( qdiscs ) help with queuing up and, later, scheduling of traffic transmission by a network interface. A qdisc has two operations; enqueue requests so that a packet can be queued up for later transmission and dequeue requests so that one of the queued-up packets can be chosen for immediate transmission. Every qdisc has a 16-bit hexadecimal identification number called a handle , with an attached colon, such as 1: or abcd: . This number is called the qdisc major number. If a qdisc has classes, then the identifiers are formed as a pair of two numbers with the major number before the minor, <major>:<minor> , for example abcd:1 . The numbering scheme for the minor numbers depends on the qdisc type. Sometimes the numbering is systematic, where the first-class has the ID <major>:1 , the second one <major>:2 , and so on. Some qdiscs allow the user to set class minor numbers arbitrarily when creating the class. Classful qdiscs Different types of qdiscs exist and help in the transfer of packets to and from a networking interface. You can configure qdiscs with root, parent, or child classes. The point where children can be attached are called classes. Classes in qdisc are flexible and can always contain either multiple children classes or a single child, qdisc . There is no prohibition against a class containing a classful qdisc itself, this facilitates complex traffic control scenarios. Classful qdiscs do not store any packets themselves. Instead, they enqueue and dequeue requests down to one of their children according to criteria specific to the qdisc . Eventually, this recursive packet passing ends up where the packets are stored (or picked up from in the case of dequeuing). Classless qdiscs Some qdiscs contain no child classes and they are called classless qdiscs . Classless qdiscs require less customization compared to classful qdiscs . It is usually enough to attach them to an interface. Additional resources tc(8) and tc-actions(8) man pages on your system 33.2. Introduction to connection tracking At a firewall, the Netfilter framework filters packets from an external network. After a packet arrives, Netfilter assigns a connection tracking entry. Connection tracking is a Linux kernel networking feature for logical networks that tracks connections and identifies packet flow in those connections. This feature filters and analyzes every packet, sets up the connection tracking table to store connection status, and updates the connection status based on identified packets. 
For example, in the case of FTP connection, Netfilter assigns a connection tracking entry to ensure all packets of FTP connection work in the same manner. The connection tracking entry stores a Netfilter mark and tracks the connection state information in the memory table in which a new packet tuple maps with an existing entry. If the packet tuple does not map with an existing entry, the packet adds a new connection tracking entry that groups packets of the same connection. You can control and analyze traffic on the network interface. The tc traffic controller utility uses the qdisc discipline to configure the packet scheduler in the network. The qdisc kernel-configured queuing discipline enqueues packets to the interface. By using qdisc , Kernel catches all the traffic before a network interface transmits it. Also, to limit the bandwidth rate of packets belonging to the same connection, use the tc qdisc command. To retrieve data from connection tracking marks into various fields, use the tc utility with the ctinfo module and the connmark functionality. For storing packet mark information, the ctinfo module copies the Netfilter mark and the connection state information into a socket buffer ( skb ) mark metadata field. Transmitting a packet over a physical medium removes all the metadata of a packet. Before the packet loses its metadata, the ctinfo module maps and copies the Netfilter mark value to a specific value of the Diffserv code point (DSCP) in the packet's IP field. Additional resources tc(8) and tc-ctinfo(8) man pages on your system 33.3. Inspecting qdiscs of a network interface using the tc utility By default, Red Hat Enterprise Linux systems use fq_codel qdisc . You can inspect the qdisc counters using the tc utility. Procedure Optional: View your current qdisc : Inspect the current qdisc counters: dropped - the number of times a packet is dropped because all queues are full overlimits - the number of times the configured link capacity is filled sent - the number of dequeues 33.4. Updating the default qdisc If you observe networking packet losses with the current qdisc , you can change the qdisc based on your network-requirements. Procedure View the current default qdisc : View the qdisc of current Ethernet connection: Update the existing qdisc : To apply the changes, reload the network driver: Start the network interface: Verification View the qdisc of the Ethernet connection: Additional resources How to set sysctl variables on Red Hat Enterprise Linux (Red Hat Knowledgebase) 33.5. Temporarily setting the current qdisc of a network interface using the tc utility You can update the current qdisc without changing the default one. Procedure Optional: View the current qdisc : Update the current qdisc : Verification View the updated current qdisc : 33.6. Permanently setting the current qdisc of a network interface using NetworkManager You can update the current qdisc value of a NetworkManager connection. Procedure Optional: View the current qdisc : Update the current qdisc : Optional: To add another qdisc over the existing qdisc , use the +tc.qdisc option: Activate the changes: Verification View current qdisc the network interface: Additional resources nm-settings(5) man page on your system 33.7. Configuring the rate limiting of packets by using the tc-ctinfo utility You can limit network traffic and prevent the exhaustion of resources in the network by using rate limiting. 
With rate limiting, you can also reduce the load on servers by limiting repetitive packet requests in a specific time frame. In addition, you can manage bandwidth rate by configuring traffic control in the kernel with the tc-ctinfo utility. The connection tracking entry stores the Netfilter mark and connection information. When a router forwards a packet from the firewall, the router either removes or modifies the connection tracking entry from the packet. The connection tracking information ( ctinfo ) module retrieves data from connection tracking marks into various fields. This kernel module preserves the Netfilter mark by copying it into a socket buffer ( skb ) mark metadata field. Prerequisites The iperf3 utility is installed on a server and a client. Procedure Perform the following steps on the server: Add a virtual link to the network interface: This command has the following parameters: name ifb4eth0 Sets a new virtual device interface. numtxqueues 48 Sets the number of transmit queues. numrxqueues 48 Sets the number of receive queues. type ifb Sets the type of the new device. Change the state of the interface: Add the qdisc attribute on the physical network interface and apply it to the incoming traffic: In the handle ffff: option, the handle parameter assigns the major number ffff: as a default value to a classful qdisc on the enp1s0 physical network interface, where qdisc is a queueing discipline parameter to analyze traffic control. Add a filter on the physical interface of the ip protocol to classify packets: This command has the following attributes: parent ffff: Sets major number ffff: for the parent qdisc . u32 match u32 0 0 Sets the u32 filter to match the IP headers the of u32 pattern. The first 0 represents the second byte of IP header while the other 0 is for the mask match telling the filter which bits to match. action ctinfo Sets action to retrieve data from the connection tracking mark into various fields. cpmark 100 Copies the connection tracking mark (connmark) 100 into the packet IP header field. action mirred egress redirect dev ifb4eth0 Sets action mirred to redirect the received packets to the ifb4eth0 destination interface. Add a classful qdisc to the interface: This command sets the major number 1 to root qdisc and uses the htb hierarchy token bucket with classful qdisc of minor-id 1000 . Limit the traffic on the interface to 1 Mbit/s with an upper limit of 2 Mbit/s: This command has the following parameters: parent 1:1 Sets parent with classid as 1 and root as 1 . classid 1:100 Sets classid as 1:100 where 1 is the number of parent qdisc and 100 is the number of classes of the parent qdisc . htb ceil 2mbit The htb classful qdisc allows upper limit bandwidth of 2 Mbit/s as the ceil rate limit. Apply the Stochastic Fairness Queuing ( sfq ) of classless qdisc to interface with a time interval of 60 seconds to reduce queue algorithm perturbation: Add the firewall mark ( fw ) filter to the interface: Restore the packet meta mark from the connection mark ( CONNMARK ): In this command, the nft utility has a mangle table with the PREROUTING chain rule specification that alters incoming packets before routing to replace the packet mark with CONNMARK . 
If no nft table and chain exist, create a table and add a chain rule: Set the meta mark on tcp packets that are received on the specified destination address 192.0.2.3 : Save the packet mark into the connection mark: Run the iperf3 utility as the server on a system by using the -s parameter and the server then waits for the response of the client connection: On the client, run iperf3 as a client and connect to the server that listens on IP address 192.0.2.3 for periodic HTTP request-response timestamp: 192.0.2.3 is the IP address of the server while 192.0.2.4 is the IP address of the client. Terminate the iperf3 utility on the server by pressing Ctrl + C : Terminate the iperf3 utility on the client by pressing Ctrl + C : Verification Display the statistics about packet counts of the htb and sfq classes on the interface: Display the statistics of packet counts for the mirred and ctinfo actions: Display the statistics of the htb rate-limiter and its configuration: Additional resources tc(8) and tc-ctinfo(8) man page on your system nft(8) man page on your system 33.8. Available qdiscs in RHEL Each qdisc addresses unique networking-related issues. The following is the list of qdiscs available in RHEL. You can use any of the following qdisc to shape network traffic based on your networking requirements. Table 33.1. Available schedulers in RHEL qdisc name Included in Offload support Credit-Based Shaper kernel-modules-extra Yes CHOose and Keep for responsive flows, CHOose and Kill for unresponsive flows (CHOKE) kernel-modules-extra Controlled Delay (CoDel) kernel-core Enhanced Transmission Selection (ETS) kernel-modules-extra Yes Fair Queue (FQ) kernel-core Fair Queuing Controlled Delay (FQ_CODel) kernel-core Generalized Random Early Detection (GRED) kernel-modules-extra Hierarchical Fair Service Curve (HSFC) kernel-core Heavy-Hitter Filter (HHF) kernel-core Hierarchy Token Bucket (HTB) kernel-core Yes INGRESS kernel-core Yes Multi Queue Priority (MQPRIO) kernel-modules-extra Yes Multiqueue (MULTIQ) kernel-modules-extra Yes Network Emulator (NETEM) kernel-modules-extra Proportional Integral-controller Enhanced (PIE) kernel-core PLUG kernel-core Quick Fair Queueing (QFQ) kernel-modules-extra Random Early Detection (RED) kernel-modules-extra Yes Stochastic Fair Blue (SFB) kernel-modules-extra Stochastic Fairness Queueing (SFQ) kernel-core Token Bucket Filter (TBF) kernel-core Yes Trivial Link Equalizer (TEQL) kernel-modules-extra Important The qdisc offload requires hardware and driver support on NIC. Additional resources tc(8) man page on your system
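As a compact illustration of the handle and <major>:<minor> numbering described in the overview, the sketch below builds a small classful hierarchy and then swaps in a classless Token Bucket Filter. It is independent of the ctinfo procedure above; enp1s0 and the rate values are arbitrary examples, not recommendations.

# Classful example: root HTB qdisc with major number 1, one class 1:1,
# and a child fq_codel qdisc (major number 10) attached under that class.
tc qdisc add dev enp1s0 root handle 1: htb default 1
tc class add dev enp1s0 parent 1: classid 1:1 htb rate 10mbit
tc qdisc add dev enp1s0 parent 1:1 handle 10: fq_codel

# Classless example: replace the hierarchy with a single TBF qdisc that
# caps egress traffic at 1 Mbit/s.
tc qdisc replace dev enp1s0 root tbf rate 1mbit burst 32kbit latency 400ms

# Inspect the counters, then remove the custom qdisc again.
tc -s qdisc show dev enp1s0
tc qdisc del dev enp1s0 root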
[ "tc qdisc show dev enp0s1", "tc -s qdisc show dev enp0s1 qdisc fq_codel 0: root refcnt 2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn Sent 1008193 bytes 5559 pkt (dropped 233, overlimits 55 requeues 77) backlog 0b 0p requeues 0", "sysctl -a | grep qdisc net.core.default_qdisc = fq_codel", "tc -s qdisc show dev enp0s1 qdisc fq_codel 0: root refcnt 2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) backlog 0b 0p requeues 0 maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0 new_flows_len 0 old_flows_len 0", "sysctl -w net.core.default_qdisc=pfifo_fast", "modprobe -r NETWORKDRIVERNAME modprobe NETWORKDRIVERNAME", "ip link set enp0s1 up", "tc -s qdisc show dev enp0s1 qdisc pfifo_fast 0: root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1 Sent 373186 bytes 5333 pkt (dropped 0, overlimits 0 requeues 0) backlog 0b 0p requeues 0 .", "tc -s qdisc show dev enp0s1", "tc qdisc replace dev enp0s1 root htb", "tc -s qdisc show dev enp0s1 qdisc htb 8001: root refcnt 2 r2q 10 default 0 direct_packets_stat 0 direct_qlen 1000 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) backlog 0b 0p requeues 0", "tc qdisc show dev enp0s1 qdisc fq_codel 0: root refcnt 2", "nmcli connection modify enp0s1 tc.qdiscs 'root pfifo_fast'", "nmcli connection modify enp0s1 +tc.qdisc 'ingress handle ffff:'", "nmcli connection up enp0s1", "tc qdisc show dev enp0s1 qdisc pfifo_fast 8001: root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1 qdisc ingress ffff: parent ffff:fff1 ----------------", "ip link add name ifb4eth0 numtxqueues 48 numrxqueues 48 type ifb", "ip link set dev ifb4eth0 up", "tc qdisc add dev enp1s0 handle ffff: ingress", "tc filter add dev enp1s0 parent ffff: protocol ip u32 match u32 0 0 action ctinfo cpmark 100 action mirred egress redirect dev ifb4eth0", "tc qdisc add dev ifb4eth0 root handle 1: htb default 1000", "tc class add dev ifb4eth0 parent 1:1 classid 1:100 htb ceil 2mbit rate 1mbit prio 100", "tc qdisc add dev ifb4eth0 parent 1:100 sfq perturb 60", "tc filter add dev ifb4eth0 parent 1:0 protocol ip prio 100 handle 100 fw classid 1:100", "nft add rule ip mangle PREROUTING counter meta mark set ct mark", "nft add table ip mangle nft add chain ip mangle PREROUTING {type filter hook prerouting priority mangle \\;}", "nft add rule ip mangle PREROUTING ip daddr 192.0.2.3 counter meta mark set 0x64", "nft add rule ip mangle PREROUTING counter ct mark set mark", "iperf3 -s", "iperf3 -c 192.0.2.3 -t TCP_STREAM | tee rate", "Accepted connection from 192.0.2.4, port 52128 [5] local 192.0.2.3 port 5201 connected to 192.0.2.4 port 52130 [ID] Interval Transfer Bitrate [5] 0.00-1.00 sec 119 KBytes 973 Kbits/sec [5] 1.00-2.00 sec 116 KBytes 950 Kbits/sec [ID] Interval Transfer Bitrate [5] 0.00-14.81 sec 1.51 MBytes 853 Kbits/sec receiver iperf3: interrupt - the server has terminated", "Connecting to host 192.0.2.3, port 5201 [5] local 192.0.2.4 port 52130 connected to 192.0.2.3 port 5201 [ID] Interval Transfer Bitrate Retr Cwnd [5] 0.00-1.00 sec 481 KBytes 3.94 Mbits/sec 0 76.4 KBytes [5] 1.00-2.00 sec 223 KBytes 1.83 Mbits/sec 0 82.0 KBytes [ID] Interval Transfer Bitrate Retr [5] 0.00-14.00 sec 3.92 MBytes 2.35 Mbits/sec 32 sender [5] 0.00-14.00 sec 0.00 Bytes 0.00 bits/sec receiver iperf3: error - the server has terminated", "tc -s qdisc show dev ifb4eth0 qdisc htb 1: root Sent 26611455 bytes 3054 pkt (dropped 76, overlimits 4887 requeues 0) qdisc 
sfq 8001: parent Sent 26535030 bytes 2296 pkt (dropped 76, overlimits 0 requeues 0)", "tc -s filter show dev enp1s0 ingress filter parent ffff: protocol ip pref 49152 u32 chain 0 filter parent ffff: protocol ip pref 49152 u32 chain 0 fh 800: ht divisor 1 filter parent ffff: protocol ip pref 49152 u32 chain 0 fh 800::800 order 2048 key ht 800 bkt 0 terminal flowid not_in_hw (rule hit 8075 success 8075) match 00000000/00000000 at 0 (success 8075 ) action order 1: ctinfo zone 0 pipe index 1 ref 1 bind 1 cpmark 0x00000064 installed 3105 sec firstused 3105 sec DSCP set 0 error 0 CPMARK set 7712 Action statistics: Sent 25891504 bytes 3137 pkt (dropped 0, overlimits 0 requeues 0) backlog 0b 0p requeues 0 action order 2: mirred (Egress Redirect to device ifb4eth0) stolen index 1 ref 1 bind 1 installed 3105 sec firstused 3105 sec Action statistics: Sent 25891504 bytes 3137 pkt (dropped 0, overlimits 61 requeues 0) backlog 0b 0p requeues 0", "tc -s class show dev ifb4eth0 class htb 1:100 root leaf 8001: prio 7 rate 1Mbit ceil 2Mbit burst 1600b cburst 1600b Sent 26541716 bytes 2373 pkt (dropped 61, overlimits 4887 requeues 0) backlog 0b 0p requeues 0 lended: 7248 borrowed: 0 giants: 0 tokens: 187250 ctokens: 93625" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_networking/linux-traffic-control_configuring-and-managing-networking
Chapter 4. Dynamically provisioned OpenShift Data Foundation deployed on Google cloud
Chapter 4. Dynamically provisioned OpenShift Data Foundation deployed on Google cloud 4.1. Replacing operational or failed storage devices on Google Cloud installer-provisioned infrastructure When you need to replace a device in a dynamically created storage cluster on a Google Cloud installer-provisioned infrastructure, you must replace the storage node. For information about how to replace nodes, see: Replacing operational nodes on Google Cloud installer-provisioned infrastructure Replacing failed nodes on Google Cloud installer-provisioned infrastructures.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/replacing_devices/dynamically_provisioned_openshift_data_foundation_deployed_on_google_cloud
7.3.2.2.5. Optimising for sustained metadata modifications
7.3.2.2.5. Optimising for sustained metadata modifications The size of the log is the main factor in determining the achievable level of sustained metadata modification. The log device is circular, so before the tail can be overwritten all the modifications in the log must be written to the real locations on disk. This can involve a significant amount of seeking to write back all dirty metadata. The default configuration scales the log size in relation to the overall file system size, so in most cases log size will not require tuning. A small log device will result in very frequent metadata writeback - the log will constantly be pushing on its tail to free up space and so frequently modified metadata will be frequently written to disk, causing operations to be slow. Increasing the log size increases the time period between tail pushing events. This allows better aggregation of dirty metadata, resulting in better metadata writeback patterns, and less writeback of frequently modified metadata. The trade-off is that larger logs require more memory to track all outstanding changes in memory. If you have a machine with limited memory, then large logs are not beneficial because memory constraints will cause metadata writeback long before the benefits of a large log can be realised. In these cases, smaller rather than larger logs will often provide better performance because metadata writeback from the log running out of space is more efficient than writeback driven by memory reclamation. You should always try to align the log to the underlying stripe unit of the device that contains the file system. mkfs does this by default for MD and DM devices, but for hardware RAID it may need to be specified. Setting this correctly avoids all possibility of log I/O causing unaligned I/O and subsequent read-modify-write operations when writing modifications to disk. Log operation can be further improved by editing mount options. Increasing the size of the in-memory log buffers ( logbsize ) increases the speed at which changes can be written to the log. The default log buffer size is MAX (32 KB, log stripe unit), and the maximum size is 256 KB. In general, a larger value results in faster performance. However, under fsync-heavy workloads, small log buffers can be noticeably faster than large buffers with a large stripe unit alignment. The delaylog mount option also improves sustained metadata modification performance by reducing the number of changes to the log. It achieves this by aggregating individual changes in memory before writing them to the log: frequently modified metadata is written to the log periodically instead of on every modification. This option increases the memory usage of tracking dirty metadata and increases the potential lost operations when a crash occurs, but can improve metadata modification speed and scalability by an order of magnitude or more. Use of this option does not reduce data or metadata integrity when fsync , fdatasync or sync are used to ensure data and metadata is written to disk.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/performance_tuning_guide/ch07s03s02s02s05
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.19/html/release_notes_and_known_issues/making-open-source-more-inclusive
Chapter 13. ImageContentSourcePolicy [operator.openshift.io/v1alpha1]
Chapter 13. ImageContentSourcePolicy [operator.openshift.io/v1alpha1] Description ImageContentSourcePolicy holds cluster-wide information about how to handle registry mirror rules. When multiple policies are defined, the outcome of the behavior is defined on each field. Compatibility level 4: No compatibility is provided, the API can change at any point for any reason. These capabilities should not be used by applications needing long term support. Type object Required spec 13.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration 13.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description repositoryDigestMirrors array repositoryDigestMirrors allows images referenced by image digests in pods to be pulled from alternative mirrored repository locations. The image pull specification provided to the pod will be compared to the source locations described in RepositoryDigestMirrors and the image may be pulled down from any of the mirrors in the list instead of the specified repository allowing administrators to choose a potentially faster mirror. Only image pull specifications that have an image digest will have this behavior applied to them - tags will continue to be pulled from the specified repository in the pull spec. Each "source" repository is treated independently; configurations for different "source" repositories don't interact. When multiple policies are defined for the same "source" repository, the sets of defined mirrors will be merged together, preserving the relative order of the mirrors, if possible. For example, if policy A has mirrors a, b, c and policy B has mirrors c, d, e , the mirrors will be used in the order a, b, c, d, e . If the orders of mirror entries conflict (e.g. a, b vs. b, a ) the configuration is not rejected but the resulting order is unspecified. repositoryDigestMirrors[] object RepositoryDigestMirrors holds cluster-wide information about how to handle mirros in the registries config. Note: the mirrors only work when pulling the images that are referenced by their digests. 13.1.2. .spec.repositoryDigestMirrors Description repositoryDigestMirrors allows images referenced by image digests in pods to be pulled from alternative mirrored repository locations. The image pull specification provided to the pod will be compared to the source locations described in RepositoryDigestMirrors and the image may be pulled down from any of the mirrors in the list instead of the specified repository allowing administrators to choose a potentially faster mirror. 
Only image pull specifications that have an image digest will have this behavior applied to them - tags will continue to be pulled from the specified repository in the pull spec. Each "source" repository is treated independently; configurations for different "source" repositories don't interact. When multiple policies are defined for the same "source" repository, the sets of defined mirrors will be merged together, preserving the relative order of the mirrors, if possible. For example, if policy A has mirrors a, b, c and policy B has mirrors c, d, e , the mirrors will be used in the order a, b, c, d, e . If the orders of mirror entries conflict (e.g. a, b vs. b, a ) the configuration is not rejected but the resulting order is unspecified. Type array 13.1.3. .spec.repositoryDigestMirrors[] Description RepositoryDigestMirrors holds cluster-wide information about how to handle mirros in the registries config. Note: the mirrors only work when pulling the images that are referenced by their digests. Type object Required source Property Type Description mirrors array (string) mirrors is one or more repositories that may also contain the same images. The order of mirrors in this list is treated as the user's desired priority, while source is by default considered lower priority than all mirrors. Other cluster configuration, including (but not limited to) other repositoryDigestMirrors objects, may impact the exact order mirrors are contacted in, or some mirrors may be contacted in parallel, so this should be considered a preference rather than a guarantee of ordering. source string source is the repository that users refer to, e.g. in image pull specifications. 13.2. API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1alpha1/imagecontentsourcepolicies DELETE : delete collection of ImageContentSourcePolicy GET : list objects of kind ImageContentSourcePolicy POST : create an ImageContentSourcePolicy /apis/operator.openshift.io/v1alpha1/imagecontentsourcepolicies/{name} DELETE : delete an ImageContentSourcePolicy GET : read the specified ImageContentSourcePolicy PATCH : partially update the specified ImageContentSourcePolicy PUT : replace the specified ImageContentSourcePolicy /apis/operator.openshift.io/v1alpha1/imagecontentsourcepolicies/{name}/status GET : read status of the specified ImageContentSourcePolicy PATCH : partially update status of the specified ImageContentSourcePolicy PUT : replace status of the specified ImageContentSourcePolicy 13.2.1. /apis/operator.openshift.io/v1alpha1/imagecontentsourcepolicies Table 13.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of ImageContentSourcePolicy Table 13.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. 
The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 13.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ImageContentSourcePolicy Table 13.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. 
If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 13.5. 
HTTP responses HTTP code Reponse body 200 - OK ImageContentSourcePolicyList schema 401 - Unauthorized Empty HTTP method POST Description create an ImageContentSourcePolicy Table 13.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.7. Body parameters Parameter Type Description body ImageContentSourcePolicy schema Table 13.8. HTTP responses HTTP code Reponse body 200 - OK ImageContentSourcePolicy schema 201 - Created ImageContentSourcePolicy schema 202 - Accepted ImageContentSourcePolicy schema 401 - Unauthorized Empty 13.2.2. /apis/operator.openshift.io/v1alpha1/imagecontentsourcepolicies/{name} Table 13.9. Global path parameters Parameter Type Description name string name of the ImageContentSourcePolicy Table 13.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an ImageContentSourcePolicy Table 13.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. 
The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 13.12. Body parameters Parameter Type Description body DeleteOptions schema Table 13.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ImageContentSourcePolicy Table 13.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 13.15. HTTP responses HTTP code Reponse body 200 - OK ImageContentSourcePolicy schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ImageContentSourcePolicy Table 13.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 13.17. Body parameters Parameter Type Description body Patch schema Table 13.18. HTTP responses HTTP code Reponse body 200 - OK ImageContentSourcePolicy schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ImageContentSourcePolicy Table 13.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.20. Body parameters Parameter Type Description body ImageContentSourcePolicy schema Table 13.21. HTTP responses HTTP code Reponse body 200 - OK ImageContentSourcePolicy schema 201 - Created ImageContentSourcePolicy schema 401 - Unauthorized Empty 13.2.3. /apis/operator.openshift.io/v1alpha1/imagecontentsourcepolicies/{name}/status Table 13.22. Global path parameters Parameter Type Description name string name of the ImageContentSourcePolicy Table 13.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified ImageContentSourcePolicy Table 13.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 13.25. HTTP responses HTTP code Reponse body 200 - OK ImageContentSourcePolicy schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ImageContentSourcePolicy Table 13.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. 
This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 13.27. Body parameters Parameter Type Description body Patch schema Table 13.28. HTTP responses HTTP code Reponse body 200 - OK ImageContentSourcePolicy schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ImageContentSourcePolicy Table 13.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.30. Body parameters Parameter Type Description body ImageContentSourcePolicy schema Table 13.31. HTTP responses HTTP code Reponse body 200 - OK ImageContentSourcePolicy schema 201 - Created ImageContentSourcePolicy schema 401 - Unauthorized Empty
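As a hedged illustration of the list endpoint described above, the following standalone Java snippet issues a GET against /apis/operator.openshift.io/v1alpha1/imagecontentsourcepolicies. The API server URL and the bearer token are placeholders you would substitute for your own cluster, and the JVM is assumed to already trust the cluster's CA certificate:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ListImageContentSourcePolicies {
        public static void main(String[] args) throws Exception {
            String apiServer = "https://api.example.openshift.local:6443"; // placeholder cluster URL
            String token = System.getenv("OPENSHIFT_TOKEN");               // placeholder bearer token

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(apiServer
                            + "/apis/operator.openshift.io/v1alpha1/imagecontentsourcepolicies"))
                    .header("Authorization", "Bearer " + token)
                    .header("Accept", "application/json")
                    .GET()
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());

            // Per the tables above, 200 returns an ImageContentSourcePolicyList
            // and 401 indicates the request was not authorized.
            System.out.println(response.statusCode());
            System.out.println(response.body());
        }
    }

A 200 response carries an ImageContentSourcePolicyList, while a 401 response indicates the token was not accepted, matching the HTTP response tables above.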
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/operator_apis/imagecontentsourcepolicy-operator-openshift-io-v1alpha1
Preface
Preface As an OpenShift cluster administrator, you can manage the following Red Hat OpenShift AI resources: Users and groups The dashboard interface, including the visibility of navigation menu options Applications that show in the dashboard Custom deployment resources that are related to the Red Hat OpenShift AI Operator, for example, CPU and memory limits and requests Accelerators Distributed workloads Data backup
null
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/managing_openshift_ai/pr01
Chapter 2. Basic Principles of Route Building
Chapter 2. Basic Principles of Route Building Abstract Apache Camel provides several processors and components that you can link together in a route. This chapter provides a basic orientation by explaining the principles of building a route using the provided building blocks. 2.1. Pipeline Processing Overview In Apache Camel, pipelining is the dominant paradigm for connecting nodes in a route definition. The pipeline concept is probably most familiar to users of the UNIX operating system, where it is used to join operating system commands. For example, ls | more is an example of a command that pipes a directory listing, ls , to the page-scrolling utility, more . The basic idea of a pipeline is that the output of one command is fed into the input of the next. The natural analogy in the case of a route is for the Out message from one processor to be copied to the In message of the next processor. Processor nodes Every node in a route, except for the initial endpoint, is a processor , in the sense that they inherit from the org.apache.camel.Processor interface. In other words, processors make up the basic building blocks of a DSL route. For example, DSL commands such as filter() , delayer() , setBody() , setHeader() , and to() all represent processors. When considering how processors connect together to build up a route, it is important to distinguish two different processing approaches. The first approach is where the processor simply modifies the exchange's In message, as shown in Figure 2.1, "Processor Modifying an In Message" . The exchange's Out message remains null in this case. Figure 2.1. Processor Modifying an In Message The following route shows a setHeader() command that modifies the current In message by adding (or modifying) the BillingSystem header: The second approach is where the processor creates an Out message to represent the result of the processing, as shown in Figure 2.2, "Processor Creating an Out Message" . Figure 2.2. Processor Creating an Out Message The following route shows a transform() command that creates an Out message with a message body containing the string, DummyBody : where constant("DummyBody") represents a constant expression. You cannot pass the string, DummyBody , directly, because the argument to transform() must be an expression type. Pipeline for InOnly exchanges Figure 2.3, "Sample Pipeline for InOnly Exchanges" shows an example of a processor pipeline for InOnly exchanges. Processor A acts by modifying the In message, while processors B and C create an Out message. The route builder links the processors together as shown. In particular, processors B and C are linked together in the form of a pipeline : that is, processor B's Out message is moved to the In message before feeding the exchange into processor C, and processor C's Out message is moved to the In message before feeding the exchange into the producer endpoint. Thus the processors' outputs and inputs are joined into a continuous pipeline, as shown in Figure 2.3, "Sample Pipeline for InOnly Exchanges" . Figure 2.3. Sample Pipeline for InOnly Exchanges Apache Camel employs the pipeline pattern by default, so you do not need to use any special syntax to create a pipeline in your routes.
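The following is a minimal sketch of such an implicit pipeline in the Java DSL, combining the setHeader() and transform() steps described above; the endpoint URIs and the header value are illustrative placeholders, not part of the original example:

    import org.apache.camel.builder.RouteBuilder;

    public class PipelineRouteBuilder extends RouteBuilder {
        @Override
        public void configure() throws Exception {
            // Pipelining is the default: each node's output feeds the next
            // node's input, with no special pipeline() syntax required.
            from("activemq:incomingOrders")                       // illustrative consumer endpoint
                .setHeader("BillingSystem", constant("Standard")) // modifies the In message
                .transform(constant("DummyBody"))                 // creates an Out message
                .to("activemq:processedOrders");                  // illustrative producer endpoint
        }
    }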
For example, the following route pulls messages from a userdataQueue queue, pipes the message through a Velocity template (to produce a customer address in text format), and then sends the resulting text address to the queue, envelopeAddresses : Where the Velocity endpoint, velocity:file:AddressTemplate.vm , specifies the location of a Velocity template file, file:AddressTemplate.vm , in the file system. The to() command changes the exchange pattern to InOut before sending the exchange to the Velocity endpoint and then changes it back to InOnly afterwards. For more details of the Velocity endpoint, see Velocity in the Apache Camel Component Reference Guide . Pipeline for InOut exchanges Figure 2.4, "Sample Pipeline for InOut Exchanges" shows an example of a processor pipeline for InOut exchanges, which you typically use to support remote procedure call (RPC) semantics. Processors A, B, and C are linked together in the form of a pipeline, with the output of each processor being fed into the input of the next. The final Out message produced by the producer endpoint is sent all the way back to the consumer endpoint, where it provides the reply to the original request. Figure 2.4. Sample Pipeline for InOut Exchanges Note that in order to support the InOut exchange pattern, it is essential that the last node in the route (whether it is a producer endpoint or some other kind of processor) creates an Out message. Otherwise, any client that connects to the consumer endpoint would hang and wait indefinitely for a reply message. You should be aware that not all producer endpoints create Out messages. Consider the following route that processes payment requests, by processing incoming HTTP requests: Where the incoming payment request is processed by passing it through a pipeline of Web services, cxf:bean:addAccountDetails , cxf:bean:getCreditRating , and cxf:bean:processTransaction . The final Web service, processTransaction , generates a response ( Out message) that is sent back through the JETTY endpoint. When the pipeline consists of just a sequence of endpoints, it is also possible to use the following alternative syntax: Pipeline for InOptionalOut exchanges The pipeline for InOptionalOut exchanges is essentially the same as the pipeline in Figure 2.4, "Sample Pipeline for InOut Exchanges" . The difference between InOut and InOptionalOut is that an exchange with the InOptionalOut exchange pattern is allowed to have a null Out message as a reply. That is, in the case of an InOptionalOut exchange, a null Out message is copied to the In message of the next node in the pipeline. By contrast, in the case of an InOut exchange, a null Out message is discarded and the original In message from the current node would be copied to the In message of the next node instead. 2.2. Multiple Inputs Overview A standard route takes its input from just a single endpoint, using the from( EndpointURL ) syntax in the Java DSL. But what if you need to define multiple inputs for your route? Apache Camel provides several alternatives for specifying multiple inputs to a route. The approach to take depends on whether you want the exchanges to be processed independently of each other or whether you want the exchanges from different inputs to be combined in some way (in which case, you should use the section called "Content enricher pattern" ).
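The following subsections describe these alternatives in detail. As a minimal sketch of the independent-inputs case, assuming two illustrative ActiveMQ queues and file directories, a RouteBuilder might look like this:

    import org.apache.camel.builder.RouteBuilder;

    public class IndependentInputsRouteBuilder extends RouteBuilder {
        @Override
        public void configure() throws Exception {
            // Two separate routes: exchanges from each input endpoint are
            // processed independently of each other and in separate threads.
            from("activemq:ordersFromWeb").to("file:target/webOrders");
            from("activemq:ordersFromBatch").to("file:target/batchOrders");
        }
    }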
Multiple independent inputs The simplest way to specify multiple inputs is using the multi-argument form of the from() DSL command, for example: Or you can use the following equivalent syntax: In both of these examples, exchanges from each of the input endpoints, URI1 , URI2 , and URI3 , are processed independently of each other and in separate threads. In fact, you can think of the preceding route as being equivalent to the following three separate routes: Segmented routes For example, you might want to merge incoming messages from two different messaging systems and process them using the same route. In most cases, you can deal with multiple inputs by dividing your route into segments, as shown in Figure 2.5, "Processing Multiple Inputs with Segmented Routes" . Figure 2.5. Processing Multiple Inputs with Segmented Routes The initial segments of the route take their inputs from some external queues - for example, activemq:Nyse and activemq:Nasdaq - and send the incoming exchanges to an internal endpoint, InternalUrl . The second route segment merges the incoming exchanges, taking them from the internal endpoint and sending them to the destination queue, activemq:USTxn . The InternalUrl is the URL for an endpoint that is intended only for use within a router application. The following types of endpoints are suitable for internal use: Direct endpoints SEDA endpoints VM endpoints The main purpose of these endpoints is to enable you to glue together different segments of a route. They all provide an effective way of merging multiple inputs into a single route. Direct endpoints The direct component provides the simplest mechanism for linking together routes. The event model for the direct component is synchronous , so that subsequent segments of the route run in the same thread as the first segment. The general format of a direct URL is direct: EndpointID , where the endpoint ID, EndpointID , is simply a unique alphanumeric string that identifies the endpoint instance. For example, if you want to take the input from two message queues, activemq:Nyse and activemq:Nasdaq , and merge them into a single message queue, activemq:USTxn , you can do this by defining the following set of routes: Where the first two routes take the input from the message queues, Nyse and Nasdaq , and send them to the endpoint, direct:mergeTxns . The last route combines the inputs from the two queues and sends the combined message stream to the activemq:USTxn queue. The implementation of the direct endpoint behaves as follows: whenever an exchange arrives at a producer endpoint (for example, to("direct:mergeTxns") ), the direct endpoint passes the exchange directly to all of the consumer endpoints that have the same endpoint ID (for example, from("direct:mergeTxns") ). Direct endpoints can only be used to communicate between routes that belong to the same CamelContext in the same Java virtual machine (JVM) instance. SEDA endpoints The SEDA component provides an alternative mechanism for linking together routes. You can use it in a similar way to the direct component, but it has a different underlying event and threading model, as follows: Processing of a SEDA endpoint is not synchronous. That is, when you send an exchange to a SEDA producer endpoint, control immediately returns to the preceding processor in the route. SEDA endpoints contain a queue buffer (of java.util.concurrent.BlockingQueue type), which stores all of the incoming exchanges prior to processing by the route segment.
Each SEDA consumer endpoint creates a thread pool (the default size is 5) to process exchange objects from the blocking queue. The SEDA component supports the competing consumers pattern, which guarantees that each incoming exchange is processed only once, even if there are multiple consumers attached to a specific endpoint. One of the main advantages of using a SEDA endpoint is that the routes can be more responsive, owing to the built-in consumer thread pool. The stock transactions example can be re-written to use SEDA endpoints instead of direct endpoints, as follows: The main difference between this example and the direct example is that when using SEDA, the second route segment (from seda:mergeTxns to activemq:USTxn ) is processed by a pool of five threads. Note There is more to SEDA than simply pasting together route segments. The staged event-driven architecture (SEDA) encompasses a design philosophy for building more manageable multi-threaded applications. The purpose of the SEDA component in Apache Camel is simply to enable you to apply this design philosophy to your applications. For more details about SEDA, see http://www.eecs.harvard.edu/~mdw/proj/seda/ . VM endpoints The VM component is very similar to the SEDA endpoint. The only difference is that, whereas the SEDA component is limited to linking together route segments from within the same CamelContext , the VM component enables you to link together routes from distinct Apache Camel applications, as long as they are running within the same Java virtual machine. The stock transactions example can be re-written to use VM endpoints instead of SEDA endpoints, as follows: And in a separate router application (running in the same Java VM), you can define the second segment of the route as follows: Content enricher pattern The content enricher pattern defines a fundamentally different way of dealing with multiple inputs to a route. When an exchange enters the enricher processor, the enricher contacts an external resource to retrieve information, which is then added to the original message. In this pattern, the external resource effectively represents a second input to the message. For example, suppose you are writing an application that processes credit requests. Before processing a credit request, you need to augment it with the data that assigns a credit rating to the customer, where the ratings data is stored in a file in the directory, src/data/ratings . You can combine the incoming credit request with data from the ratings file using the pollEnrich() pattern and a GroupedExchangeAggregationStrategy aggregation strategy, as follows: Where the GroupedExchangeAggregationStrategy class is a standard aggregation strategy from the org.apache.camel.processor.aggregate package that adds each new exchange to a java.util.List instance and stores the resulting list in the Exchange.GROUPED_EXCHANGE exchange property. In this case, the list contains two elements: the original exchange (from the creditRequests JMS queue); and the enricher exchange (from the file endpoint). To access the grouped exchange, you can use code like the following: An alternative approach to this application would be to put the merge code directly into the implementation of the custom aggregation strategy class. For more details about the content enricher pattern, see Section 10.1, "Content Enricher" .
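As a hedged sketch of the enricher route just described, assuming illustrative endpoint options and an illustrative merge bean, since the original listing is not reproduced here:

    import org.apache.camel.builder.RouteBuilder;
    import org.apache.camel.processor.aggregate.GroupedExchangeAggregationStrategy;

    public class CreditRequestRouteBuilder extends RouteBuilder {
        @Override
        public void configure() throws Exception {
            // Combine each incoming credit request with the ratings data read
            // from src/data/ratings, grouping both exchanges into a list that
            // is stored in the Exchange.GROUPED_EXCHANGE property.
            from("jms:queue:creditRequests")
                .pollEnrich("file:src/data/ratings?noop=true",
                            new GroupedExchangeAggregationStrategy())
                .to("bean:creditRequestMerger");   // illustrative merge step
        }
    }

A downstream processor can then retrieve the list with exchange.getProperty(Exchange.GROUPED_EXCHANGE, List.class), as outlined above.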
2.3. Exception Handling Abstract Apache Camel provides several different mechanisms, which let you handle exceptions at different levels of granularity: you can handle exceptions within a route using doTry , doCatch , and doFinally ; or you can specify what action to take for each exception type and apply this rule to all routes in a RouteBuilder using onException ; or you can specify what action to take for all exception types and apply this rule to all routes in a RouteBuilder using errorHandler . For more details about exception handling, see Section 6.3, "Dead Letter Channel" . 2.3.1. onException Clause Overview The onException clause is a powerful mechanism for trapping exceptions that occur in one or more routes: it is type-specific, enabling you to define distinct actions to handle different exception types; it allows you to define actions using essentially the same (actually, slightly extended) syntax as a route, giving you considerable flexibility in the way you handle exceptions; and it is based on a trapping model, which enables a single onException clause to deal with exceptions occurring at any node in any route. Trapping exceptions using onException The onException clause is a mechanism for trapping , rather than catching, exceptions. That is, once you define an onException clause, it traps exceptions that occur at any point in a route. This contrasts with the Java try/catch mechanism, where an exception is caught only if a particular code fragment is explicitly enclosed in a try block. What really happens when you define an onException clause is that the Apache Camel runtime implicitly encloses each route node in a try block. This is why the onException clause is able to trap exceptions at any point in the route. But this wrapping is done for you automatically; it is not visible in the route definitions. Java DSL example In the following Java DSL example, the onException clause applies to all of the routes defined in the RouteBuilder class. If a ValidationException exception occurs while processing either of the routes ( from("seda:inputA") or from("seda:inputB") ), the onException clause traps the exception and redirects the current exchange to the validationFailed JMS queue (which serves as a deadletter queue). XML DSL example The preceding example can also be expressed in the XML DSL, using the onException element to define the exception clause, as follows: Trapping multiple exceptions You can define multiple onException clauses to trap exceptions in a RouteBuilder scope. This enables you to take different actions in response to different exceptions. For example, the following series of onException clauses defined in the Java DSL define different deadletter destinations for ValidationException , IOException , and Exception : You can define the same series of onException clauses in the XML DSL as follows: You can also group multiple exceptions together to be trapped by the same onException clause. In the Java DSL, you can group multiple exceptions as follows: In the XML DSL, you can group multiple exceptions together by defining more than one exception element inside the onException element, as follows: When trapping multiple exceptions, the order of the onException clauses is significant. Apache Camel initially attempts to match the thrown exception against the first clause. If the first clause fails to match, the next onException clause is tried, and so on until a match is found.
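A hedged sketch of this kind of ordered onException clauses, with the deadletter queue names and routes chosen for illustration because the original listings are not reproduced here:

    import java.io.IOException;
    import org.apache.camel.ValidationException;
    import org.apache.camel.builder.RouteBuilder;

    public class TrapExceptionsRouteBuilder extends RouteBuilder {
        @Override
        public void configure() throws Exception {
            // Most specific exception types first; the clauses apply to
            // every route defined in this RouteBuilder.
            onException(ValidationException.class).to("activemq:validationFailed");
            onException(IOException.class).to("activemq:ioFailed");
            onException(Exception.class).to("activemq:deadLetter");

            from("seda:inputA").to("activemq:outputA");
            from("seda:inputB").to("activemq:outputB");
        }
    }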
Each matching attempt is governed by the following algorithm: If the thrown exception is a chained exception (that is, where an exception has been caught and rethrown as a different exception), the most nested exception type serves initially as the basis for matching. This exception is tested as follows: If the exception-to-test has exactly the type specified in the onException clause (tested using instanceof ), a match is triggered. If the exception-to-test is a sub-type of the type specified in the onException clause, a match is triggered. If the most nested exception fails to yield a match, the next exception in the chain (the wrapping exception) is tested instead. The testing continues up the chain until either a match is triggered or the chain is exhausted. Note The throwException EIP enables you to create a new exception instance from a Simple language expression. You can make it dynamic, based on the available information from the current exchange, for example: Deadletter channel The basic examples of onException usage have so far all exploited the deadletter channel pattern. That is, when an onException clause traps an exception, the current exchange is routed to a special destination (the deadletter channel). The deadletter channel serves as a holding area for failed messages that have not been processed. An administrator can inspect the messages at a later time and decide what action needs to be taken. For more details about the deadletter channel pattern, see Section 6.3, "Dead Letter Channel" . Use original message By the time an exception is raised in the middle of a route, the message in the exchange could have been modified considerably (and might not even be readable by a human). Often, it is easier for an administrator to decide what corrective actions to take, if the messages visible in the deadletter queue are the original messages, as received at the start of the route. The useOriginalMessage option is false by default, but will be auto-enabled if it is configured on an error handler. Note The useOriginalMessage option can result in unexpected behavior when applied to Camel routes that send messages to multiple endpoints, or split messages into parts. The original message might not be preserved in a Multicast, Splitter, or RecipientList route in which intermediate processing steps modify the original message. In the Java DSL, you can replace the message in the exchange by the original message. Set the setAllowUseOriginalMessage() option to true , then use the useOriginalMessage() DSL command, as follows: In the XML DSL, you can retrieve the original message by setting the useOriginalMessage attribute on the onException element, as follows: Note If the setAllowUseOriginalMessage() option is set to true , Camel makes a copy of the original message at the start of the route, which ensures that the original message is available when you call useOriginalMessage() . However, if the setAllowUseOriginalMessage() option is set to false (this is the default) on the Camel context, the original message will not be accessible and you cannot call useOriginalMessage() . A reason to exploit the default behaviour is to optimize performance when processing large messages. In Camel versions prior to 2.18, the default setting of allowUseOriginalMessage is true.
Redelivery policy Instead of interrupting the processing of a message and giving up as soon as an exception is raised, Apache Camel gives you the option of attempting to redeliver the message at the point where the exception occurred. In networked systems, where timeouts can occur and temporary faults arise, it is often possible for failed messages to be processed successfully, if they are redelivered shortly after the original exception was raised. Apache Camel supports various strategies for redelivering messages after an exception occurs. Some of the most important options for configuring redelivery are as follows: maximumRedeliveries() Specifies the maximum number of times redelivery can be attempted (default is 0 ). A negative value means redelivery is always attempted (equivalent to an infinite value). retryWhile() Specifies a predicate (of Predicate type), which determines whether Apache Camel ought to continue redelivering. If the predicate evaluates to true on the current exchange, redelivery is attempted; otherwise, redelivery is stopped and no further redelivery attempts are made. This option takes precedence over the maximumRedeliveries() option. In the Java DSL, redelivery policy options are specified using DSL commands in the onException clause. For example, you can specify a maximum of six redeliveries, after which the exchange is sent to the validationFailed deadletter queue, as follows: In the XML DSL, redelivery policy options are specified by setting attributes on the redeliveryPolicy element. For example, the preceding route can be expressed in XML DSL as follows: The latter part of the route - after the redelivery options are set - is not processed until after the last redelivery attempt has failed. For detailed descriptions of all the redelivery options, see Section 6.3, "Dead Letter Channel" . Alternatively, you can specify redelivery policy options in a redeliveryPolicyProfile instance. You can then reference the redeliveryPolicyProfile instance using the onException element's redeliveryPolicyRef attribute. For example, the preceding route can be expressed as follows: Note The approach using redeliveryPolicyProfile is useful, if you want to re-use the same redelivery policy in multiple onException clauses. Conditional trapping Exception trapping with onException can be made conditional by specifying the onWhen option. If you specify the onWhen option in an onException clause, a match is triggered only when the thrown exception matches the clause and the onWhen predicate evaluates to true on the current exchange. For example, in the following Java DSL fragment, the first onException clause triggers only if the thrown exception matches MyUserException and the user header is non-null in the current exchange: The preceding onException clauses can be expressed in the XML DSL as follows: Handling exceptions By default, when an exception is raised in the middle of a route, processing of the current exchange is interrupted and the thrown exception is propagated back to the consumer endpoint at the start of the route. When an onException clause is triggered, the behavior is essentially the same, except that the onException clause performs some processing before the thrown exception is propagated back. But this default behavior is not the only way to handle an exception. The onException clause provides various options to modify the exception handling behavior, as follows: Suppressing exception rethrow - you have the option of suppressing the rethrown exception after the onException clause has completed. In other words, in this case the exception does not propagate back to the consumer endpoint at the start of the route.
Continuing processing - you have the option of resuming normal processing of the exchange from the point where the exception originally occurred. Implicitly, this approach also suppresses the rethrown exception. Sending a response - in the special case where the consumer endpoint at the start of the route expects a reply (that is, having an InOut MEP), you might prefer to construct a custom fault reply message, rather than propagating the exception back to the consumer endpoint. Suppressing exception rethrow To prevent the current exception from being rethrown and propagated back to the consumer endpoint, you can set the handled() option to true in the Java DSL, as follows: In the Java DSL, the argument to the handled() option can be of boolean type, of Predicate type, or of Expression type (where any non-boolean expression is interpreted as true , if it evaluates to a non-null value). The same route can be configured to suppress the rethrown exception in the XML DSL, using the handled element, as follows: Continuing processing To continue processing the current message from the point in the route where the exception was originally thrown, you can set the continued option to true in the Java DSL, as follows: In the Java DSL, the argument to the continued() option can be of boolean type, of Predicate type, or of Expression type (where any non-boolean expression is interpreted as true , if it evaluates to a non-null value). The same route can be configured in the XML DSL, using the continued element, as follows: Sending a response When the consumer endpoint that starts a route expects a reply, you might prefer to construct a custom fault reply message, instead of simply letting the thrown exception propagate back to the consumer. There are two essential steps you need to follow in this case: suppress the rethrown exception using the handled option; and populate the exchange's Out message slot with a custom fault message. For example, the following Java DSL fragment shows how to send a reply message containing the text string, Sorry , whenever the MyFunctionalException exception occurs: If you are sending a fault response to the client, you will often want to incorporate the text of the exception message in the response. You can access the text of the current exception message using the exceptionMessage() builder method. For example, you can send a reply containing just the text of the exception message whenever the MyFunctionalException exception occurs, as follows: The exception message text is also accessible from the Simple language, through the exception.message variable. For example, you could embed the current exception text in a reply message, as follows: The preceding onException clause can be expressed in XML DSL as follows: Exception thrown while handling an exception An exception that gets thrown while handling an existing exception (in other words, one that gets thrown in the middle of processing an onException clause) is handled in a special way. Such an exception is handled by the special fallback exception handler, which handles the exception as follows: All existing exception handlers are ignored and processing fails immediately. The new exception is logged. The new exception is set on the exchange object. The simple strategy avoids complex failure scenarios which could otherwise end up with an onException clause getting locked into an infinite loop. 
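A minimal sketch that combines the redelivery, handled, and reply-building options described in the preceding sections; MyFunctionalException stands for your own exception class, and the delay and reply text are assumptions.

```java
// Inside RouteBuilder.configure():
onException(MyFunctionalException.class)
    .maximumRedeliveries(6)
    .redeliveryDelay(1000)    // wait one second between redelivery attempts
    .handled(true)            // suppress the rethrow back to the consumer endpoint
    // Build a reply that embeds the exception text, instead of propagating the exception.
    .transform().simple("Sorry, the request failed: ${exception.message}");
```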
Scopes The onException clauses can take effect in either of the following scopes: RouteBuilder scope - onException clauses defined as standalone statements inside a RouteBuilder.configure() method affect all of the routes defined in that RouteBuilder instance. On the other hand, these onException clauses have no effect whatsoever on routes defined inside any other RouteBuilder instance. The onException clauses must appear before the route definitions. All of the examples up to this point are defined using the RouteBuilder scope. Route scope - onException clauses can also be embedded directly within a route. These onException clauses affect only the route in which they are defined. Route scope You can embed an onException clause anywhere inside a route definition, but you must terminate the embedded onException clause using the end() DSL command. For example, you can define an embedded onException clause in the Java DSL, as follows: You can define an embedded onException clause in the XML DSL, as follows: 2.3.2. Error Handler Overview The errorHandler() clause provides similar features to the onException clause, except that this mechanism is not able to discriminate between different exception types. The errorHandler() clause is the original exception handling mechanism provided by Apache Camel and was available before the onException clause was implemented. Java DSL example The errorHandler() clause is defined in a RouteBuilder class and applies to all of the routes in that RouteBuilder class. It is triggered whenever an exception of any kind occurs in one of the applicable routes. For example, to define an error handler that routes all failed exchanges to the ActiveMQ deadLetter queue, you can define a RouteBuilder as follows: Redirection to the dead letter channel will not occur, however, until all attempts at redelivery have been exhausted. XML DSL example In the XML DSL, you define an error handler within a camelContext scope using the errorHandler element. For example, to define an error handler that routes all failed exchanges to the ActiveMQ deadLetter queue, you can define an errorHandler element as follows: Types of error handler Table 2.1, "Error Handler Types" provides an overview of the different types of error handler you can define. Table 2.1. Error Handler Types Java DSL Builder XML DSL Type Attribute Description defaultErrorHandler() DefaultErrorHandler Propagates exceptions back to the caller and supports the redelivery policy, but it does not support a dead letter queue. deadLetterChannel() DeadLetterChannel Supports the same features as the default error handler and, in addition, supports a dead letter queue. loggingErrorHandler() LoggingErrorHandler Logs the exception text whenever an exception occurs. noErrorHandler() NoErrorHandler Dummy handler implementation that can be used to disable the error handler. TransactionErrorHandler An error handler for transacted routes. A default transaction error handler instance is automatically used for a route that is marked as transacted.
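A minimal RouteBuilder sketch of the deadLetterChannel() error handler described above; the queue URI, the route, and the redelivery settings are assumptions.

```java
import org.apache.camel.builder.RouteBuilder;

public class MyErrorHandlingRoutes extends RouteBuilder {
    @Override
    public void configure() {
        // Applies to every route defined in this RouteBuilder instance.
        errorHandler(deadLetterChannel("activemq:deadLetter")
            .maximumRedeliveries(3)
            .redeliveryDelay(2000));

        from("seda:inputA").to("seda:outputA");
    }
}
```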
2.3.3. doTry, doCatch, and doFinally Overview To handle exceptions within a route, you can use a combination of the doTry , doCatch , and doFinally clauses, which handle exceptions in a similar way to Java's try , catch , and finally blocks. Similarities between doCatch and Java catch In general, the doCatch() clause in a route definition behaves in an analogous way to the catch() statement in Java code. In particular, the following features are supported by the doCatch() clause: Multiple doCatch clauses - you can have multiple doCatch clauses within a single doTry block. The doCatch clauses are tested in the order they appear, just like Java catch() statements. Apache Camel executes the first doCatch clause that matches the thrown exception. Note This algorithm is different from the exception matching algorithm used by the onException clause - see Section 2.3.1, "onException Clause" for details. Rethrowing exceptions - you can rethrow the current exception from within a doCatch clause using constructs (see the section called "Rethrowing exceptions in doCatch" ). Special features of doCatch There are some special features of the doCatch() clause, however, that have no analogue in the Java catch() statement. The following feature is specific to doCatch() : Conditional catching - you can catch an exception conditionally, by appending an onWhen sub-clause to the doCatch clause (see the section called "Conditional exception catching using onWhen" ). Example The following example shows how to write a doTry block in the Java DSL, where the doCatch() clause will be executed, if either the IOException exception or the IllegalStateException exception is raised, and the doFinally() clause is always executed, irrespective of whether an exception is raised or not. Or equivalently, in Spring XML: Rethrowing exceptions in doCatch It is possible to rethrow an exception in a doCatch() clause, using constructs, as follows: Note You can also re-throw an exception using a processor instead of handled(false) (which is deprecated in a doTry/doCatch clause): In the preceding example, if the IOException is caught by doCatch() , the current exchange is sent to the mock:io endpoint, and then the IOException is rethrown. This gives the consumer endpoint at the start of the route (in the from() command) an opportunity to handle the exception as well. The following example shows how to define the same route in Spring XML: Conditional exception catching using onWhen A special feature of the Apache Camel doCatch() clause is that you can conditionalize the catching of exceptions based on an expression that is evaluated at run time. In other words, if you catch an exception using a clause of the form, doCatch( ExceptionList ).onWhen( Expression ) , an exception will only be caught, if the predicate expression, Expression , evaluates to true at run time. For example, the following doTry block will catch the exceptions, IOException and IllegalStateException , only if the exception message contains the word, Severe : Or equivalently, in Spring XML: Nested Conditions in doTry There are various options available to add Camel exception handling to a Java DSL route. doTry() creates a try or catch block for handling exceptions and is useful for route specific error handling. If you want to catch the exception inside of ChoiceDefinition , you can use the following doTry blocks: 2.3.4. Propagating SOAP Exceptions Overview The Camel CXF component provides an integration with Apache CXF, enabling you to send and receive SOAP messages from Apache Camel endpoints. You can easily define Apache Camel endpoints in XML, which can then be referenced in a route using the endpoint's bean ID. For more details, see CXF in the Apache Camel Component Reference Guide .
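As an illustration of the doTry , doCatch , onWhen , and doFinally clauses from the preceding section, the following is a minimal sketch; the endpoint URIs and the test for the word Severe are assumptions based on the scenario described there.

```java
// Inside RouteBuilder.configure():
from("direct:start")
    .doTry()
        .to("direct:mightThrow")     // a step that may throw IOException or IllegalStateException
    .doCatch(java.io.IOException.class, IllegalStateException.class)
        .onWhen(exceptionMessage().contains("Severe"))  // catch only "Severe" failures
        .to("mock:error")
    .doFinally()
        .to("mock:finally")          // always executed
    .end();
```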
How to propagate stack trace information It is possible to configure a CXF endpoint so that, when a Java exception is thrown on the server side, the stack trace for the exception is marshalled into a fault message and returned to the client. To enable this feature, set the dataFormat to PAYLOAD and set the faultStackTraceEnabled property to true in the cxfEndpoint element, as follows: For security reasons, the stack trace does not include the causing exception (that is, the part of a stack trace that follows Caused by ). If you want to include the causing exception in the stack trace, set the exceptionMessageCauseEnabled property to true in the cxfEndpoint element, as follows: Warning You should only enable the exceptionMessageCauseEnabled flag for testing and diagnostic purposes. It is normal practice for servers to conceal the original cause of an exception to make it harder for hostile users to probe the server. 2.4. Bean Integration Overview Bean integration provides a general purpose mechanism for processing messages using arbitrary Java objects. By inserting a bean reference into a route, you can call an arbitrary method on a Java object, which can then access and modify the incoming exchange. The mechanism that maps an exchange's contents to the parameters and return values of a bean method is known as parameter binding . Parameter binding can use any combination of the following approaches in order to initialize a method's parameters: Conventional method signatures - If the method signature conforms to certain conventions, the parameter binding can use Java reflection to determine what parameters to pass. Annotations and dependency injection - For a more flexible binding mechanism, employ Java annotations to specify what to inject into the method's arguments. This dependency injection mechanism relies on Spring 2.5 component scanning. Normally, if you are deploying your Apache Camel application into a Spring container, the dependency injection mechanism will work automatically. Explicitly specified parameters - You can specify parameters explicitly (either as constants or using the Simple language), at the point where the bean is invoked. Bean registry Beans are made accessible through a bean registry , which is a service that enables you to look up beans using either the class name or the bean ID as a key. The way that you create an entry in the bean registry depends on the underlying framework - for example, plain Java, Spring, Guice, or Blueprint. Registry entries are usually created implicitly (for example, when you instantiate a Spring bean in a Spring XML file). Registry plug-in strategy Apache Camel implements a plug-in strategy for the bean registry, defining an integration layer for accessing beans which makes the underlying registry implementation transparent. Hence, it is possible to integrate Apache Camel applications with a variety of different bean registries, as shown in Table 2.2, "Registry Plug-Ins" . Table 2.2. Registry Plug-Ins Registry Implementation Camel Component with Registry Plug-In Spring bean registry camel-spring Guice bean registry camel-guice Blueprint bean registry camel-blueprint OSGi service registry deployed in OSGi container JNDI registry Normally, you do not have to worry about configuring bean registries, because the relevant bean registry is automatically installed for you.
For example, if you are using the Spring framework to define your routes, the Spring ApplicationContextRegistry plug-in is automatically installed in the current CamelContext instance. Deployment in an OSGi container is a special case. When an Apache Camel route is deployed into the OSGi container, the CamelContext automatically sets up a registry chain for resolving bean instances: the registry chain consists of the OSGi registry, followed by the Blueprint (or Spring) registry. Accessing a bean created in Java To process exchange objects using a Java bean (which is a plain old Java object or POJO), use the bean() processor, which binds the inbound exchange to a method on the Java object. For example, to process inbound exchanges using the class, MyBeanProcessor , define a route like the following: Where the bean() processor creates an instance of MyBeanProcessor type and invokes the processBody() method to process inbound exchanges. This approach is adequate if you only want to access the MyBeanProcessor instance from a single route. However, if you want to access the same MyBeanProcessor instance from multiple routes, use the variant of bean() that takes the Object type as its first argument. For example: Accessing overloaded bean methods If a bean defines overloaded methods, you can choose which of the overloaded methods to invoke by specifying the method name along with its parameter types. For example, if the MyBeanProcessor class has two overloaded methods, processBody(String) and processBody(String,String) , you can invoke the latter overloaded method as follows: Alternatively, if you want to identify a method by the number of parameters it takes, rather than specifying the type of each parameter explicitly, you can use the wildcard character, * . For example, to invoke a method named processBody that takes two parameters, irrespective of the exact type of the parameters, invoke the bean() processor as follows: When specifying the method, you can use either a simple unqualified type name, for example processBody(Exchange) , or a fully qualified type name, for example processBody(org.apache.camel.Exchange) . Note In the current implementation, the specified type name must be an exact match of the parameter type. Type inheritance is not taken into account. Specify parameters explicitly You can specify parameter values explicitly, when you call the bean method. The following simple type values can be passed: Boolean: true or false . Numeric: 123 , 7 , and so on. String: 'In single quotes' or "In double quotes" . Null object: null . The following example shows how you can mix explicit parameter values with type specifiers in the same method invocation: In the preceding example, the value of the first parameter would presumably be determined by a parameter binding annotation (see the section called "Basic annotations" ). In addition to the simple type values, you can also specify parameter values using the Simple language ( Chapter 30, The Simple Language ). This means that the full power of the Simple language is available when specifying parameter values. For example, to pass the message body and the value of the title header to a bean method: You can also pass the entire header hash map as a parameter. For example, in the following example, the second method parameter must be declared to be of type java.util.Map : Note From the Apache Camel 2.19 release, returning null from a bean method call now always ensures the message body has been set as a null value.
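A minimal sketch of the bean() invocation styles described above, together with a bean class that follows the signature conventions covered in the next section; the endpoint URIs, header name, and method bodies are assumptions.

```java
// A POJO with overloaded methods, following the String-in/String-out convention.
public class MyBeanProcessor {
    public String processBody(String body) {
        return "Processed: " + body;
    }
    public String processBody(String body, String title) {
        return title + ": " + body;
    }
}

// Inside RouteBuilder.configure():
MyBeanProcessor myBean = new MyBeanProcessor();

from("direct:simple")
    .bean(MyBeanProcessor.class, "processBody");            // Camel creates the instance

from("direct:shared")
    .bean(myBean, "processBody(String,String)");             // select an overloaded method

from("direct:explicit")
    .bean(myBean, "processBody(${body}, ${header.title})");  // explicit Simple-language parameters
```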
Basic method signatures To bind exchanges to a bean method, you can define a method signature that conforms to certain conventions. In particular, there are two basic conventions for method signatures: Method signature for processing message bodies Method signature for processing exchanges Method signature for processing message bodies If you want to implement a bean method that accesses or modifies the incoming message body, you must define a method signature that takes a single String argument and returns a String value. For example: Method signature for processing exchanges For greater flexibility, you can implement a bean method that accesses the incoming exchange. This enables you to access or modify all headers, bodies, and exchange properties. For processing exchanges, the method signature takes a single org.apache.camel.Exchange parameter and returns void . For example: Accessing a Spring bean from Spring XML Instead of creating a bean instance in Java, you can create an instance using Spring XML. In fact, this is the only feasible approach if you are defining your routes in XML. To define a bean in XML, use the standard Spring bean element. The following example shows how to create an instance of MyBeanProcessor : It is also possible to pass data to the bean's constructor arguments using Spring syntax. For full details of how to use the Spring bean element, see The IoC Container from the Spring reference guide. When you create an object instance using the bean element, you can reference it later using the bean's ID (the value of the bean element's id attribute). For example, given the bean element with ID equal to myBeanId , you can reference the bean in a Java DSL route using the beanRef() processor, as follows: Where the beanRef() processor invokes the MyBeanProcessor.processBody() method on the specified bean instance. You can also invoke the bean from within a Spring XML route, using the Camel schema's bean element. For example: For a slight efficiency gain, you can set the cache option to true , which avoids looking up the registry every time a bean is used. For example, to enable caching, you can set the cache attribute on the bean element as follows: Accessing a Spring bean from Java When you create an object instance using the Spring bean element, you can reference it from Java using the bean's ID (the value of the bean element's id attribute). For example, given the bean element with ID equal to myBeanId , you can reference the bean in a Java DSL route using the beanRef() processor, as follows: Alternatively, you can reference the Spring bean by injection, using the @BeanInject annotation as follows: If you omit the bean ID from the @BeanInject annotation, Camel looks up the registry by type, but this only works if there is just a single bean of the given type. For example, to look up and inject the bean of com.acme.MyBeanProcessor type: Bean shutdown order in Spring XML For the beans used by a Camel context, the correct shutdown order is usually: Shut down the camelContext instance, followed by; Shut down the used beans. If this shutdown order is reversed, then it could happen that the Camel context tries to access a bean that is already destroyed (either leading directly to an error; or the Camel context tries to create the missing bean while it is being destroyed, which also causes an error). The default shutdown order in Spring XML depends on the order in which the beans and the camelContext appear in the Spring XML file. 
In order to avoid random errors due to incorrect shutdown order, therefore, the camelContext is configured to shut down before any of the other beans in the Spring XML file. This is the default behaviour since Apache Camel 2.13.0. If you need to change this behaviour (so that the Camel context is not forced to shut down before the other beans), you can set the shutdownEager attribute on the camelContext element to false . In this case, you could potentially exercise more fine-grained control over shutdown order using the Spring depends-on attribute. Parameter binding annotations The basic parameter bindings described in the section called "Basic method signatures" might not always be convenient to use. For example, if you have a legacy Java class that performs some data manipulation, you might want to extract data from an inbound exchange and map it to the arguments of an existing method signature. For this kind of parameter binding, Apache Camel provides the following kinds of Java annotation: Basic annotations Language annotations Inherited annotations Basic annotations Table 2.3, "Basic Bean Annotations" shows the annotations from the org.apache.camel Java package that you can use to inject message data into the arguments of a bean method. Table 2.3. Basic Bean Annotations Annotation Meaning Parameter? @Attachments Binds to a list of attachments. @Body Binds to an inbound message body. @Header Binds to an inbound message header. String name of the header. @Headers Binds to a java.util.Map of the inbound message headers. @OutHeaders Binds to a java.util.Map of the outbound message headers. @Property Binds to a named exchange property. String name of the property. @Properties Binds to a java.util.Map of the exchange properties. For example, the following class shows you how to use basic annotations to inject message data into the processExchange() method arguments. Notice how you are able to mix the annotations with the default conventions. As well as injecting the annotated arguments, the parameter binding also automatically injects the exchange object into the org.apache.camel.Exchange argument. Expression language annotations The expression language annotations provide a powerful mechanism for injecting message data into a bean method's arguments. Using these annotations, you can invoke an arbitrary script, written in the scripting language of your choice, to extract data from an inbound exchange and inject the data into a method argument. Table 2.4, "Expression Language Annotations" shows the annotations from the org.apache.camel.language package (and sub-packages, for the non-core annotations) that you can use to inject message data into the arguments of a bean method. Table 2.4. Expression Language Annotations Annotation Description @Bean Injects a Bean expression. @Constant Injects a Constant expression @EL Injects an EL expression. @Groovy Injects a Groovy expression. @Header Injects a Header expression. @JavaScript Injects a JavaScript expression. @OGNL Injects an OGNL expression. @PHP Injects a PHP expression. @Python Injects a Python expression. @Ruby Injects a Ruby expression. @Simple Injects a Simple expression. @XPath Injects an XPath expression. @XQuery Injects an XQuery expression. For example, the following class shows you how to use the @XPath annotation to extract a username and a password from the body of an incoming message in XML format: The @Bean annotation is a special case, because it enables you to inject the result of invoking a registered bean. 
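Before turning to the @Bean special case, here is a minimal sketch of a bean class that mixes the basic annotations and the @XPath expression language annotation described above; the class name, header name, and XPath expressions are assumptions, and the annotation package may differ between Camel versions.

```java
import java.util.Map;

import org.apache.camel.Body;
import org.apache.camel.Exchange;
import org.apache.camel.Header;
import org.apache.camel.Headers;
import org.apache.camel.language.XPath;   // core language annotation (package may vary by version)

public class MyAnnotatedBean {

    // Basic annotations mixed with the plain Exchange convention.
    public void processExchange(@Header("user") String user,
                                @Body String body,
                                @Headers Map<String, Object> headers,
                                Exchange exchange) {
        exchange.getIn().setBody(user + ": " + body);
    }

    // Expression language annotation: extract credentials from an XML body.
    public void checkCredentials(@XPath("/credentials/username/text()") String username,
                                 @XPath("/credentials/password/text()") String password) {
        // ... validate the credentials here
    }
}
```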
For example, to inject a correlation ID into a method argument, you can use the @Bean annotation to invoke an ID generator class, as follows: Where the string, myCorrIdGenerator , is the bean ID of the ID generator instance. The ID generator class can be instantiated using the spring bean element, as follows: Where the MyIdGenerator class could be defined as follows: Notice that you can also use annotations in the referenced bean class, MyIdGenerator . The only restriction on the generate() method signature is that it must return the correct type to inject into the argument annotated by @Bean . Because the @Bean annotation does not let you specify a method name, the injection mechanism simply invokes the first method in the referenced bean that has the matching return type. Note Some of the language annotations are available in the core component ( @Bean , @Constant , @Simple , and @XPath ). For non-core components, however, you will have to make sure that you load the relevant component. For example, to use the OGNL script, you must load the camel-ognl component. Inherited annotations Parameter binding annotations can be inherited from an interface or from a superclass. For example, if you define a Java interface with a Header annotation and a Body annotation, as follows: The overloaded methods defined in the implementation class, MyBeanProcessor , now inherit the annotations defined in the base interface, as follows: Interface implementations The class that implements a Java interface is often protected , private or in package-only scope. If you try to invoke a method on an implementation class that is restricted in this way, the bean binding falls back to invoking the corresponding interface method, which is publicly accessible. For example, consider the following public BeanIntf interface: Where the BeanIntf interface is implemented by the following protected BeanIntfImpl class: The following bean invocation would fall back to invoking the public BeanIntf.processBodyAndHeader method: Invoking static methods Bean integration has the capability to invoke static methods without creating an instance of the associated class. For example, consider the following Java class that defines the static method, changeSomething() : You can use bean integration to invoke the static changeSomething method, as follows: Note that, although this syntax looks identical to the invocation of an ordinary function, bean integration exploits Java reflection to identify the method as static and proceeds to invoke the method without instantiating MyStaticClass . Invoking an OSGi service In the special case where a route is deployed into a Red Hat Fuse container, it is possible to invoke an OSGi service directly using bean integration. For example, assuming that one of the bundles in the OSGi container has exported the service, org.fusesource.example.HelloWorldOsgiService , you could invoke the sayHello method using the following bean integration code: You could also invoke the OSGi service from within a Spring or blueprint XML file, using the bean component, as follows: The way this works is that Apache Camel sets up a chain of registries when it is deployed in the OSGi container. First of all, it looks up the specified class name in the OSGi service registry; if this lookup fails, it then falls back to the local Spring DM or blueprint registry. 2.5. 
Creating Exchange Instances Overview When processing messages with Java code (for example, in a bean class or in a processor class), it is often necessary to create a fresh exchange instance. If you need to create an Exchange object, the easiest approach is to invoke the methods of the ExchangeBuilder class, as described here. ExchangeBuilder class The fully qualified name of the ExchangeBuilder class is as follows: The ExchangeBuilder exposes the static method, anExchange , which you can use to start building an exchange object. Example For example, the following code creates a new exchange object containing the message body string, Hello World! , and with headers containing username and password credentials: ExchangeBuilder methods The ExchangeBuilder class supports the following methods: ExchangeBuilder anExchange(CamelContext context) (static method) Initiate building an exchange object. Exchange build() Build the exchange. ExchangeBuilder withBody(Object body) Set the message body on the exchange (that is, sets the exchange's In message body). ExchangeBuilder withHeader(String key, Object value) Set a header on the exchange (that is, sets a header on the exchange's In message). ExchangeBuilder withPattern(ExchangePattern pattern) Sets the exchange pattern on the exchange. ExchangeBuilder withProperty(String key, Object value) Sets a property on the exchange. 2.6. Transforming Message Content Abstract Apache Camel supports a variety of approaches to transforming message content. In addition to a simple native API for modifying message content, Apache Camel supports integration with several different third-party libraries and transformation standards. 2.6.1. Simple Message Transformations Overview The Java DSL has a built-in API that enables you to perform simple transformations on incoming and outgoing messages. For example, the rule shown in Example 2.1, "Simple Transformation of Incoming Messages" appends the text, World! , to the end of the incoming message body. Example 2.1. Simple Transformation of Incoming Messages Where the setBody() command replaces the content of the incoming message's body. API for simple transformations You can use the following API classes to perform simple transformations of the message content in a router rule: org.apache.camel.model.ProcessorDefinition org.apache.camel.builder.Builder org.apache.camel.builder.ValueBuilder ProcessorDefinition class The org.apache.camel.model.ProcessorDefinition class defines the DSL commands you can insert directly into a router rule - for example, the setBody() command in Example 2.1, "Simple Transformation of Incoming Messages" . Table 2.5, "Transformation Methods from the ProcessorDefinition Class" shows the ProcessorDefinition methods that are relevant to transforming message content: Table 2.5. Transformation Methods from the ProcessorDefinition Class Method Description Type convertBodyTo(Class type) Converts the IN message body to the specified type. Type removeFaultHeader(String name) Adds a processor which removes the header on the FAULT message. Type removeHeader(String name) Adds a processor which removes the header on the IN message. Type removeProperty(String name) Adds a processor which removes the exchange property. ExpressionClause<ProcessorDefinition<Type>> setBody() Adds a processor which sets the body on the IN message. Type setFaultBody(Expression expression) Adds a processor which sets the body on the FAULT message. 
Type setFaultHeader(String name, Expression expression) Adds a processor which sets the header on the FAULT message. ExpressionClause<ProcessorDefinition<Type>> setHeader(String name) Adds a processor which sets the header on the IN message. Type setHeader(String name, Expression expression) Adds a processor which sets the header on the IN message. ExpressionClause<ProcessorDefinition<Type>> setOutHeader(String name) Adds a processor which sets the header on the OUT message. Type setOutHeader(String name, Expression expression) Adds a processor which sets the header on the OUT message. ExpressionClause<ProcessorDefinition<Type>> setProperty(String name) Adds a processor which sets the exchange property. Type setProperty(String name, Expression expression) Adds a processor which sets the exchange property. ExpressionClause<ProcessorDefinition<Type>> transform() Adds a processor which sets the body on the OUT message. Type transform(Expression expression) Adds a processor which sets the body on the OUT message. Builder class The org.apache.camel.builder.Builder class provides access to message content in contexts where expressions or predicates are expected. In other words, Builder methods are typically invoked in the arguments of DSL commands - for example, the body() command in Example 2.1, "Simple Transformation of Incoming Messages" . Table 2.6, "Methods from the Builder Class" summarizes the static methods available in the Builder class. Table 2.6. Methods from the Builder Class Method Description static <E extends Exchange> ValueBuilder<E> body() Returns a predicate and value builder for the inbound body on an exchange. static <E extends Exchange,T> ValueBuilder<E> bodyAs(Class<T> type) Returns a predicate and value builder for the inbound message body as a specific type. static <E extends Exchange> ValueBuilder<E> constant(Object value) Returns a constant expression. static <E extends Exchange> ValueBuilder<E> faultBody() Returns a predicate and value builder for the fault body on an exchange. static <E extends Exchange,T> ValueBuilder<E> faultBodyAs(Class<T> type) Returns a predicate and value builder for the fault message body as a specific type. static <E extends Exchange> ValueBuilder<E> header(String name) Returns a predicate and value builder for headers on an exchange. static <E extends Exchange> ValueBuilder<E> outBody() Returns a predicate and value builder for the outbound body on an exchange. static <E extends Exchange> ValueBuilder<E> outBodyAs(Class<T> type) Returns a predicate and value builder for the outbound message body as a specific type. static ValueBuilder property(String name) Returns a predicate and value builder for properties on an exchange. static ValueBuilder regexReplaceAll(Expression content, String regex, Expression replacement) Returns an expression that replaces all occurrences of the regular expression with the given replacement. static ValueBuilder regexReplaceAll(Expression content, String regex, String replacement) Returns an expression that replaces all occurrences of the regular expression with the given replacement. static ValueBuilder sendTo(String uri) Returns an expression sending the exchange to the given endpoint uri. static <E extends Exchange> ValueBuilder<E> systemProperty(String name) Returns an expression for the given system property. static <E extends Exchange> ValueBuilder<E> systemProperty(String name, String defaultValue) Returns an expression for the given system property. 
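A minimal sketch that uses some of the ProcessorDefinition and Builder methods from the preceding tables in a route, together with the append() modifier described in the next table; the endpoint URIs are placeholders.

```java
// Inside RouteBuilder.configure():
from("direct:source")
    .setBody(body().append(" World!"))       // Builder.body() returns a ValueBuilder
    .setHeader("origin", constant("camel"))  // Builder.constant()
    .convertBodyTo(String.class)             // ProcessorDefinition.convertBodyTo()
    .to("mock:target");
```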
ValueBuilder class The org.apache.camel.builder.ValueBuilder class enables you to modify values returned by the Builder methods. In other words, the methods in ValueBuilder provide a simple way of modifying message content. Table 2.7, "Modifier Methods from the ValueBuilder Class" summarizes the methods available in the ValueBuilder class. That is, the table shows only the methods that are used to modify the value they are invoked on (for full details, see the API Reference documentation). Table 2.7. Modifier Methods from the ValueBuilder Class Method Description ValueBuilder<E> append(Object value) Appends the string evaluation of this expression with the given value. Predicate contains(Object value) Create a predicate that the left hand expression contains the value of the right hand expression. ValueBuilder<E> convertTo(Class type) Converts the current value to the given type using the registered type converters. ValueBuilder<E> convertToString() Converts the current value to a String using the registered type converters. Predicate endsWith(Object value) <T> T evaluate(Exchange exchange, Class<T> type) Predicate in(Object... values) Predicate in(Predicate... predicates) Predicate isEqualTo(Object value) Returns true, if the current value is equal to the given value argument. Predicate isGreaterThan(Object value) Returns true, if the current value is greater than the given value argument. Predicate isGreaterThanOrEqualTo(Object value) Returns true, if the current value is greater than or equal to the given value argument. Predicate isInstanceOf(Class type) Returns true, if the current value is an instance of the given type. Predicate isLessThan(Object value) Returns true, if the current value is less than the given value argument. Predicate isLessThanOrEqualTo(Object value) Returns true, if the current value is less than or equal to the given value argument. Predicate isNotEqualTo(Object value) Returns true, if the current value is not equal to the given value argument. Predicate isNotNull() Returns true, if the current value is not null . Predicate isNull() Returns true, if the current value is null . Predicate matches(Expression expression) Predicate not(Predicate predicate) Negates the predicate argument. ValueBuilder prepend(Object value) Prepends the string evaluation of this expression to the given value. Predicate regex(String regex) ValueBuilder<E> regexReplaceAll(String regex, Expression<E> replacement) Replaces all occurrences of the regular expression with the given replacement. ValueBuilder<E> regexReplaceAll(String regex, String replacement) Replaces all occurrences of the regular expression with the given replacement. ValueBuilder<E> regexTokenize(String regex) Tokenizes the string conversion of this expression using the given regular expression. ValueBuilder sort(Comparator comparator) Sorts the current value using the given comparator. Predicate startsWith(Object value) Returns true, if the current value matches the string value of the value argument. ValueBuilder<E> tokenize() Tokenizes the string conversion of this expression using the comma token separator. ValueBuilder<E> tokenize(String token) Tokenizes the string conversion of this expression using the given token separator. 2.6.2. Marshalling and Unmarshalling Java DSL commands You can convert between low-level and high-level message formats using the following commands: marshal() - Converts a high-level data format to a low-level data format.
unmarshal() - Converts a low-level data format to a high-level data format. Data formats Apache Camel supports marshalling and unmarshalling of the following data formats: Java serialization JAXB XMLBeans XStream Java serialization Enables you to convert a Java object to a blob of binary data. For this data format, unmarshalling converts a binary blob to a Java object, and marshalling converts a Java object to a binary blob. For example, to read a serialized Java object from an endpoint, SourceURL , and convert it to a Java object, you use a rule like the following: Or alternatively, in Spring XML: JAXB Provides a mapping between XML schema types and Java types (see https://jaxb.dev.java.net/ ). For JAXB, unmarshalling converts an XML data type to a Java object, and marshalling converts a Java object to an XML data type. Before you can use JAXB data formats, you must compile your XML schema using a JAXB compiler to generate the Java classes that represent the XML data types in the schema. This is called binding the schema. After the schema is bound, you define a rule to unmarshal XML data to a Java object, using code like the following: where GeneratedPackagename is the name of the Java package generated by the JAXB compiler, which contains the Java classes representing your XML schema. Or alternatively, in Spring XML: XMLBeans Provides an alternative mapping between XML schema types and Java types (see http://xmlbeans.apache.org/ ). For XMLBeans, unmarshalling converts an XML data type to a Java object and marshalling converts a Java object to an XML data type. For example, to unmarshal XML data to a Java object using XMLBeans, you use code like the following: Or alternatively, in Spring XML: XStream Provides another mapping between XML types and Java types (see http://www.xml.com/pub/a/2004/08/18/xstream.html ). XStream is a serialization library (like Java serialization), enabling you to convert any Java object to XML. For XStream, unmarshalling converts an XML data type to a Java object, and marshalling converts a Java object to an XML data type. Note The XStream data format is currently not supported in Spring XML. 2.6.3. Endpoint Bindings What is a binding? In Apache Camel, a binding is a way of wrapping an endpoint in a contract - for example, by applying a Data Format, a Content Enricher or a validation step. A condition or transformation is applied to the messages coming in, and a complementary condition or transformation is applied to the messages going out. DataFormatBinding The DataFormatBinding class is useful for the specific case where you want to define a binding that marshals and unmarshals a particular data format (see Section 2.6.2, "Marshalling and Unmarshalling" ). In this case, all that you need to do to create a binding is to create a DataFormatBinding instance, passing a reference to the relevant data format in the constructor. For example, the XML DSL snippet in Example 2.2, "JAXB Binding" shows a binding (with ID, jaxb ) that is capable of marshalling and unmarshalling the JAXB data format when it is associated with an Apache Camel endpoint: Example 2.2. JAXB Binding Associating a binding with an endpoint The following alternatives are available for associating a binding with an endpoint: Binding URI Component Binding URI To associate a binding with an endpoint, you can prefix the endpoint URI with binding: NameOfBinding , where NameOfBinding is the bean ID of the binding (for example, the ID of a binding bean created in Spring XML). 
For example, the following example shows how to associate ActiveMQ endpoints with the JAXB binding defined in Example 2.2, "JAXB Binding" . BindingComponent Instead of using a prefix to associate a binding with an endpoint, you can make the association implicit, so that the binding does not need to appear in the URI. For existing endpoints that do not have an implicit binding, the easiest way to achieve this is to wrap the endpoint using the BindingComponent class. For example, to associate the jaxb binding with activemq endpoints, you could define a new BindingComponent instance as follows: Where the (optional) second constructor argument to jaxbmq defines a URI prefix. You can now use the jaxbmq ID as the scheme for an endpoint URI. For example, you can define the following route using this binding component: The preceding route is equivalent to the following route, which uses the binding URI approach: Note For developers that implement a custom Apache Camel component, it is possible to achieve this by implementing an endpoint class that inherits from the org.apache.camel.spi.HasBinding interface. BindingComponent constructors The BindingComponent class supports the following constructors: public BindingComponent() No arguments form. Use property injection to configure the binding component instance. public BindingComponent(Binding binding) Associate this binding component with the specified Binding object, binding . public BindingComponent(Binding binding, String uriPrefix) Associate this binding component with the specified Binding object, binding , and URI prefix, uriPrefix . This is the most commonly used constructor. public BindingComponent(Binding binding, String uriPrefix, String uriPostfix) This constructor supports the additional URI post-fix, uriPostfix , argument, which is automatically appended to any URIs defined using this binding component. Implementing a custom binding In addition to the DataFormatBinding , which is used for marshalling and unmarshalling data formats, you can implement your own custom bindings. Define a custom binding as follows: Implement an org.apache.camel.Processor class to perform a transformation on messages incoming to a consumer endpoint (appearing in a from element). Implement a complementary org.apache.camel.Processor class to perform the reverse transformation on messages outgoing from a producer endpoint (appearing in a to element). Implement the org.apache.camel.spi.Binding interface, which acts as a factory for the processor instances. Binding interface Example 2.3, "The org.apache.camel.spi.Binding Interface" shows the definition of the org.apache.camel.spi.Binding interface, which you must implement to define a custom binding. Example 2.3. The org.apache.camel.spi.Binding Interface When to use bindings Bindings are useful when you need to apply the same kind of transformation to many different kinds of endpoint. 2.7. Property Placeholders Overview The property placeholders feature can be used to substitute strings into various contexts (such as endpoint URIs and attributes in XML DSL elements), where the placeholder settings are stored in Java properties files. This feature can be useful, if you want to share settings between different Apache Camel applications or if you want to centralize certain configuration settings. 
For example, the following route sends requests to a Web server, whose host and port are substituted by the placeholders, {{remote.host}} and {{remote.port}} : The placeholder values are defined in a Java properties file, as follows: Note Property Placeholders support an encoding option that enables you to read the .properties file, using a specific character set such as UTF-8. However, by default, it implements the ISO-8859-1 character set. Apache Camel property placeholders support the following: Specifying a default value together with the key to look up. There is no need to define the PropertiesComponent , if all of the placeholder keys have default values that are to be used. Using third-party functions to look up the property values, which enables you to implement your own logic. Note Three functions are provided out of the box to look up values from OS environment variables, JVM system properties, or the service name idiom. Property files Property settings are stored in one or more Java properties files and must conform to the standard Java properties file format. Each property setting appears on its own line, in the format Key = Value . Lines with # or ! as the first non-blank character are treated as comments. For example, a property file could have content as shown in Example 2.4, "Sample Property File" . Example 2.4. Sample Property File Resolving properties The properties component must be configured with the locations of one or more property files before you can start using it in route definitions. You must provide the property values using one of the following resolvers: classpath: PathName , PathName ,... (Default) Specifies locations on the classpath, where PathName is a file pathname delimited using forward slashes. file: PathName , PathName ,... Specifies locations on the file system, where PathName is a file pathname delimited using forward slashes. ref: BeanID Specifies the ID of a java.util.Properties object in the registry. blueprint: BeanID Specifies the ID of a cm:property-placeholder bean, which is used in the context of an OSGi blueprint file to access properties defined in the OSGi Configuration Admin service. For details, see the section called "Integration with OSGi blueprint property placeholders" . For example, to specify the com/fusesource/cheese.properties property file and the com/fusesource/bar.properties property file, both located on the classpath, you would use the following location string: Note You can omit the classpath: prefix in this example, because the classpath resolver is used by default. Specifying locations using system properties and environment variables You can embed Java system properties and O/S environment variables in a location PathName . Java system properties can be embedded in a location resolver using the syntax, ${ PropertyName } . For example, if the root directory of Red Hat Fuse is stored in the Java system property, karaf.home , you could embed that directory value in a file location, as follows: O/S environment variables can be embedded in a location resolver using the syntax, ${env: VarName } . For example, if the root directory of JBoss Fuse is stored in the environment variable, SMX_HOME , you could embed that directory value in a file location, as follows: Configuring the properties component Before you can start using property placeholders, you must configure the properties component, specifying the locations of one or more property files.
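A minimal Java sketch of this configuration step; the file locations are assumptions, and the addComponent() call referred to in the next paragraph takes this general form.

```java
import org.apache.camel.CamelContext;
import org.apache.camel.component.properties.PropertiesComponent;
import org.apache.camel.impl.DefaultCamelContext;

public class PropertiesSetup {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();

        PropertiesComponent pc = new PropertiesComponent();
        pc.setLocation("com/fusesource/cheese.properties,com/fusesource/bar.properties");

        // The component must be registered under the name "properties".
        context.addComponent("properties", pc);

        // ... add routes and start the context here
    }
}
```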
In the Java DSL, you can configure the properties component with the property file locations, as follows: As shown in the addComponent() call, the name of the properties component must be set to properties . In the XML DSL, you can configure the properties component using the dedicated propertyPlaceholder element, as follows: If you want the properties component to ignore any missing .properties files when it is being initialized, you can set the ignoreMissingLocation option to true (normally, a missing .properties file would result in an error being raised). Additionally, if you want the properties component to ignore any missing locations that are specified using Java system properties or O/S environment variables, you can set the ignoreMissingLocation option to true . Placeholder syntax After it is configured, the property component automatically substitutes placeholders (in the appropriate contexts). The syntax of a placeholder depends on the context, as follows: In endpoint URIs and in Spring XML files - the placeholder is specified as {{ Key }} . When setting XML DSL attributes - xs:string attributes are set using the following syntax: Other attribute types (for example, xs:int or xs:boolean ) must be set using the following syntax: Where prop is associated with the http://camel.apache.org/schema/placeholder namespace. When setting Java DSL EIP options - to set an option on an Enterprise Integration Pattern (EIP) command in the Java DSL, add a placeholder() clause like the following to the fluent DSL: In Simple language expressions - the placeholder is specified as ${properties: Key } . Substitution in endpoint URIs Wherever an endpoint URI string appears in a route, the first step in parsing the endpoint URI is to apply the property placeholder parser. The placeholder parser automatically substitutes any property names appearing between double braces, {{ Key }} . For example, given the property settings shown in Example 2.4, "Sample Property File" , you could define a route as follows: By default, the placeholder parser looks up the properties bean ID in the registry to find the properties component. If you prefer, you can explicitly specify the scheme in the endpoint URIs. For example, by prefixing properties: to each of the endpoint URIs, you can define the following equivalent route: When specifying the scheme explicitly, you also have the option of specifying options to the properties component. For example, to override the property file location, you could set the location option as follows: Substitution in Spring XML files You can also use property placeholders in the XML DSL, for setting various attributes of the DSL elements. In this context, the placeholder syntax also uses double braces, {{ Key }} . For example, you could define a jmxAgent element using property placeholders, as follows: Substitution of XML DSL attribute values You can use the regular placeholder syntax for specifying attribute values of xs:string type - for example, <jmxAgent registryPort="{{myjmx.port}}" ... > . But for attributes of any other type (for example, xs:int or xs:boolean ), you must use the special syntax, prop: AttributeName =" Key " .
For example, given that a property file defines the stop.flag property to have the value, true , you can use this property to set the stopOnException boolean attribute, as follows: Important The prop prefix must be explicitly assigned to the http://camel.apache.org/schema/placeholder namespace in your Spring file, as shown in the beans element of the preceding example. Substitution of Java DSL EIP options When invoking an EIP command in the Java DSL, you can set any EIP option using the value of a property placeholder, by adding a sub-clause of the form, placeholder(" OptionName ", " Key ") . For example, given that a property file defines the stop.flag property to have the value, true , you can use this property to set the stopOnException option of the multicast EIP, as follows: Substitution in Simple language expressions You can also substitute property placeholders in Simple language expressions, but in this case the syntax of the placeholder is ${properties: Key } . For example, you can substitute the cheese.quote placeholder inside a Simple expression, as follows: You can specify a default value for the property, using the syntax, ${properties: Key : DefaultVal } . For example: It is also possible to override the location of the property file using the syntax, ${properties-location: Location : Key } . For example, to substitute the bar.quote placeholder using the settings from the com/mycompany/bar.properties property file, you can define a Simple expression as follows: Using Property Placeholders in the XML DSL In older releases, only attributes of xs:string type supported placeholders in the XML DSL. For example, the timeout attribute would be of xs:int type, so you could not set a placeholder key as its value. From Apache Camel 2.7 onwards, this is possible by using a special placeholder namespace. The following example illustrates the prop prefix for the namespace. It enables you to use the prop prefix in the attributes in the XML DSLs. Note In the Multicast, set the option stopOnException as the value of the placeholder with the key stop . Also, define the value of the stop key in the properties file. Integration with OSGi blueprint property placeholders If you deploy your route into the Red Hat Fuse OSGi container, you can integrate the Apache Camel property placeholder mechanism with JBoss Fuse's blueprint property placeholder mechanism (in fact, the integration is enabled by default). There are two basic approaches to setting up the integration, as follows: Implicit blueprint integration Explicit blueprint integration Implicit blueprint integration If you define a camelContext element inside an OSGi blueprint file, the Apache Camel property placeholder mechanism automatically integrates with the blueprint property placeholder mechanism. That is, placeholders obeying the Apache Camel syntax (for example, {{cool.end}} ) that appear within the scope of camelContext are implicitly resolved by looking up the blueprint property placeholder mechanism. For example, consider the following route defined in an OSGi blueprint file, where the last endpoint in the route is defined by the property placeholder, {{result}} : The blueprint property placeholder mechanism is initialized by creating a cm:property-placeholder bean. In the preceding example, the cm:property-placeholder bean is associated with the camel.blueprint persistent ID, where a persistent ID is the standard way of referencing a group of related properties from the OSGi Configuration Admin service.
In other words, the cm:property-placeholder bean provides access to all of the properties defined under the camel.blueprint persistent ID. It is also possible to specify default values for some of the properties (using the nested cm:property elements). In the context of blueprint, the Apache Camel placeholder mechanism searches for an instance of cm:property-placeholder in the bean registry. If it finds such an instance, it automatically integrates the Apache Camel placeholder mechanism, so that placeholders like, {{result}} , are resolved by looking up the key in the blueprint property placeholder mechanism (in this example, through the myblueprint.placeholder bean). Note The default blueprint placeholder syntax (accessing the blueprint properties directly) is USD{ Key } . Hence, outside the scope of a camelContext element, the placeholder syntax you must use is USD{ Key } . Whereas, inside the scope of a camelContext element, the placeholder syntax you must use is {{ Key }} . Explicit blueprint integration If you want to have more control over where the Apache Camel property placeholder mechanism finds its properties, you can define a propertyPlaceholder element and specify the resolver locations explicitly. For example, consider the following blueprint configuration, which differs from the example in that it creates an explicit propertyPlaceholder instance: In the preceding example, the propertyPlaceholder element specifies explicitly which cm:property-placeholder bean to use by setting the location to blueprint:myblueprint.placeholder . That is, the blueprint: resolver explicitly references the ID, myblueprint.placeholder , of the cm:property-placeholder bean. This style of configuration is useful, if there is more than one cm:property-placeholder bean defined in the blueprint file and you need to specify which one to use. It also makes it possible to source properties from multiple locations, by specifying a comma-separated list of locations. For example, if you wanted to look up properties both from the cm:property-placeholder bean and from the properties file, myproperties.properties , on the classpath, you could define the propertyPlaceholder element as follows: Integration with Spring property placeholders If you define your Apache Camel application using XML DSL in a Spring XML file, you can integrate the Apache Camel property placeholder mechanism with Spring property placeholder mechanism by declaring a Spring bean of type, org.apache.camel.spring.spi.BridgePropertyPlaceholderConfigurer . Define a BridgePropertyPlaceholderConfigurer , which replaces both Apache Camel's propertyPlaceholder element and Spring's ctx:property-placeholder element in the Spring XML file. You can then refer to the configured properties using either the Spring USD{ PropName } syntax or the Apache Camel {{ PropName }} syntax. For example, defining a bridge property placeholder that reads its property settings from the cheese.properties file: Note Alternatively, you can set the location attribute of the BridgePropertyPlaceholderConfigurer to point at a Spring properties file. The Spring properties file syntax is fully supported. 2.8. Threading Model Java thread pool API The Apache Camel threading model is based on the powerful Java concurrency API, Package java.util.concurrent , that first became available in Sun's JDK 1.5. The key interface in this API is the ExecutorService interface, which represents a thread pool. 
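For example, the JDK concurrency API can create a fixed-size thread pool directly, independently of Camel:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// A fixed-size pool of 10 threads created with the plain JDK API
ExecutorService pool = Executors.newFixedThreadPool(10);
pool.execute(new Runnable() {
    public void run() {
        System.out.println("task executed by " + Thread.currentThread().getName());
    }
});
pool.shutdown();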
Using the concurrency API, you can create many different kinds of thread pool, covering a wide range of scenarios. Apache Camel thread pool API The Apache Camel thread pool API builds on the Java concurrency API by providing a central factory (of org.apache.camel.spi.ExecutorServiceManager type) for all of the thread pools in your Apache Camel application. Centralising the creation of thread pools in this way provides several advantages, including: Simplified creation of thread pools, using utility classes. Integrating thread pools with graceful shutdown. Threads automatically given informative names, which is beneficial for logging and management. Component threading model Some Apache Camel components - such as SEDA, JMS, and Jetty - are inherently multi-threaded. These components have all been implemented using the Apache Camel threading model and thread pool API. If you are planning to implement your own Apache Camel component, it is recommended that you integrate your threading code with the Apache Camel threading model. For example, if your component needs a thread pool, it is recommended that you create it using the CamelContext's ExecutorServiceManager object. Processor threading model Some of the standard processors in Apache Camel create their own thread pool by default. These threading-aware processors are also integrated with the Apache Camel threading model and they provide various options that enable you to customize the thread pools that they use. Table 2.8, "Processor Threading Options" shows the various options for controlling and setting thread pools on the threading-aware processors built-in to Apache Camel. Table 2.8. Processor Threading Options Processor Java DSL XML DSL aggregate multicast recipientList split threads wireTap threads DSL options The threads processor is a general-purpose DSL command, which you can use to introduce a thread pool into a route. It supports the following options to customize the thread pool: poolSize() Minimum number of threads in the pool (and initial pool size). maxPoolSize() Maximum number of threads in the pool. keepAliveTime() If any threads are idle for longer than this period of time (specified in seconds), they are terminated. timeUnit() Time unit for keep alive, specified using the java.util.concurrent.TimeUnit type. maxQueueSize() Maximum number of pending tasks that this thread pool can store in its incoming task queue. rejectedPolicy() Specifies what course of action to take, if the incoming task queue is full. See Table 2.10, "Thread Pool Builder Options" Note The preceding thread pool options are not compatible with the executorServiceRef option (for example, you cannot use these options to override the settings in the thread pool referenced by an executorServiceRef option). Apache Camel validates the DSL to enforce this. Creating a default thread pool To create a default thread pool for one of the threading-aware processors, enable the parallelProcessing option, using the parallelProcessing() sub-clause, in the Java DSL, or the parallelProcessing attribute, in the XML DSL. For example, in the Java DSL, you can invoke the multicast processor with a default thread pool (where the thread pool is used to process the multicast destinations concurrently) as follows: You can define the same route in XML DSL as follows Default thread pool profile settings The default thread pools are automatically created by a thread factory that takes its settings from the default thread pool profile . 
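As an illustration of the default thread pool case described above, a Java DSL multicast with parallelProcessing() might look like the following sketch (the endpoint URIs are hypothetical):

from("direct:start")
    .multicast().parallelProcessing()
        .to("mock:first")
        .to("mock:second")
        .to("mock:third")
    .end()
    .to("mock:result");

The pool backing this multicast is created from the default thread pool profile, whose settings are described next.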
The default thread pool profile has the settings shown in Table 2.9, "Default Thread Pool Profile Settings" (assuming that these settings have not been modified by the application code). Table 2.9. Default Thread Pool Profile Settings Thread Option Default Value maxQueueSize 1000 poolSize 10 maxPoolSize 20 keepAliveTime 60 (seconds) rejectedPolicy CallerRuns Changing the default thread pool profile It is possible to change the default thread pool profile settings, so that all subsequent default thread pools will be created with the custom settings. You can change the profile either in Java or in Spring XML. For example, in the Java DSL, you can customize the poolSize option and the maxQueueSize option in the default thread pool profile, as follows: In the XML DSL, you can customize the default thread pool profile, as follows: Note that it is essential to set the defaultProfile attribute to true in the preceding XML DSL example, otherwise the thread pool profile would be treated like a custom thread pool profile (see the section called "Creating a custom thread pool profile" ), instead of replacing the default thread pool profile. Customizing a processor's thread pool It is also possible to specify the thread pool for a threading-aware processor more directly, using either the executorService or executorServiceRef options (where these options are used instead of the parallelProcessing option). There are two approaches you can use to customize a processor's thread pool, as follows: Specify a custom thread pool - explicitly create an ExecutorService (thread pool) instance and pass it to the executorService option. Specify a custom thread pool profile - create and register a custom thread pool factory. When you reference this factory using the executorServiceRef option, the processor automatically uses the factory to create a custom thread pool instance. When you pass a bean ID to the executorServiceRef option, the threading-aware processor first tries to find a custom thread pool with that ID in the registry. If no thread pool is registered with that ID, the processor then attempts to look up a custom thread pool profile in the registry and uses the custom thread pool profile to instantiate a custom thread pool. Creating a custom thread pool A custom thread pool can be any thread pool of java.util.concurrent.ExecutorService type. The following approaches to creating a thread pool instance are recommended in Apache Camel: Use the org.apache.camel.builder.ThreadPoolBuilder utility to build the thread pool class. Use the org.apache.camel.spi.ExecutorServiceManager instance from the current CamelContext to create the thread pool class. Ultimately, there is not much difference between the two approaches, because the ThreadPoolBuilder is actually defined using the ExecutorServiceManager instance. Normally, the ThreadPoolBuilder is preferred, because it offers a simpler approach. But there is at least one kind of thread (the ScheduledExecutorService ) that can only be created by accessing the ExecutorServiceManager instance directly. Table 2.10, "Thread Pool Builder Options" shows the options supported by the ThreadPoolBuilder class, which you can set when defining a new custom thread pool. Table 2.10. Thread Pool Builder Options Builder Option Description maxQueueSize() Sets the maximum number of pending tasks that this thread pool can store in its incoming task queue. A value of -1 specifies an unbounded queue. Default value is taken from default thread pool profile. 
poolSize() Sets the minimum number of threads in the pool (this is also the initial pool size). Default value is taken from default thread pool profile. maxPoolSize() Sets the maximum number of threads that can be in the pool. Default value is taken from default thread pool profile. keepAliveTime() If any threads are idle for longer than this period of time (specified in seconds), they are terminated. This allows the thread pool to shrink when the load is light. Default value is taken from default thread pool profile. rejectedPolicy() Specifies what course of action to take, if the incoming task queue is full. You can specify four possible values: CallerRuns (Default value) Gets the caller thread to run the latest incoming task. As a side effect, this option prevents the caller thread from receiving any more tasks until it has finished processing the latest incoming task. Abort Aborts the latest incoming task by throwing an exception. Discard Quietly discards the latest incoming task. DiscardOldest Discards the oldest unhandled task and then attempts to enqueue the latest incoming task in the task queue. build() Finishes building the custom thread pool and registers the new thread pool under the ID specified as the argument to build() . In Java DSL, you can define a custom thread pool using the ThreadPoolBuilder , as follows: Instead of passing the object reference, customPool , directly to the executorService() option, you can look up the thread pool in the registry, by passing its bean ID to the executorServiceRef() option, as follows: In XML DSL, you access the ThreadPoolBuilder using the threadPool element. You can then reference the custom thread pool using the executorServiceRef attribute to look up the thread pool by ID in the Spring registry, as follows: Creating a custom thread pool profile If you have many custom thread pool instances to create, you might find it more convenient to define a custom thread pool profile, which acts as a factory for thread pools. Whenever you reference a thread pool profile from a threading-aware processor, the processor automatically uses the profile to create a new thread pool instance. You can define a custom thread pool profile either in Java DSL or in XML DSL. For example, in Java DSL you can create a custom thread pool profile with the bean ID, customProfile , and reference it from within a route, as follows: In XML DSL, use the threadPoolProfile element to create a custom pool profile (where you let the defaultProfile option default to false , because this is not a default thread pool profile). You can create a custom thread pool profile with the bean ID, customProfile , and reference it from within a route, as follows: Sharing a thread pool between components Some of the standard poll-based components - such as File and FTP - allow you to specify the thread pool to use. This makes it possible for different components to share the same thread pool, reducing the overall number of threads in the JVM. For example, the see File2 in the Apache Camel Component Reference Guide . and the Ftp2 in the Apache Camel Component Reference Guide both expose the scheduledExecutorService property, which you can use to specify the component's ExecutorService object. Customizing thread names To make the application logs more readable, it is often a good idea to customize the thread names (which are used to identify threads in the log). 
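For reference, a minimal sketch of the ThreadPoolBuilder usage described earlier in this section; the pool sizes, the pool name customPool, and the endpoints are illustrative, and the code would normally live in a RouteBuilder's configure() method:

import java.util.concurrent.ExecutorService;
import org.apache.camel.builder.ThreadPoolBuilder;

// Build a custom thread pool from the current CamelContext
ThreadPoolBuilder poolBuilder = new ThreadPoolBuilder(getContext());
ExecutorService customPool = poolBuilder
    .poolSize(5)
    .maxPoolSize(5)
    .maxQueueSize(100)
    .build("customPool");

// Use the custom pool for concurrent multicast processing
from("direct:start")
    .multicast().executorService(customPool)
        .to("mock:first")
        .to("mock:second");

Returning to thread name customization: the thread name pattern mechanism is described next.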
To customize thread names, you can configure the thread name pattern by calling the setThreadNamePattern method on the ExecutorServiceStrategy class or the ExecutorServiceManager class. Alternatively, an easier way to set the thread name pattern is to set the threadNamePattern property on the CamelContext object. The following placeholders can be used in a thread name pattern: #camelId# The name of the current CamelContext . #counter# A unique thread identifier, implemented as an incrementing counter. #name# The regular Camel thread name. #longName# The long thread name - which can include endpoint parameters and so on. The following is a typical example of a thread name pattern: The following example shows how to set the threadNamePattern attribute on a Camel context using XML DSL: 2.9. Controlling Start-Up and Shutdown of Routes Overview By default, routes are automatically started when your Apache Camel application (as represented by the CamelContext instance) starts up and routes are automatically shut down when your Apache Camel application shuts down. For non-critical deployments, the details of the shutdown sequence are usually not very important. But in a production environment, it is often crucial that existing tasks should run to completion during shutdown, in order to avoid data loss. You typically also want to control the order in which routes shut down, so that dependencies are not violated (which would prevent existing tasks from running to completion). For this reason, Apache Camel provides a set of features to support graceful shutdown of applications. Graceful shutdown gives you full control over the stopping and starting of routes, enabling you to control the shutdown order of routes and enabling current tasks to run to completion. Setting the route ID It is good practice to assign a route ID to each of your routes. As well as making logging messages and management features more informative, the use of route IDs enables you to apply greater control over the stopping and starting of routes. For example, in the Java DSL, you can assign the route ID, myCustomerRouteId , to a route by invoking the routeId() command as follows: In the XML DSL, set the route element's id attribute, as follows: Disabling automatic start-up of routes By default, all of the routes that the CamelContext knows about at start time will be started automatically. If you want to control the start-up of a particular route manually, however, you might prefer to disable automatic start-up for that route. To control whether a Java DSL route starts up automatically, invoke the autoStartup command, either with a boolean argument ( true or false ) or a String argument ( true or false ). For example, you can disable automatic start-up of a route in the Java DSL, as follows: You can disable automatic start-up of a route in the XML DSL by setting the autoStartup attribute to false on the route element, as follows: Manually starting and stopping routes You can manually start or stop a route at any time in Java by invoking the startRoute() and stopRoute() methods on the CamelContext instance. For example, to start the route having the route ID, nonAuto , invoke the startRoute() method on the CamelContext instance, context , as follows: To stop the route having the route ID, nonAuto , invoke the stopRoute() method on the CamelContext instance, context , as follows: Startup order of routes By default, Apache Camel starts up routes in a non-deterministic order. 
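As a recap of the manual control described above, a Java sketch follows; the route ID nonAuto matches the text, while the endpoints are hypothetical:

// In the RouteBuilder: a route that is not started automatically
from("direct:special").routeId("nonAuto").autoStartup(false)
    .to("mock:special");

// Elsewhere in the application, using the CamelContext instance:
context.startRoute("nonAuto");
// ... later ...
context.stopRoute("nonAuto");

As noted above, by default routes start in a non-deterministic order.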
In some applications, however, it can be important to control the startup order. To control the startup order in the Java DSL, use the startupOrder() command, which takes a positive integer value as its argument. The route with the lowest integer value starts first, followed by the routes with successively higher startup order values. For example, the first two routes in the following example are linked together through the seda:buffer endpoint. You can ensure that the first route segment starts after the second route segment by assigning startup orders (2 and 1 respectively), as follows: Example 2.5. Startup Order in Java DSL Or in Spring XML, you can achieve the same effect by setting the route element's startupOrder attribute, as follows: Example 2.6. Startup Order in XML DSL Each route must be assigned a unique startup order value. You can choose any positive integer value that is less than 1000. Values of 1000 and over are reserved for Apache Camel, which automatically assigns these values to routes without an explicit startup value. For example, the last route in the preceding example would automatically be assigned the startup value, 1000 (so it starts up after the first two routes). Shutdown sequence When a CamelContext instance is shutting down, Apache Camel controls the shutdown sequence using a pluggable shutdown strategy . The default shutdown strategy implements the following shutdown sequence: Routes are shut down in the reverse of the start-up order. Normally, the shutdown strategy waits until the currently active exchanges have finshed processing. The treatment of running tasks is configurable, however. Overall, the shutdown sequence is bound by a timeout (default, 300 seconds). If the shutdown sequence exceeds this timeout, the shutdown strategy will force shutdown to occur, even if some tasks are still running. Shutdown order of routes Routes are shut down in the reverse of the start-up order. That is, when a start-up order is defined using the startupOrder() command (in Java DSL) or startupOrder attribute (in XML DSL), the first route to shut down is the route with the highest integer value assigned by the start-up order and the last route to shut down is the route with the lowest integer value assigned by the start-up order. For example, in Example 2.5, "Startup Order in Java DSL" , the first route segment to be shut down is the route with the ID, first , and the second route segment to be shut down is the route with the ID, second . This example illustrates a general rule, which you should observe when shutting down routes: the routes that expose externally-accessible consumer endpoints should be shut down first , because this helps to throttle the flow of messages through the rest of the route graph. Note Apache Camel also provides the option shutdownRoute(Defer) , which enables you to specify that a route must be amongst the last routes to shut down (overriding the start-up order value). But you should rarely ever need this option. This option was mainly needed as a workaround for earlier versions of Apache Camel (prior to 2.3), for which routes would shut down in the same order as the start-up order. Shutting down running tasks in a route If a route is still processing messages when the shutdown starts, the shutdown strategy normally waits until the currently active exchange has finished processing before shutting down the route. 
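For reference, a Java DSL sketch along the lines of Example 2.5; the endpoint URIs are illustrative:

// This route segment has startup order 2, so it starts second
from("jetty:http://fooserver:8080").routeId("first").startupOrder(2)
    .to("seda:buffer");

// This route segment has startup order 1, so it starts first
from("seda:buffer").routeId("second").startupOrder(1)
    .to("mock:result");

// A route without an explicit startup order is automatically assigned a value of 1000 or higher
from("direct:foo").to("mock:foo");

As noted above, during shutdown the strategy normally waits for the currently active exchange to finish.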
This behavior can be configured on each route using the shutdownRunningTask option, which can take either of the following values: ShutdownRunningTask.CompleteCurrentTaskOnly (Default) Usually, a route operates on just a single message at a time, so you can safely shut down the route after the current task has completed. ShutdownRunningTask.CompleteAllTasks Specify this option in order to shut down batch consumers gracefully. Some consumer endpoints (for example, File, FTP, Mail, iBATIS, and JPA) operate on a batch of messages at a time. For these endpoints, it is more appropriate to wait until all of the messages in the current batch have completed. For example, to shut down a File consumer endpoint gracefully, you should specify the CompleteAllTasks option, as shown in the following Java DSL fragment: The same route can be defined in the XML DSL as follows: Shutdown timeout The shutdown timeout has a default value of 300 seconds. You can change the value of the timeout by invoking the setTimeout() method on the shutdown strategy. For example, you can change the timeout value to 600 seconds, as follows: Integration with custom components If you are implementing a custom Apache Camel component (which also inherits from the org.apache.camel.Service interface), you can ensure that your custom code receives a shutdown notification by implementing the org.apache.camel.spi.ShutdownPrepared interface. This gives the component an opportunity execute custom code in preparation for shutdown. 2.9.1. RouteIdFactory Based on the consumer endpoints, you can add RouteIdFactory that can assign route ids with the logical names. For example, when using the routes with seda or direct components as route inputs, then you may want to use their names as the route id, such as, direct:foo- foo seda:bar- bar jms:orders- orders Instead of using auto-assigned names, you can use the NodeIdFactory that can assign logical names for routes. Also, you can use the context-path of route URL as the name. For example, execute the following to use the RouteIDFactory : Note It is possible to get the custom route id from rest endpoints. 2.10. Scheduled Route Policy 2.10.1. Overview of Scheduled Route Policies Overview A scheduled route policy can be used to trigger events that affect a route at runtime. In particular, the implementations that are currently available enable you to start, stop, suspend, or resume a route at any time (or times) specified by the policy. Scheduling tasks The scheduled route policies are capable of triggering the following kinds of event: Start a route - start the route at the time (or times) specified. This event only has an effect, if the route is currently in a stopped state, awaiting activation. Stop a route - stop the route at the time (or times) specified. This event only has an effect, if the route is currently active. Suspend a route - temporarily de-activate the consumer endpoint at the start of the route (as specified in from() ). The rest of the route is still active, but clients will not be able to send new messages into the route. Resume a route - re-activate the consumer endpoint at the start of the route, returning the route to a fully active state. Quartz component The Quartz component is a timer component based on Terracotta's Quartz , which is an open source implementation of a job scheduler. The Quartz component provides the underlying implementation for both the simple scheduled route policy and the cron scheduled route policy. 2.10.2. 
Simple Scheduled Route Policy Overview The simple scheduled route policy is a route policy that enables you to start, stop, suspend, and resume routes, where the timing of these events is defined by providing the time and date of an initial event and (optionally) by specifying a certain number of subsequent repititions. To define a simple scheduled route policy, create an instance of the following class: Dependency The simple scheduled route policy depends on the Quartz component, camel-quartz . For example, if you are using Maven as your build system, you would need to add a dependency on the camel-quartz artifact. Java DSL example Example 2.7, "Java DSL Example of Simple Scheduled Route" shows how to schedule a route to start up using the Java DSL. The initial start time, startTime , is defined to be 3 seconds after the current time. The policy is also configured to start the route a second time, 3 seconds after the initial start time, which is configured by setting routeStartRepeatCount to 1 and routeStartRepeatInterval to 3000 milliseconds. In Java DSL, you attach the route policy to the route by calling the routePolicy() DSL command in the route. Example 2.7. Java DSL Example of Simple Scheduled Route Note You can specify multiple policies on the route by calling routePolicy() with multiple arguments. XML DSL example Example 2.8, "XML DSL Example of Simple Scheduled Route" shows how to schedule a route to start up using the XML DSL. In XML DSL, you attach the route policy to the route by setting the routePolicyRef attribute on the route element. Example 2.8. XML DSL Example of Simple Scheduled Route Note You can specify multiple policies on the route by setting the value of routePolicyRef as a comma-separated list of bean IDs. Defining dates and times The initial times of the triggers used in the simple scheduled route policy are specified using the java.util.Date type.The most flexible way to define a Date instance is through the java.util.GregorianCalendar class. Use the convenient constructors and methods of the GregorianCalendar class to define a date and then obtain a Date instance by calling GregorianCalendar.getTime() . For example, to define the time and date for January 1, 2011 at noon, call a GregorianCalendar constructor as follows: The GregorianCalendar class also supports the definition of times in different time zones. By default, it uses the local time zone on your computer. Graceful shutdown When you configure a simple scheduled route policy to stop a route, the route stopping algorithm is automatically integrated with the graceful shutdown procedure (see Section 2.9, "Controlling Start-Up and Shutdown of Routes" ). This means that the task waits until the current exchange has finished processing before shutting down the route. You can set a timeout, however, that forces the route to stop after the specified time, irrespective of whether or not the route has finished processing the exchange. Logging Inflight Exchanges on Timeout If a graceful shutdown fails to shutdown cleanly within the given timeout period, then Apache Camel performs more aggressive shut down. It forces routes, threadpools etc to shutdown. After the timeout, Apache Camel logs information about the current inflight exchanges. It logs the origin of the exchange and current route of exchange. For example, the log below shows that there is one inflight exchange, that origins from route1 and is currently on the same route1 at the delay1 node. 
During graceful shutdown, If you enable the DEBUG logging level on org.apache.camel.impl.DefaultShutdownStrategy , then it logs the same inflight exchange information. If you do not want to see these logs, you can turn this off by setting the option logInflightExchangesOnTimeout to false. Scheduling tasks You can use a simple scheduled route policy to define one or more of the following scheduling tasks: Starting a route Stopping a route Suspending a route Resuming a route Starting a route The following table lists the parameters for scheduling one or more route starts. Parameter Type Default Description routeStartDate java.util.Date None Specifies the date and time when the route is started for the first time. routeStartRepeatCount int 0 When set to a non-zero value, specifies how many times the route should be started. routeStartRepeatInterval long 0 Specifies the time interval between starts, in units of milliseconds. Stopping a route The following table lists the parameters for scheduling one or more route stops. Parameter Type Default Description routeStopDate java.util.Date None Specifies the date and time when the route is stopped for the first time. routeStopRepeatCount int 0 When set to a non-zero value, specifies how many times the route should be stopped. routeStopRepeatInterval long 0 Specifies the time interval between stops, in units of milliseconds. routeStopGracePeriod int 10000 Specifies how long to wait for the current exchange to finish processing (grace period) before forcibly stopping the route. Set to 0 for an infinite grace period. routeStopTimeUnit long TimeUnit.MILLISECONDS Specifies the time unit of the grace period. Suspending a route The following table lists the parameters for scheduling the suspension of a route one or more times. Parameter Type Default Description routeSuspendDate java.util.Date None Specifies the date and time when the route is suspended for the first time. routeSuspendRepeatCount int 0 When set to a non-zero value, specifies how many times the route should be suspended. routeSuspendRepeatInterval long 0 Specifies the time interval between suspends, in units of milliseconds. Resuming a route The following table lists the parameters for scheduling the resumption of a route one or more times. Parameter Type Default Description routeResumeDate java.util.Date None Specifies the date and time when the route is resumed for the first time. routeResumeRepeatCount int 0 When set to a non-zero value, specifies how many times the route should be resumed. routeResumeRepeatInterval long 0 Specifies the time interval between resumes, in units of milliseconds. 2.10.3. Cron Scheduled Route Policy Overview The cron scheduled route policy is a route policy that enables you to start, stop, suspend, and resume routes, where the timing of these events is specified using cron expressions. To define a cron scheduled route policy, create an instance of the following class: Dependency The simple scheduled route policy depends on the Quartz component, camel-quartz . For example, if you are using Maven as your build system, you would need to add a dependency on the camel-quartz artifact. Java DSL example Example 2.9, "Java DSL Example of a Cron Scheduled Route" shows how to schedule a route to start up using the Java DSL. The policy is configured with the cron expression, \*/3 * * * * ? , which triggers a start event every 3 seconds. In Java DSL, you attach the route policy to the route by calling the routePolicy() DSL command in the route. Example 2.9. 
Java DSL Example of a Cron Scheduled Route Note You can specify multiple policies on the route by calling routePolicy() with multiple arguments. XML DSL example Example 2.10, "XML DSL Example of a Cron Scheduled Route" shows how to schedule a route to start up using the XML DSL. In XML DSL, you attach the route policy to the route by setting the routePolicyRef attribute on the route element. Example 2.10. XML DSL Example of a Cron Scheduled Route Note You can specify multiple policies on the route by setting the value of routePolicyRef as a comma-separated list of bean IDs. Defining cron expressions The cron expression syntax has its origins in the UNIX cron utility, which schedules jobs to run in the background on a UNIX system. A cron expression is effectively a syntax for wildcarding dates and times that enables you to specify either a single event or multiple events that recur periodically. A cron expression consists of 6 or 7 fields in the following order: The Year field is optional and usually omitted, unless you want to define an event that occurs once and once only. Each field consists of a mixture of literals and special characters. For example, the following cron expression specifies an event that fires once every day at midnight: The * character is a wildcard that matches every value of a field. Hence, the preceding expression matches every day of every month. The ? character is a dummy placeholder that means *ignore this field*. It always appears either in the DayOfMonth field or in the DayOfWeek field, because it is not logically consistent to specify both of these fields at the same time. For example, if you want to schedule an event that fires once a day, but only from Monday to Friday, use the following cron expression: Where the hyphen character specifies a range, MON-FRI . You can also use the forward slash character, / , to specify increments. For example, to specify that an event fires every 5 minutes, use the following cron expression: For a full explanation of the cron expression syntax, see the Wikipedia article on CRON expressions . Scheduling tasks You can use a cron scheduled route policy to define one or more of the following scheduling tasks: Starting a route Stopping a route Suspending a route Resuming a route Starting a route The following table lists the parameters for scheduling one or more route starts. Parameter Type Default Description routeStartString String None Specifies a cron expression that triggers one or more route start events. Stopping a route The following table lists the parameters for scheduling one or more route stops. Parameter Type Default Description routeStopTime String None Specifies a cron expression that triggers one or more route stop events. routeStopGracePeriod int 10000 Specifies how long to wait for the current exchange to finish processing (grace period) before forcibly stopping the route. Set to 0 for an infinite grace period. routeStopTimeUnit long TimeUnit.MILLISECONDS Specifies the time unit of the grace period. Suspending a route The following table lists the parameters for scheduling the suspension of a route one or more times. Parameter Type Default Description routeSuspendTime String None Specifies a cron expression that triggers one or more route suspend events. Resuming a route The following table lists the parameters for scheduling the resumption of a route one or more times. Parameter Type Default Description routeResumeTime String None Specifies a cron expression that triggers one or more route resume events. 
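For reference, a Java DSL sketch of a cron scheduled route policy along the lines of Example 2.9; the endpoints are hypothetical and camel-quartz must be on the classpath:

import org.apache.camel.routepolicy.quartz.CronScheduledRoutePolicy;

CronScheduledRoutePolicy policy = new CronScheduledRoutePolicy();
// Trigger a route start event every 3 seconds
policy.setRouteStartTime("*/3 * * * * ?");

from("direct:start")
    .routeId("cronScheduledRoute")
    .routePolicy(policy)
    .autoStartup(false)
    .to("mock:success");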
2.10.4. Route Policy Factory Using Route Policy Factory Available as of Camel 2.14 If you want to use a route policy for every route, you can use a org.apache.camel.spi.RoutePolicyFactory as a factory for creating a RoutePolicy instance for each route. This can be used when you want to use the same kind of route policy for every route. Then you need to only configure the factory once, and every route created will have the policy assigned. There is API on CamelContext to add a factory, as shown below: From XML DSL you only define a <bean> with the factory The factory contains the createRoutePolicy method for creating route policies. Note you can have as many route policy factories as you want. Just call the addRoutePolicyFactory again, or declare the other factories as <bean> in XML. 2.11. Reloading Camel Routes In Apache Camel 2.19 release, you can enable the live reload of your camel XML routes, which will trigger a reload, when you save the XML file from your editor. You can use this feature when using: Camel standalone with Camel Main class Camel Spring Boot From the camel:run maven plugin However, you can also enable this manually, by setting a ReloadStrategy on the CamelContext and by providing your own custom strategies. 2.12. Camel Maven Plugin The Camel Maven Plugin supports the following goals: camel:run - To run your Camel application camel:validate - To validate your source code for invalid Camel endpoint URIs camel:route-coverage - To report the coverage of your Camel routes after unit testing 2.12.1. camel:run The camel:run goal of the Camel Maven Plugin is used to run your Camel Spring configurations in a forked JVM from Maven. A good example application to get you started is the Spring Example. This makes it very easy to spin up and test your routing rules without having to write a main(... ) method; it also lets you create multiple jars to host different sets of routing rules and easily test them independently. The Camel Maven plugin compiles the source code in the maven project, then boots up a Spring ApplicationContext using the XML configuration files on the classpath at META-INF/spring/*.xml . If you want to boot up your Camel routes a little faster, you can try the camel:embedded instead. 2.12.1.1. Options The Camel Maven plugin run goal supports the following options which can be configured from the command line (use -D syntax), or defined in the pom.xml file in the <configuration> tag. Parameter Default Value Description duration -1 Sets the time duration (seconds) that the application runs for before terminating. A value ⇐ 0 will run forever. durationIdle -1 Sets the idle time duration (seconds) duration that the application can be idle for before terminating. A value ⇐ 0 will run forever. durationMaxMessages -1 Sets the duration of maximum number of messages that the application processes before terminating. logClasspath false Whether to log the classpath when starting 2.12.1.2. Running OSGi Blueprint The camel:run plugin also supports running a Blueprint application, and by default it scans for OSGi blueprint files in OSGI-INF/blueprint/*.xml . You would need to configure the camel:run plugin to use blueprint, by setting useBlueprint to true as shown below: <plugin> <groupId>org.jboss.redhat-fuse</groupId> <artifactId>camel-maven-plugin</artifactId> <configuration> <useBlueprint>true</useBlueprint> </configuration> </plugin> This allows you to boot up any Blueprint services you wish, regardless of whether they are Camel-related, or any other Blueprint. 
The camel:run goal can auto detect if camel-blueprint is on the classpath or there are blueprint XML files in the project, and therefore you no longer have to configure the useBlueprint option. 2.12.1.3. Using limited Blueprint container We use the Felix Connector project as the blueprint container. This project is not a full fledged blueprint container. For that you can use Apache Karaf or Apache ServiceMix. You can use the applicationContextUri configuration to specify an explicit blueprint XML file, such as: <plugin> <groupId>org.jboss.redhat-fuse</groupId> <artifactId>camel-maven-plugin</artifactId> <configuration> <useBlueprint>true</useBlueprint> <applicationContextUri>myBlueprint.xml</applicationContextUri> <!-- ConfigAdmin options which have been added since Camel 2.12.0 --> <configAdminPid>test</configAdminPid> <configAdminFileName>/user/test/etc/test.cfg</configAdminFileName> </configuration> </plugin> The applicationContextUri loads the file from the classpath, so in the example above the myBlueprint.xml file must be in the root of the classpath. The configAdminPid is the pid name which will be used as the pid name for configuration admin service when loading the persistence properties file. The configAdminFileName is the file name which will be used to load the configuration admin service properties file. 2.12.1.4. Running CDI The camel:run plugin also supports running a CDI application. This allows you to boot up any CDI services you wish, whether they are Camel-related, or any other CDI enabled services. You should add the CDI container of your choice (e.g. Weld or OpenWebBeans) to the dependencies of the camel-maven-plugin such as in this example. From the source of Camel you can run a CDI example as follows: 2.12.1.5. Logging the classpath You can configure whether the classpath should be logged when camel:run executes. You can enable this in the configuration using: <plugin> <groupId>org.jboss.redhat-fuse</groupId> <artifactId>camel-maven-plugin</artifactId> <configuration> <logClasspath>true</logClasspath> </configuration> </plugin> 2.12.1.6. Using live reload of XML files You can configure the plugin to scan for XML file changes and trigger a reload of the Camel routes which are contained in those XML files. <plugin> <groupId>org.jboss.redhat-fuse</groupId> <artifactId>camel-maven-plugin</artifactId> <configuration> <fileWatcherDirectory>src/main/resources/META-INF/spring</fileWatcherDirectory> </configuration> </plugin> Then the plugin watches this directory. This allows you to edit the source code from your editor and save the file, and have the running Camel application utilize those changes. Note that only the changes of Camel routes, for example, <routes> , or <route> which are supported. You cannot change Spring or OSGi Blueprint <bean> elements. 2.12.2. camel:validate For source code validation of the following Camel features: endpoint URIs simple expressions or predicates duplicate route ids Then you can run the camel:validate goal from the command line or from within your Java editor such as IDEA or Eclipse. You can also enable the plugin to automatic run as part of the build to catch these errors. <plugin> <groupId>org.jboss.redhat-fuse</groupId> <artifactId>camel-maven-plugin</artifactId> <executions> <execution> <phase>process-classes</phase> <goals> <goal>validate</goal> </goals> </execution> </executions> </plugin> The phase determines when the plugin runs. 
In the sample above the phase is process-classes which runs after the compilation of the main source code. The maven plugin can also be configured to validate the test source code, which means that the phase should be changed accordingly to process-test-classes as shown below: <plugin> <groupId>org.jboss.redhat-fuse</groupId> <artifactId>camel-maven-plugin</artifactId> <executions> <execution> <configuration> <includeTest>true</includeTest> </configuration> <phase>process-test-classes</phase> <goals> <goal>validate</goal> </goals> </execution> </executions> </plugin> 2.12.2.1. Running the goal on any Maven project You can also run the validate goal on any Maven project without having to add the plugin to the pom.xml file. Doing so requires to specify the plugin using its fully qualified name. For example to run the goal on the camel-example-cdi from Apache Camel, you can run which then runs and outputs the following: The validation passed, and 4 endpoints was validated. Now suppose we made a typo in one of the Camel endpoint URIs in the source code, such as: @Uri("timer:foo?period=5000") is changed to include a typo error in the period option @Uri("timer:foo?perid=5000") And when running the validate goal again reports the following: 2.12.2.2. Options The Camel Maven plugin validate goal supports the following options which can be configured from the command line (use -D syntax), or defined in the pom.xml file in the <configuration> tag. Parameter Default Value Description downloadVersion true Whether to allow downloading Camel catalog version from the internet. This is needed if the project uses a different Camel version than this plugin is using by default. failOnError false Whether to fail if invalid Camel endpoints were found. By default the plugin logs the errors at WARN level. logUnparseable false Whether to log endpoint URI which was un-parsable and therefore not possible to validate. includeJava true Whether to include Java files to be validated for invalid Camel endpoints. includeXml true Whether to include XML files to be validated for invalid Camel endpoints. includeTest false Whether to include test source code. includes To filter the names of java and xml files to only include files matching any of the given list of patterns (wildcard and regular expression). Multiple values can be separated by comma. excludes To filter the names of java and xml files to exclude files matching any of the given list of patterns (wildcard and regular expression). Multiple values can be separated by comma. ignoreUnknownComponent true Whether to ignore unknown components. ignoreIncapable true Whether to ignore incapable of parsing the endpoint URI or simple expression. ignoreLenientProperties true Whether to ignore components that uses lenient properties. When this is true, then the URI validation is stricter but would fail on properties that are not part of the component but in the URI because of using lenient properties. For example using the HTTP components to provide query parameters in the endpoint URI. ignoreDeprecated true Camel 2.23 Whether to ignore deprecated options being used in the endpoint URI. duplicateRouteId true Camel 2.20 Whether to validate for duplicate route ids. Route ids should be unique and if there are duplicates then Camel will fail to startup. directOrSedaPairCheck true Camel 2.23 Whether to validate direct/seda endpoints sending to non existing consumers. showAll false Whether to show all endpoints and simple expressions (both invalid and valid). 
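For illustration, the validate goal options can also be set in the plugin's <configuration> element in the pom.xml; the option values shown here are only examples:

<plugin>
  <groupId>org.jboss.redhat-fuse</groupId>
  <artifactId>camel-maven-plugin</artifactId>
  <configuration>
    <failOnError>true</failOnError>
    <includeTest>true</includeTest>
  </configuration>
</plugin>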
For example to turn off ignoring usage of deprecated options from the command line, you can run: Note that you must prefix the -D command argument with camel. , eg camel.ignoreDeprecated as the option name. 2.12.2.3. Validating Endpoints using include test If you have a Maven project then you can run the plugin to validate the endpoints in the unit test source code as well. You can pass in the options using -D style as shown: 2.12.3. camel:route-coverage For generating a report of the coverage of your Camel routes from unit testing. You can use this to know which parts of your Camel routes has been used or not. 2.12.3.1. Enabling route coverage You can enable route coverage while running unit tests either by: setting global JVM system property enabling for all test classes using @EnableRouteCoverage annotation per test class if using camel-test-spring module overriding isDumpRouteCoverage method per test class if using camel-test module 2.12.3.2. Enabling Route Coverage by using JVM system property You can turn on the JVM system property CamelTestRouteCoverage to enable route coverage for all tests cases. This can be done either in the configuration of the maven-surefire-plugin : <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-surefire-plugin</artifactId> <configuration> <systemPropertyVariables> <CamelTestRouteCoverage>true</CamelTestRouteCoverage> </systemPropertyVariables> </configuration> </plugin> Or from the command line when running tests: 2.12.3.3. Enabling via @EnableRouteCoverage annotation You can enable route coverage in the unit tests classes by adding the @EnableRouteCoverage annotation to the test class if you are testing using camel-test-spring : @RunWith(CamelSpringBootRunner.class) @SpringBootTest(classes = SampleCamelApplication.class) @EnableRouteCoverage public class FooApplicationTest { 2.12.3.4. Enabling via isDumpRouteCoverage method However if you are using camel-test and your unit tests are extending CamelTestSupport then you can turn on route coverage as shown: @Override public boolean isDumpRouteCoverage() { return true; } Routes that can be coveraged under RouteCoverage method must have an unique id assigned, in other words you cannot use anonymous routes. You do this using routeId in Java DSL: from("jms:queue:cheese").routeId("cheesy") .to("log:foo") ... And in XML DSL you just assign the route id via the id attribute <route id="cheesy"> <from uri="jms:queue:cheese"/> <to uri="log:foo"/> ... </route> 2.12.3.5. Generating route coverage report TO generate the route coverage report, run the unit test with: You can then run the goal to report the route coverage as follows: This reports which routes has missing route coverage with precise source code line reporting: Here we can see that the 2nd last line with to has 0 in the count column, and therefore is not covered. We can also see that this is one line 34 in the source code file, which is in the my-camel.xml XML file. 2.12.3.6. Options The Camel Maven plugin coverage goal supports the following options which can be configured from the command line (use -D syntax), or defined in the pom.xml file in the <configuration> tag. Parameter Default Value Description failOnError false Whether to fail if any of the routes does not have 100% coverage. includeTest false Whether to include test source code. includes To filter the names of java and xml files to only include files matching any of the given list of patterns (wildcard and regular expression). Multiple values can be separated by comma. 
excludes To filter the names of java and xml files to exclude files matching any of the given list of patterns (wildcard and regular expression). Multiple values can be separated by comma. anonymousRoutes false Whether to allow anonymous routes (routes without any route id assigned). By using route id's then its safer to match the route cover data with the route source code. Anonymous routes are less safe to use for route coverage as its harder to know exactly which route that was tested corresponds to which of the routes from the source code. 2.13. Running Apache Camel Standalone When you run camel as a standalone application, it provides the Main class that you can use to run the application and keep it running until the JVM terminates. You can find the MainListener class within the org.apache.camel.main Java package. Following are the components of the Main class: camel-core JAR in the org.apache.camel.Main class camel-spring JAR in the org.apache.camel.spring.Main class The following example shows how you can create and use the Main class from Camel: 2.14. OnCompletion Overview The OnCompletion DSL name is used to define an action that is to take place when a Unit of Work is completed. A Unit of Work is a Camel concept that encompasses an entire exchange. See Section 34.1, "Exchanges" . The onCompletion command has the following features: The scope of the OnCompletion command can be global or per route. A route scope overrides global scope. OnCompletion can be configured to be triggered on success for failure. The onWhen predicate can be used to only trigger the onCompletion in certain situations. You can define whether or not to use a thread pool, though the default is no thread pool. Route Only Scope for onCompletion When an onCompletion DSL is specified on an exchange, Camel spins off a new thread. This allows the original thread to continue without interference from the onCompletion task. A route will only support one onCompletion . In the following example, the onCompletion is triggered whether the exchange completes with success or failure. This is the default action. For XML the format is as follows: To trigger the onCompletion on failure, the onFailureOnly parameter can be used. Similarly, to trigger the onCompletion on success, use the onCompleteOnly parameter. For XML, onFailureOnly and onCompleteOnly are expressed as booleans on the onCompletion tag: Global Scope for onCompletion To define onCompletion for more than just one route: Using onWhen To trigger the onCompletion under certain circumstances, use the onWhen predicate. The following example will trigger the onCompletion when the body of the message contains the word Hello : Using onCompletion with or without a thread pool As of Camel 2.14, onCompletion will not use a thread pool by default. To force the use of a thread pool, either set an executorService or set parallelProcessing to true. For example, in Java DSL, use the following format: For XML the format is: Use the executorServiceRef option to refer to a specific thread pool: Run onCompletion before Consumer Sends Response onCompletion can be run in two modes: AfterConsumer - The default mode which runs after the consumer is finished BeforeConsumer - Runs before the consumer writes a response back to the callee. This allows onCompletion to modify the Exchange, such as adding special headers, or to log the Exchange as a response logger. 
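As a recap of the basic onCompletion usage described above, a minimal Java DSL sketch; the endpoints are hypothetical:

from("direct:start")
    .onCompletion()
        // this block is executed when the unit of work for the exchange completes
        .to("log:sync")
        .to("mock:sync")
    .end()
    .to("mock:result");

Running the onCompletion block before the consumer writes its response is shown next.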
For example, to add a created by header to the response, use modeBeforeConsumer() as shown below: For XML, set the mode attribute to BeforeConsumer : 2.15. Metrics Overview Available as of Camel 2.14 While Camel provides a lot of existing metrics integration with Codahale metrics has been added for Camel routes. This allows end users to seamless feed Camel routing information together with existing data they are gathering using Codahale metrics. To use the Codahale metrics you will need to: Add camel-metrics component Enable route metrics in XML or Java code Note that performance metrics are only usable if you have a way of displaying them; any kind of monitoring tooling which can integrate with JMX can be used, as the metrics are available over JMX. In addition, the actual data is 100% Codehale JSON. Metrics Route Policy Obtaining Codahale metrics for a single route can be accomplished by defining a MetricsRoutePolicy on a per route basis. From Java create an instance of MetricsRoutePolicy to be assigned as the route's policy. This is shown below: From XML DSL you define a <bean> which is specified as the route's policy; for example: Metrics Route Policy Factory This factory allows one to add a RoutePolicy for each route which exposes route utilization statistics using Codahale metrics. This factory can be used in Java and XML as the examples below demonstrate. From Java you just add the factory to the CamelContext as shown below: And from XML DSL you define a <bean> as follows: From Java code you can get hold of the com.codahale.metrics.MetricRegistry from the org.apache.camel.component.metrics.routepolicy.MetricsRegistryService as shown below: Options The MetricsRoutePolicyFactory and MetricsRoutePolicy supports the following options: Name Default Description durationUnit TimeUnit.MILLISECONDS The unit to use for duration in the metrics reporter or when dumping the statistics as json. jmxDomain org.apache.camel.metrics The JXM domain name. metricsRegistry Allow to use a shared com.codahale.metrics.MetricRegistry . If none is provided then Camel will create a shared instance used by the this CamelContext. prettyPrint false Whether to use pretty print when outputting statistics in json format. rateUnit TimeUnit.SECONDS The unit to use for rate in the metrics reporter or when dumping the statistics as json. useJmx false Whether to report fine grained statistics to JMX by using the com.codahale.metrics.JmxReporter . Notice that if JMX is enabled on CamelContext then a MetricsRegistryService mbean is enlisted under the services type in the JMX tree. That mbean has a single operation to output the statistics using json. Setting useJmx to true is only needed if you want fine grained mbeans per statistics type. 2.16. JMX Naming Overview Apache Camel allows you to customize the name of a CamelContext bean as it appears in JMX, by defining a management name pattern for it. For example, you can customize the name pattern of an XML CamelContext instance, as follows: If you do not explicitly set a name pattern for the CamelContext bean, Apache Camel reverts to a default naming strategy. Default naming strategy By default, the JMX name of a CamelContext bean deployed in an OSGi bundle is equal to the OSGi symbolic name of the bundle. For example, if the OSGi symbolic name is MyCamelBundle , the JMX name would be MyCamelBundle . In cases where there is more than one CamelContext in the bundle, the JMX name is disambiguated by adding a counter value as a suffix. 
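Returning briefly to the metrics route policy factory described in Section 2.15, a minimal Java sketch of enabling it for all routes (this assumes camel-metrics is on the classpath):

import org.apache.camel.component.metrics.routepolicy.MetricsRoutePolicyFactory;

// Expose route utilization statistics for every route using Codahale metrics
context.addRoutePolicyFactory(new MetricsRoutePolicyFactory());

As for JMX naming, the counter-suffix disambiguation just described is illustrated next.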
For example, if there are multiple Camel contexts in the MyCamelBundle bundle, the corresponding JMX MBeans are named as follows: Customizing the JMX naming strategy One drawback of the default naming strategy is that you cannot guarantee that a given CamelContext bean will have the same JMX name between runs. If you want to have greater consistency between runs, you can control the JMX name more precisely by defining a JMX name pattern for the CamelContext instances. Specifying a name pattern in Java To specify a name pattern on a CamelContext in Java, call the setNamePattern method, as follows: Specifying a name pattern in XML To specify a name pattern on a CamelContext in XML, set the managementNamePattern attribute on the camelContext element, as follows: Name pattern tokens You can construct a JMX name pattern by mixing literal text with any of the following tokens: Table 2.11. JMX Name Pattern Tokens Token Description #camelId# Value of the id attribute on the CamelContext bean. #name# Same as #camelId# . #counter# An incrementing counter (starting at 1 ). #bundleId# The OSGi bundle ID of the deployed bundle (OSGi only) . #symbolicName# The OSGi symbolic name (OSGi only) . #version# The OSGi bundle version (OSGi only) . Examples Here are some examples of JMX name patterns you could define using the supported tokens: Ambiguous names Because the customised naming pattern overrides the default naming strategy, it is possible to define ambiguous JMX MBean names using this approach. For example: In this case, Apache Camel would fail on start-up and report an MBean already exists exception. You should, therefore, take extra care to ensure that you do not define ambiguous name patterns. 2.17. Performance and Optimization Message copying The allowUseOriginalMessage option default setting is false , to cut down on copies being made of the original message when they are not needed. To enable the allowUseOriginalMessage option use the following commands: Set useOriginalMessage=true on any of the error handlers or on the onException element. In Java application code, set AllowUseOriginalMessage=true , then use the getOriginalMessage method. Note In Camel versions prior to 2.18, the default setting of allowUseOriginalMessage is true.
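A minimal Java sketch of the settings described above; the dead letter endpoint is hypothetical:

// On the CamelContext: keep a copy of the original message
// (disabled by default to avoid unnecessary message copying)
context.setAllowUseOriginalMessage(true);

// Inside a RouteBuilder: an error handler that redelivers using the original message
errorHandler(deadLetterChannel("jms:queue:dead").useOriginalMessage());

// In a Processor, the original message can then be retrieved from the unit of work:
// Message original = exchange.getUnitOfWork().getOriginalInMessage();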
[ "from(\"activemq:orderQueue\") .setHeader(\"BillingSystem\", xpath(\"/order/billingSystem\")) .to(\"activemq:billingQueue\");", "from(\"activemq:orderQueue\") .transform(constant(\"DummyBody\")) .to(\"activemq:billingQueue\");", "from(\"activemq:userdataQueue\") .to(ExchangePattern.InOut, \"velocity:file:AdressTemplate.vm\") .to(\"activemq:envelopeAddresses\");", "from(\"jetty:http://localhost:8080/foo\") .to(\"cxf:bean:addAccountDetails\") .to(\"cxf:bean:getCreditRating\") .to(\"cxf:bean:processTransaction\");", "from(\"jetty:http://localhost:8080/foo\") .pipeline(\"cxf:bean:addAccountDetails\", \"cxf:bean:getCreditRating\", \"cxf:bean:processTransaction\");", "from(\" URI1 \", \" URI2 \", \" URI3 \").to(\" DestinationUri \");", "from(\" URI1 \").from(\" URI2 \").from(\" URI3 \").to(\" DestinationUri \");", "from(\" URI1 \").to(\" DestinationUri \"); from(\" URI2 \").to(\" DestinationUri \"); from(\" URI3 \").to(\" DestinationUri \");", "from(\"activemq:Nyse\").to(\"direct:mergeTxns\"); from(\"activemq:Nasdaq\").to(\"direct:mergeTxns\"); from(\"direct:mergeTxns\").to(\"activemq:USTxn\");", "from(\"activemq:Nyse\").to(\"seda:mergeTxns\"); from(\"activemq:Nasdaq\").to(\"seda:mergeTxns\"); from(\"seda:mergeTxns\").to(\"activemq:USTxn\");", "from(\"activemq:Nyse\").to(\"vm:mergeTxns\"); from(\"activemq:Nasdaq\").to(\"vm:mergeTxns\");", "from(\"vm:mergeTxns\").to(\"activemq:USTxn\");", "from(\"jms:queue:creditRequests\") .pollEnrich(\"file:src/data/ratings?noop=true\", new GroupedExchangeAggregationStrategy()) .bean(new MergeCreditRequestAndRatings(), \"merge\") .to(\"jms:queue:reformattedRequests\");", "public class MergeCreditRequestAndRatings { public void merge(Exchange ex) { // Obtain the grouped exchange List<Exchange> list = ex.getProperty(Exchange.GROUPED_EXCHANGE, List.class); // Get the exchanges from the grouped exchange Exchange originalEx = list.get(0); Exchange ratingsEx = list.get(1); // Merge the exchanges } }", "// Java public class MyRouteBuilder extends RouteBuilder { public void configure() { onException(ValidationException.class) .to(\"activemq:validationFailed\"); from(\"seda:inputA\") .to(\"validation:foo/bar.xsd\", \"activemq:someQueue\"); from(\"seda:inputB\").to(\"direct:foo\") .to(\"rnc:mySchema.rnc\", \"activemq:anotherQueue\"); } }", "<beans xmlns=\"http://www.springframework.org/schema/beans\" xmlns:camel=\"http://camel.apache.org/schema/spring\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd\"> <camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <onException> <exception>com.mycompany.ValidationException</exception> <to uri=\"activemq:validationFailed\"/> </onException> <route> <from uri=\"seda:inputA\"/> <to uri=\"validation:foo/bar.xsd\"/> <to uri=\"activemq:someQueue\"/> </route> <route> <from uri=\"seda:inputB\"/> <to uri=\"rnc:mySchema.rnc\"/> <to uri=\"activemq:anotherQueue\"/> </route> </camelContext> </beans>", "onException(ValidationException.class).to(\"activemq:validationFailed\"); onException(java.io.IOException.class).to(\"activemq:ioExceptions\"); onException(Exception.class).to(\"activemq:exceptions\");", "<onException> <exception>com.mycompany.ValidationException</exception> <to uri=\"activemq:validationFailed\"/> </onException> <onException> <exception>java.io.IOException</exception> <to 
uri=\"activemq:ioExceptions\"/> </onException> <onException> <exception>java.lang.Exception</exception> <to uri=\"activemq:exceptions\"/> </onException>", "onException(ValidationException.class, BuesinessException.class) .to(\"activemq:validationFailed\");", "<onException> <exception>com.mycompany.ValidationException</exception> <exception>com.mycompany.BuesinessException</exception> <to uri=\"activemq:validationFailed\"/> </onException>", "<throwException exceptionType=\"java.lang.IllegalArgumentException\" message=\"USD{body}\"/>", "onException(ValidationException.class) .useOriginalMessage() .to(\"activemq:validationFailed\");", "<onException useOriginalMessage=\"true\"> <exception>com.mycompany.ValidationException</exception> <to uri=\"activemq:validationFailed\"/> </onException>", "onException(ValidationException.class) .maximumRedeliveries(6) .retryAttemptedLogLevel(org.apache.camel.LogginLevel.WARN) .to(\"activemq:validationFailed\");", "<onException useOriginalMessage=\"true\"> <exception>com.mycompany.ValidationException</exception> <redeliveryPolicy maximumRedeliveries=\"6\"/> <to uri=\"activemq:validationFailed\"/> </onException>", "<redeliveryPolicyProfile id=\"redelivPolicy\" maximumRedeliveries=\"6\" retryAttemptedLogLevel=\"WARN\"/> <onException useOriginalMessage=\"true\" redeliveryPolicyRef=\"redelivPolicy\"> <exception>com.mycompany.ValidationException</exception> <to uri=\"activemq:validationFailed\"/> </onException>", "// Java // Here we define onException() to catch MyUserException when // there is a header[user] on the exchange that is not null onException(MyUserException.class) .onWhen(header(\"user\").isNotNull()) .maximumRedeliveries(2) .to(ERROR_USER_QUEUE); // Here we define onException to catch MyUserException as a kind // of fallback when the above did not match. // Noitce: The order how we have defined these onException is // important as Camel will resolve in the same order as they // have been defined onException(MyUserException.class) .maximumRedeliveries(2) .to(ERROR_QUEUE);", "<redeliveryPolicyProfile id=\"twoRedeliveries\" maximumRedeliveries=\"2\"/> <onException redeliveryPolicyRef=\"twoRedeliveries\"> <exception>com.mycompany.MyUserException</exception> <onWhen> <simple>USD{header.user} != null</simple> </onWhen> <to uri=\"activemq:error_user_queue\"/> </onException> <onException redeliveryPolicyRef=\"twoRedeliveries\"> <exception>com.mycompany.MyUserException</exception> <to uri=\"activemq:error_queue\"/> </onException>", "onException(ValidationException.class) .handled(true) .to(\"activemq:validationFailed\");", "<onException> <exception>com.mycompany.ValidationException</exception> <handled> <constant>true</constant> </handled> <to uri=\"activemq:validationFailed\"/> </onException>", "onException(ValidationException.class) .continued(true);", "<onException> <exception>com.mycompany.ValidationException</exception> <continued> <constant>true</constant> </continued> </onException>", "// we catch MyFunctionalException and want to mark it as handled (= no failure returned to client) // but we want to return a fixed text response, so we transform OUT body as Sorry. 
onException(MyFunctionalException.class) .handled(true) .transform().constant(\"Sorry\");", "// we catch MyFunctionalException and want to mark it as handled (= no failure returned to client) // but we want to return a fixed text response, so we transform OUT body and return the exception message onException(MyFunctionalException.class) .handled(true) .transform(exceptionMessage());", "// we catch MyFunctionalException and want to mark it as handled (= no failure returned to client) // but we want to return a fixed text response, so we transform OUT body and return a nice message // using the simple language where we want insert the exception message onException(MyFunctionalException.class) .handled(true) .transform().simple(\"Error reported: USD{exception.message} - cannot process this message.\");", "<onException> <exception>com.mycompany.MyFunctionalException</exception> <handled> <constant>true</constant> </handled> <transform> <simple>Error reported: USD{exception.message} - cannot process this message.</simple> </transform> </onException>", "// Java from(\"direct:start\") .onException(OrderFailedException.class) .maximumRedeliveries(1) .handled(true) .beanRef(\"orderService\", \"orderFailed\") .to(\"mock:error\") .end() .beanRef(\"orderService\", \"handleOrder\") .to(\"mock:result\");", "<route errorHandlerRef=\"deadLetter\"> <from uri=\"direct:start\"/> <onException> <exception>com.mycompany.OrderFailedException</exception> <redeliveryPolicy maximumRedeliveries=\"1\"/> <handled> <constant>true</constant> </handled> <bean ref=\"orderService\" method=\"orderFailed\"/> <to uri=\"mock:error\"/> </onException> <bean ref=\"orderService\" method=\"handleOrder\"/> <to uri=\"mock:result\"/> </route>", "public class MyRouteBuilder extends RouteBuilder { public void configure() { errorHandler(deadLetterChannel(\"activemq:deadLetter\")); // The preceding error handler applies // to all of the following routes: from(\"activemq:orderQueue\") .to(\"pop3://[email protected]\"); from(\"file:src/data?noop=true\") .to(\"file:target/messages\"); // } }", "<beans xmlns=\"http://www.springframework.org/schema/beans\" xmlns:camel=\"http://camel.apache.org/schema/spring\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd\"> <camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <errorHandler type=\"DeadLetterChannel\" deadLetterUri=\"activemq:deadLetter\"/> <route> <from uri=\"activemq:orderQueue\"/> <to uri=\"pop3://[email protected]\"/> </route> <route> <from uri=\"file:src/data?noop=true\"/> <to uri=\"file:target/messages\"/> </route> </camelContext> </beans>", "from(\"direct:start\") .doTry() .process(new ProcessorFail()) .to(\"mock:result\") .doCatch(IOException.class, IllegalStateException.class) .to(\"mock:catch\") .doFinally() .to(\"mock:finally\") .end();", "<route> <from uri=\"direct:start\"/> <!-- here the try starts. its a try .. catch .. 
finally just as regular java code --> <doTry> <process ref=\"processorFail\"/> <to uri=\"mock:result\"/> <doCatch> <!-- catch multiple exceptions --> <exception>java.io.IOException</exception> <exception>java.lang.IllegalStateException</exception> <to uri=\"mock:catch\"/> </doCatch> <doFinally> <to uri=\"mock:finally\"/> </doFinally> </doTry> </route>", "from(\"direct:start\") .doTry() .process(new ProcessorFail()) .to(\"mock:result\") .doCatch(IOException.class) .to(\"mock:io\") // Rethrow the exception using a construct instead of handled(false) which is deprecated in a doTry/doCatch clause. .throwException(new IllegalArgumentException(\"Forced\")) .doCatch(Exception.class) // Catch all other exceptions. .to(\"mock:error\") .end();", ".process(exchange -> {throw exchange.getProperty(Exchange.EXCEPTION_CAUGHT, Exception.class);})", "<route> <from uri=\"direct:start\"/> <doTry> <process ref=\"processorFail\"/> <to uri=\"mock:result\"/> <doCatch> <to uri=\"mock:io\"/> <throwException message=\"Forced\" exceptionType=\"java.lang.IllegalArgumentException\"/> </doCatch> <doCatch> <!-- Catch all other exceptions. --> <exception>java.lang.Exception</exception> <to uri=\"mock:error\"/> </doCatch> </doTry> </route>", "from(\"direct:start\") .doTry() .process(new ProcessorFail()) .to(\"mock:result\") .doCatch(IOException.class, IllegalStateException.class) . onWhen (exceptionMessage().contains(\"Severe\")) .to(\"mock:catch\") .doCatch(CamelExchangeException.class) .to(\"mock:catchCamel\") .doFinally() .to(\"mock:finally\") .end();", "<route> <from uri=\"direct:start\"/> <doTry> <process ref=\"processorFail\"/> <to uri=\"mock:result\"/> <doCatch> <exception>java.io.IOException</exception> <exception>java.lang.IllegalStateException</exception> <onWhen> <simple>USD{exception.message} contains 'Severe'</simple> </onWhen> <to uri=\"mock:catch\"/> </doCatch> <doCatch> <exception>org.apache.camel.CamelExchangeException</exception> <to uri=\"mock:catchCamel\"/> </doCatch> <doFinally> <to uri=\"mock:finally\"/> </doFinally> </doTry> </route>", "from(\"direct:wayne-get-token\").setExchangePattern(ExchangePattern.InOut) .doTry() .to(\"https4://wayne-token-service\") .choice() .when().simple(\"USD{header.CamelHttpResponseCode} == '200'\") .convertBodyTo(String.class) .setHeader(\"wayne-token\").groovy(\"body.replaceAll('\\\"','')\") .log(\">> Wayne Token : USD{header.wayne-token}\") .endChoice() .doCatch(java.lang.Class (java.lang.Exception>) .log(\">> Exception\") .endDoTry(); from(\"direct:wayne-get-token\").setExchangePattern(ExchangePattern.InOut) .doTry() .to(\"https4://wayne-token-service\") .doCatch(Exception.class) .log(\">> Exception\") .endDoTry();", "<cxf:cxfEndpoint id=\"router\" address=\"http://localhost:9002/TestMessage\" wsdlURL=\"ship.wsdl\" endpointName=\"s:TestSoapEndpoint\" serviceName=\"s:TestService\" xmlns:s=\"http://test\"> <cxf:properties> <!-- enable sending the stack trace back to client; the default value is false--> <entry key=\"faultStackTraceEnabled\" value=\"true\" /> <entry key=\"dataFormat\" value=\"PAYLOAD\" /> </cxf:properties> </cxf:cxfEndpoint>", "<cxf:cxfEndpoint id=\"router\" address=\"http://localhost:9002/TestMessage\" wsdlURL=\"ship.wsdl\" endpointName=\"s:TestSoapEndpoint\" serviceName=\"s:TestService\" xmlns:s=\"http://test\"> <cxf:properties> <!-- enable to show the cause exception message and the default value is false --> <entry key=\"exceptionMessageCauseEnabled\" value=\"true\" /> <!-- enable to send the stack trace back to client, the default value is false--> 
<entry key=\"faultStackTraceEnabled\" value=\"true\" /> <entry key=\"dataFormat\" value=\"PAYLOAD\" /> </cxf:properties> </cxf:cxfEndpoint>", "from(\"file:data/inbound\") .bean(MyBeanProcessor.class, \"processBody\") .to(\"file:data/outbound\");", "MyBeanProcessor myBean = new MyBeanProcessor(); from(\"file:data/inbound\") .bean(myBean, \"processBody\") .to(\"file:data/outbound\"); from(\"activemq:inboundData\") .bean(myBean, \"processBody\") .to(\"activemq:outboundData\");", "from(\"file:data/inbound\") .bean(MyBeanProcessor.class, \"processBody(String,String)\") .to(\"file:data/outbound\");", "from(\"file:data/inbound\") .bean(MyBeanProcessor.class, \"processBody(*,*)\") .to(\"file:data/outbound\");", "from(\"file:data/inbound\") .bean(MyBeanProcessor.class, \"processBody(String, 'Sample string value', true, 7)\") .to(\"file:data/outbound\");", "from(\"file:data/inbound\") .bean(MyBeanProcessor.class, \"processBodyAndHeader(USD{body},USD{header.title})\") .to(\"file:data/outbound\");", "from(\"file:data/inbound\") .bean(MyBeanProcessor.class, \"processBodyAndAllHeaders(USD{body},USD{header})\") .to(\"file:data/outbound\");", "// Java package com.acme; public class MyBeanProcessor { public String processBody(String body) { // Do whatever you like to 'body' return newBody; } }", "// Java package com.acme; public class MyBeanProcessor { public void processExchange(Exchange exchange) { // Do whatever you like to 'exchange' exchange.getIn().setBody(\"Here is a new message body!\"); } }", "<beans ...> <bean id=\"myBeanId\" class=\"com.acme.MyBeanProcessor\"/> </beans>", "from(\"file:data/inbound\").beanRef(\"myBeanId\", \"processBody\").to(\"file:data/outbound\");", "<camelContext id=\"CamelContextID\" xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"file:data/inbound\"/> <bean ref=\"myBeanId\" method=\"processBody\"/> <to uri=\"file:data/outbound\"/> </route> </camelContext>", "<bean ref=\"myBeanId\" method=\"processBody\" cache=\"true\"/>", "from(\"file:data/inbound\").beanRef(\"myBeanId\", \"processBody\").to(\"file:data/outbound\");", "// Java import org.apache.camel.@BeanInject; public class MyRouteBuilder extends RouteBuilder { @BeanInject(\"myBeanId\") com.acme.MyBeanProcessor bean; public void configure() throws Exception { .. 
} }", "@BeanInject com.acme.MyBeanProcessor bean;", "// Java import org.apache.camel.*; public class MyBeanProcessor { public void processExchange( @Header(name=\"user\") String user, @Body String body, Exchange exchange ) { // Do whatever you like to 'exchange' exchange.getIn().setBody(body + \"UserName = \" + user); } }", "// Java import org.apache.camel.language.*; public class MyBeanProcessor { public void checkCredentials( @XPath(\"/credentials/username/text()\") String user, @XPath(\"/credentials/password/text()\") String pass ) { // Check the user/pass credentials } }", "// Java import org.apache.camel.language.*; public class MyBeanProcessor { public void processCorrelatedMsg( @Bean(\"myCorrIdGenerator\") String corrId, @Body String body ) { // Check the user/pass credentials } }", "<beans ...> <bean id=\"myCorrIdGenerator\" class=\"com.acme.MyIdGenerator\"/> </beans>", "// Java package com.acme; public class MyIdGenerator { private UserManager userManager; public String generate( @Header(name = \"user\") String user, @Body String payload ) throws Exception { User user = userManager.lookupUser(user); String userId = user.getPrimaryId(); String id = userId + generateHashCodeForPayload(payload); return id; } }", "// Java import org.apache.camel.*; public interface MyBeanProcessorIntf { void processExchange( @Header(name=\"user\") String user, @Body String body, Exchange exchange ); }", "// Java import org.apache.camel.*; public class MyBeanProcessor implements MyBeanProcessorIntf { public void processExchange( String user, // Inherits Header annotation String body, // Inherits Body annotation Exchange exchange ) { } }", "// Java public interface BeanIntf { void processBodyAndHeader(String body, String title); }", "// Java protected class BeanIntfImpl implements BeanIntf { void processBodyAndHeader(String body, String title) { } }", "from(\"file:data/inbound\") .bean(BeanIntfImpl.class, \"processBodyAndHeader(USD{body}, USD{header.title})\") .to(\"file:data/outbound\");", "// Java public final class MyStaticClass { private MyStaticClass() { } public static String changeSomething(String s) { if (\"Hello World\".equals(s)) { return \"Bye World\"; } return null; } public void doSomething() { // noop } }", "from(\"direct:a\") *.bean(MyStaticClass.class, \"changeSomething\")* .to(\"mock:a\");", "from(\"file:data/inbound\") .bean(org.fusesource.example.HelloWorldOsgiService.class, \"sayHello\") .to(\"file:data/outbound\");", "<to uri=\"bean:org.fusesource.example.HelloWorldOsgiService?method=sayHello\"/>", "org.apache.camel.builder.ExchangeBuilder", "// Java import org.apache.camel.Exchange; import org.apache.camel.builder.ExchangeBuilder; Exchange exch = ExchangeBuilder.anExchange(camelCtx) .withBody(\"Hello World!\") .withHeader(\"username\", \"jdoe\") .withHeader(\"password\", \"pass\") .build();", "from(\" SourceURL \").setBody(body().append(\" World!\")).to(\" TargetURL \");", "from(\" SourceURL \").unmarshal().serialization() . <FurtherProcessing> .to(\" TargetURL \");", "<camelContext id=\"serialization\" xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\" SourceURL \"/> <unmarshal> <serialization/> </unmarshal> <to uri=\" TargetURL \"/> </route> </camelContext>", "org.apache.camel.spi.DataFormat jaxb = new org.apache.camel.converter.jaxb.JaxbDataFormat(\" GeneratedPackageName \"); from(\" SourceURL \").unmarshal(jaxb) . 
<FurtherProcessing> .to(\" TargetURL \");", "<camelContext id=\"jaxb\" xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\" SourceURL \"/> <unmarshal> <jaxb prettyPrint=\"true\" contextPath=\" GeneratedPackageName \"/> </unmarshal> <to uri=\" TargetURL \"/> </route> </camelContext>", "from(\" SourceURL \").unmarshal().xmlBeans() . <FurtherProcessing> .to(\" TargetURL \");", "<camelContext id=\"xmlBeans\" xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\" SourceURL \"/> <unmarshal> <xmlBeans prettyPrint=\"true\"/> </unmarshal> <to uri=\" TargetURL \"/> </route> </camelContext>", "from(\" SourceURL \").unmarshal().xstream() . <FurtherProcessing> .to(\" TargetURL \");", "<beans ... > <bean id=\" jaxb \" class=\"org.apache.camel.processor.binding.DataFormatBinding\"> <constructor-arg ref=\"jaxbformat\"/> </bean> <bean id=\"jaxbformat\" class=\"org.apache.camel.model.dataformat.JaxbDataFormat\"> <property name=\"prettyPrint\" value=\"true\"/> <property name=\"contextPath\" value=\"org.apache.camel.example\"/> </bean> </beans>", "<beans ...> <camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\" binding:jaxb: activemq:orderQueue\"/> <to uri=\" binding:jaxb: activemq:otherQueue\"/> </route> </camelContext> </beans>", "<beans ... > <bean id=\" jaxbmq \" class=\"org.apache.camel.component.binding.BindingComponent\"> <constructor-arg ref=\"jaxb\"/> <constructor-arg value=\"activemq:foo.\"/> </bean> <bean id=\"jaxb\" class=\"org.apache.camel.processor.binding.DataFormatBinding\"> <constructor-arg ref=\"jaxbformat\"/> </bean> <bean id=\"jaxbformat\" class=\"org.apache.camel.model.dataformat.JaxbDataFormat\"> <property name=\"prettyPrint\" value=\"true\"/> <property name=\"contextPath\" value=\"org.apache.camel.example\"/> </bean> </beans>", "<beans ...> <camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\" jaxbmq: firstQueue\"/> <to uri=\" jaxbmq: otherQueue\"/> </route> </camelContext> </beans>", "<beans ...> <camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\" binding:jaxb:activemq:foo. firstQueue\"/> <to uri=\" binding:jaxb:activemq:foo. otherQueue\"/> </route> </camelContext> </beans>", "// Java package org.apache.camel.spi; import org.apache.camel.Processor; /** * Represents a <a href=\"http://camel.apache.org/binding.html\">Binding</a> or contract * which can be applied to an Endpoint; such as ensuring that a particular * <a href=\"http://camel.apache.org/data-format.html\">Data Format</a> is used on messages in and out of an endpoint. */ public interface Binding { /** * Returns a new {@link Processor} which is used by a producer on an endpoint to implement * the producer side binding before the message is sent to the underlying endpoint. */ Processor createProduceProcessor(); /** * Returns a new {@link Processor} which is used by a consumer on an endpoint to process the * message with the binding before its passed to the endpoint consumer producer. 
*/ Processor createConsumeProcessor(); }", "from(\"direct:start\").to(\"http://{{remote.host}}:{{remote.port}}\");", "Java properties file remote.host=myserver.com remote.port=8080", "Property placeholder settings (in Java properties file format) cool.end=mock:result cool.result=result cool.concat=mock:{{cool.result}} cool.start=direct:cool cool.showid=true cheese.end=mock:cheese cheese.quote=Camel rocks cheese.type=Gouda bean.foo=foo bean.bar=bar", "com/fusesource/cheese.properties,com/fusesource/bar.properties", "file:USD{karaf.home}/etc/foo.properties", "file:USD{env:SMX_HOME}/etc/foo.properties", "// Java import org.apache.camel.component.properties.PropertiesComponent; PropertiesComponent pc = new PropertiesComponent(); pc.setLocation(\"com/fusesource/cheese.properties,com/fusesource/bar.properties\"); context.addComponent(\"properties\", pc);", "<camelContext ...> <propertyPlaceholder id=\"properties\" location=\"com/fusesource/cheese.properties,com/fusesource/bar.properties\" /> </camelContext>", "AttributeName =\"{{ Key }}\"", "prop: AttributeName =\" Key \"", ".placeholder(\" OptionName \", \" Key \")", "from(\"{{cool.start}}\") .to(\"log:{{cool.start}}?showBodyType=false&showExchangeId={{cool.showid}}\") .to(\"mock:{{cool.result}}\");", "from(\"properties:{{cool.start}}\") .to(\"properties:log:{{cool.start}}?showBodyType=false&showExchangeId={{cool.showid}}\") .to(\"properties:mock:{{cool.result}}\");", "from(\"direct:start\").to(\"properties:{{bar.end}}?location=com/mycompany/bar.properties\");", "<camelContext id=\"camel\" xmlns=\"http://camel.apache.org/schema/spring\"> <propertyPlaceholder id=\"properties\" location=\"org/apache/camel/spring/jmx.properties\"/> <!-- we can use property placeholders when we define the JMX agent --> <jmxAgent id=\"agent\" registryPort=\"{{myjmx.port}}\" usePlatformMBeanServer=\"{{myjmx.usePlatform}}\" createConnector=\"true\" statisticsLevel=\"RoutesOnly\" /> <route> <from uri=\"seda:start\"/> <to uri=\"mock:result\"/> </route> </camelContext>", "<beans xmlns=\"http://www.springframework.org/schema/beans\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:prop=\"http://camel.apache.org/schema/placeholder\" ... > <bean id=\"illegal\" class=\"java.lang.IllegalArgumentException\"> <constructor-arg index=\"0\" value=\"Good grief!\"/> </bean> <camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <propertyPlaceholder id=\"properties\" location=\"classpath:org/apache/camel/component/properties/myprop.properties\" xmlns=\"http://camel.apache.org/schema/spring\"/> <route> <from uri=\"direct:start\"/> <multicast prop:stopOnException=\"stop.flag\" > <to uri=\"mock:a\"/> <throwException ref=\"damn\"/> <to uri=\"mock:b\"/> </multicast> </route> </camelContext> </beans>", "from(\"direct:start\") .multicast().placeholder(\"stopOnException\", \"stop.flag\") .to(\"mock:a\").throwException(new IllegalAccessException(\"Damn\")).to(\"mock:b\");", "from(\"direct:start\") .transform().simple(\"Hi USD{body} do you think USD{properties:cheese.quote}?\");", "from(\"direct:start\") .transform().simple(\"Hi USD{body} do you think USD{properties:cheese.quote:cheese is good}?\");", "from(\"direct:start\") .transform().simple(\"Hi USD{body}. 
USD{properties-location:com/mycompany/bar.properties:bar.quote}.\");", "stop=true", "<beans xmlns=\"http://www.springframework.org/schema/beans\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:prop=\"http://camel.apache.org/schema/placeholder\" xsi:schemaLocation=\" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd \"> <!-- Notice in the declaration above, we have defined the prop prefix as the Camel placeholder namespace --> <bean id=\"damn\" class=\"java.lang.IllegalArgumentException\"> <constructor-arg index=\"0\" value=\"Damn\"/> </bean> <camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <propertyPlaceholder id=\"properties\" location=\"classpath:org/apache/camel/component/properties/myprop.properties\" xmlns=\"http://camel.apache.org/schema/spring\"/> <route> <from uri=\"direct:start\"/> <!-- use prop namespace, to define a property placeholder, which maps to option stopOnException={{stop}} --> <multicast prop:stopOnException=\"stop\"> <to uri=\"mock:a\"/> <throwException ref=\"damn\"/> <to uri=\"mock:b\"/> </multicast> </route> </camelContext> </beans>", "<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:cm=\"http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.0.0\" xsi:schemaLocation=\" http://www.osgi.org/xmlns/blueprint/v1.0.0 https://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd\"> <!-- OSGI blueprint property placeholder --> <cm:property-placeholder id=\"myblueprint.placeholder\" persistent-id=\"camel.blueprint\"> <!-- list some properties for this test --> <cm:default-properties> <cm:property name=\"result\" value=\"mock:result\"/> </cm:default-properties> </cm:property-placeholder> <camelContext xmlns=\"http://camel.apache.org/schema/blueprint\"> <!-- in the route we can use {{ }} placeholders which will look up in blueprint, as Camel will auto detect the OSGi blueprint property placeholder and use it --> <route> <from uri=\"direct:start\"/> <to uri=\"mock:foo\"/> <to uri=\"{{result}}\"/> </route> </camelContext> </blueprint>", "<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:cm=\"http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.0.0\" xsi:schemaLocation=\" http://www.osgi.org/xmlns/blueprint/v1.0.0 \">https://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd\"> <!-- OSGI blueprint property placeholder --> <cm:property-placeholder id=\" myblueprint.placeholder \" persistent-id=\"camel.blueprint\"> <!-- list some properties for this test --> <cm:default-properties> <cm:property name=\"result\" value=\"mock:result\"/> </cm:default-properties> </cm:property-placeholder> <camelContext xmlns=\"http://camel.apache.org/schema/blueprint\"> <!-- using Camel properties component and refer to the blueprint property placeholder by its id --> <propertyPlaceholder id=\"properties\" location=\"blueprint:myblueprint.placeholder\"/> <!-- in the route we can use {{ }} placeholders which will lookup in blueprint --> <route> <from uri=\"direct:start\"/> <to uri=\"mock:foo\"/> <to uri=\"{{result}}\"/> </route> </camelContext> </blueprint>", "<propertyPlaceholder id=\"properties\" location=\"blueprint:myblueprint.placeholder,classpath:myproperties.properties\"/>", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <beans 
xmlns=\"http://www.springframework.org/schema/beans\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:ctx=\"http://www.springframework.org/schema/context\" xsi:schemaLocation=\" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd\"> <!-- Bridge Spring property placeholder with Camel --> <!-- Do not use <ctx:property-placeholder ... > at the same time --> <bean id=\"bridgePropertyPlaceholder\" class=\"org.apache.camel.spring.spi.BridgePropertyPlaceholderConfigurer\"> <property name=\"location\" value=\"classpath:org/apache/camel/component/properties/cheese.properties\"/> </bean> <!-- A bean that uses Spring property placeholder --> <!-- The USD{hi} is a spring property placeholder --> <bean id=\"hello\" class=\"org.apache.camel.component.properties.HelloBean\"> <property name=\"greeting\" value=\"USD{hi}\"/> </bean> <camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <!-- Use Camel's property placeholder {{ }} style --> <route> <from uri=\"direct:{{cool.bar}}\"/> <bean ref=\"hello\"/> <to uri=\"{{cool.end}}\"/> </route> </camelContext> </beans>", "parallelProcessing() executorService() executorServiceRef()", "@parallelProcessing @executorServiceRef", "parallelProcessing() executorService() executorServiceRef()", "@parallelProcessing @executorServiceRef", "parallelProcessing() executorService() executorServiceRef()", "@parallelProcessing @executorServiceRef", "parallelProcessing() executorService() executorServiceRef()", "@parallelProcessing @executorServiceRef", "executorService() executorServiceRef() poolSize() maxPoolSize() keepAliveTime() timeUnit() maxQueueSize() rejectedPolicy()", "@executorServiceRef @poolSize @maxPoolSize @keepAliveTime @timeUnit @maxQueueSize @rejectedPolicy", "wireTap(String uri, ExecutorService executorService) wireTap(String uri, String executorServiceRef)", "@executorServiceRef", "from(\"direct:start\") .multicast().parallelProcessing() .to(\"mock:first\") .to(\"mock:second\") .to(\"mock:third\");", "<camelContext id=\"camel\" xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:start\"/> <multicast parallelProcessing=\"true\"> <to uri=\"mock:first\"/> <to uri=\"mock:second\"/> <to uri=\"mock:third\"/> </multicast> </route> </camelContext>", "// Java import org.apache.camel.spi.ExecutorServiceManager; import org.apache.camel.spi.ThreadPoolProfile; ExecutorServiceManager manager = context.getExecutorServiceManager(); ThreadPoolProfile defaultProfile = manager.getDefaultThreadPoolProfile(); // Now, customize the profile settings. 
defaultProfile.setPoolSize(3); defaultProfile.setMaxQueueSize(100);", "<camelContext id=\"camel\" xmlns=\"http://camel.apache.org/schema/spring\"> <threadPoolProfile id=\"changedProfile\" defaultProfile=\"true\" poolSize=\"3\" maxQueueSize=\"100\"/> </camelContext>", "// Java import org.apache.camel.builder.ThreadPoolBuilder; import java.util.concurrent.ExecutorService; ThreadPoolBuilder poolBuilder = new ThreadPoolBuilder(context); ExecutorService customPool = poolBuilder.poolSize(5).maxPoolSize(5).maxQueueSize(100).build(\"customPool\"); from(\"direct:start\") .multicast().executorService(customPool) .to(\"mock:first\") .to(\"mock:second\") .to(\"mock:third\");", "// Java from(\"direct:start\") .multicast().executorServiceRef(\"customPool\") .to(\"mock:first\") .to(\"mock:second\") .to(\"mock:third\");", "<camelContext id=\"camel\" xmlns=\"http://camel.apache.org/schema/spring\"> <threadPool id=\"customPool\" poolSize=\"5\" maxPoolSize=\"5\" maxQueueSize=\"100\" /> <route> <from uri=\"direct:start\"/> <multicast executorServiceRef=\"customPool\"> <to uri=\"mock:first\"/> <to uri=\"mock:second\"/> <to uri=\"mock:third\"/> </multicast> </route> </camelContext>", "// Java import org.apache.camel.spi.ThreadPoolProfile; import org.apache.camel.impl.ThreadPoolProfileSupport; // Create the custom thread pool profile ThreadPoolProfile customProfile = new ThreadPoolProfileSupport(\"customProfile\"); customProfile.setPoolSize(5); customProfile.setMaxPoolSize(5); customProfile.setMaxQueueSize(100); context.getExecutorServiceManager().registerThreadPoolProfile(customProfile); // Reference the custom thread pool profile in a route from(\"direct:start\") .multicast().executorServiceRef(\"customProfile\") .to(\"mock:first\") .to(\"mock:second\") .to(\"mock:third\");", "<camelContext id=\"camel\" xmlns=\"http://camel.apache.org/schema/spring\"> <threadPoolProfile id=\"customProfile\" poolSize=\"5\" maxPoolSize=\"5\" maxQueueSize=\"100\" /> <route> <from uri=\"direct:start\"/> <multicast executorServiceRef=\"customProfile\"> <to uri=\"mock:first\"/> <to uri=\"mock:second\"/> <to uri=\"mock:third\"/> </multicast> </route> </camelContext>", "Camel (#camelId#) thread #counter# - #name#", "<camelContext xmlns=\"http://camel.apache.org/schema/spring\" threadNamePattern=\"Riding the thread #counter#\" > <route> <from uri=\"seda:start\"/> <to uri=\"log:result\"/> <to uri=\"mock:result\"/> </route> </camelContext>", "from(\" SourceURI \").routeId(\"myCustomRouteId\").process(...).to( TargetURI );", "<camelContext id=\" CamelContextID \" xmlns=\"http://camel.apache.org/schema/spring\"> <route id =\"myCustomRouteId\" > <from uri=\" SourceURI \"/> <process ref=\"someProcessorId\"/> <to uri=\" TargetURI \"/> </route> </camelContext>", "from(\" SourceURI \") .routeId(\"nonAuto\") .autoStartup(false) .to( TargetURI );", "<camelContext id=\" CamelContextID \" xmlns=\"http://camel.apache.org/schema/spring\"> <route id=\"nonAuto\" autoStartup=\"false\"> <from uri=\" SourceURI \"/> <to uri=\" TargetURI \"/> </route> </camelContext>", "// Java context.startRoute(\"nonAuto\");", "// Java context.stopRoute(\"nonAuto\");", "from(\"jetty:http://fooserver:8080\") .routeId(\"first\") .startupOrder(2) .to(\"seda:buffer\"); from(\"seda:buffer\") .routeId(\"second\") .startupOrder(1) .to(\"mock:result\"); // This route's startup order is unspecified from(\"jms:queue:foo\").to(\"jms:queue:bar\");", "<route id=\"first\" startupOrder=\"2\"> <from uri=\"jetty:http://fooserver:8080\"/> <to uri=\"seda:buffer\"/> </route> <route 
id=\"second\" startupOrder=\"1\"> <from uri=\"seda:buffer\"/> <to uri=\"mock:result\"/> </route> <!-- This route's startup order is unspecified --> <route> <from uri=\"jms:queue:foo\"/> <to uri=\"jms:queue:bar\"/> </route>", "// Java public void configure() throws Exception { from(\"file:target/pending\") .routeId(\"first\").startupOrder(2) . shutdownRunningTask(ShutdownRunningTask.CompleteAllTasks) .delay(1000).to(\"seda:foo\"); from(\"seda:foo\") .routeId(\"second\").startupOrder(1) .to(\"mock:bar\"); }", "<camelContext id=\"camel\" xmlns=\"http://camel.apache.org/schema/spring\"> <!-- let this route complete all its pending messages when asked to shut down --> <route id=\"first\" startupOrder=\"2\" shutdownRunningTask=\"CompleteAllTasks\" > <from uri=\"file:target/pending\"/> <delay><constant>1000</constant></delay> <to uri=\"seda:foo\"/> </route> <route id=\"second\" startupOrder=\"1\"> <from uri=\"seda:foo\"/> <to uri=\"mock:bar\"/> </route> </camelContext>", "// Java // context = CamelContext instance context.getShutdownStrategy().setTimeout(600);", "context.setNodeIdFactory(new RouteIdFactory());", "org.apache.camel.routepolicy.quartz.SimpleScheduledRoutePolicy", "// Java SimpleScheduledRoutePolicy policy = new SimpleScheduledRoutePolicy(); long startTime = System.currentTimeMillis() + 3000L; policy.setRouteStartDate(new Date(startTime)); policy.setRouteStartRepeatCount(1); policy.setRouteStartRepeatInterval(3000); from(\"direct:start\") .routeId(\"test\") . routePolicy(policy) .to(\"mock:success\");", "<bean id=\"date\" class=\"java.util.Data\"/> <bean id=\"startPolicy\" class=\"org.apache.camel.routepolicy.quartz.SimpleScheduledRoutePolicy\"> <property name=\"routeStartDate\" ref=\"date\"/> <property name=\"routeStartRepeatCount\" value=\"1\"/> <property name=\"routeStartRepeatInterval\" value=\"3000\"/> </bean> <camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route id=\"myroute\" routePolicyRef=\"startPolicy\" > <from uri=\"direct:start\"/> <to uri=\"mock:success\"/> </route> </camelContext>", "// Java import java.util.GregorianCalendar; import java.util.Calendar; GregorianCalendar gc = new GregorianCalendar( 2011, Calendar.JANUARY, 1, 12, // hourOfDay 0, // minutes 0 // seconds ); java.util.Date triggerDate = gc.getTime();", "2015-01-12 13:23:23,656 [- ShutdownTask] INFO DefaultShutdownStrategy - There are 1 inflight exchanges: InflightExchange: [exchangeId=ID-davsclaus-air-62213-1421065401253-0-3, fromRouteId=route1, routeId=route1, nodeId=delay1, elapsed=2007, duration=2017]", "context.getShutdownStrategegy().setLogInflightExchangesOnTimeout(false);", "org.apache.camel.routepolicy.quartz.CronScheduledRoutePolicy", "// Java CronScheduledRoutePolicy policy = new CronScheduledRoutePolicy(); policy.setRouteStartTime(\"*/3 * * * * ?\"); from(\"direct:start\") .routeId(\"test\") .routePolicy(policy) .to(\"mock:success\");;", "<bean id=\"date\" class=\"org.apache.camel.routepolicy.quartz.SimpleDate\"/> <bean id=\"startPolicy\" class=\"org.apache.camel.routepolicy.quartz.CronScheduledRoutePolicy\"> <property name=\"routeStartTime\" value=\"*/3 * * * * ?\"/> </bean> <camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route id=\"testRoute\" routePolicyRef=\"startPolicy\"> <from uri=\"direct:start\"/> <to uri=\"mock:success\"/> </route> </camelContext>", "Seconds Minutes Hours DayOfMonth Month DayOfWeek [Year]", "0 0 24 * * ?", "0 0 24 ? 
* MON-FRI", "0 0/5 * * * ?", "context.addRoutePolicyFactory(new MyRoutePolicyFactory());", "<bean id=\"myRoutePolicyFactory\" class=\"com.foo.MyRoutePolicyFactory\"/>", "/** * Creates a new {@link org.apache.camel.spi.RoutePolicy} which will be assigned to the given route. * * @param camelContext the camel context * @param routeId the route id * @param route the route definition * @return the created {@link org.apache.camel.spi.RoutePolicy}, or <tt>null</tt> to not use a policy for this route */ RoutePolicy createRoutePolicy(CamelContext camelContext, String routeId, RouteDefinition route);", "cd examples/camel-example-spring mvn camel:run", "<plugin> <groupId>org.jboss.redhat-fuse</groupId> <artifactId>camel-maven-plugin</artifactId> <configuration> <useBlueprint>true</useBlueprint> </configuration> </plugin>", "<plugin> <groupId>org.jboss.redhat-fuse</groupId> <artifactId>camel-maven-plugin</artifactId> <configuration> <useBlueprint>true</useBlueprint> <applicationContextUri>myBlueprint.xml</applicationContextUri> <!-- ConfigAdmin options which have been added since Camel 2.12.0 --> <configAdminPid>test</configAdminPid> <configAdminFileName>/user/test/etc/test.cfg</configAdminFileName> </configuration> </plugin>", "cd examples/camel-example-cdi mvn compile camel:run", "<plugin> <groupId>org.jboss.redhat-fuse</groupId> <artifactId>camel-maven-plugin</artifactId> <configuration> <logClasspath>true</logClasspath> </configuration> </plugin>", "<plugin> <groupId>org.jboss.redhat-fuse</groupId> <artifactId>camel-maven-plugin</artifactId> <configuration> <fileWatcherDirectory>src/main/resources/META-INF/spring</fileWatcherDirectory> </configuration> </plugin>", "mvn camel:validate", "<plugin> <groupId>org.jboss.redhat-fuse</groupId> <artifactId>camel-maven-plugin</artifactId> <executions> <execution> <phase>process-classes</phase> <goals> <goal>validate</goal> </goals> </execution> </executions> </plugin>", "<plugin> <groupId>org.jboss.redhat-fuse</groupId> <artifactId>camel-maven-plugin</artifactId> <executions> <execution> <configuration> <includeTest>true</includeTest> </configuration> <phase>process-test-classes</phase> <goals> <goal>validate</goal> </goals> </execution> </executions> </plugin>", "USDcd camel-example-cdi USDmvn org.apache.camel:camel-maven-plugin:2.20.0:validate", "[INFO] ------------------------------------------------------------------------ [INFO] Building Camel :: Example :: CDI 2.20.0 [INFO] ------------------------------------------------------------------------ [INFO] [INFO] --- camel-maven-plugin:2.20.0:validate (default-cli) @ camel-example-cdi --- [INFO] Endpoint validation success: (4 = passed, 0 = invalid, 0 = incapable, 0 = unknown components) [INFO] Simple validation success: (0 = passed, 0 = invalid) [INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------", "@Uri(\"timer:foo?period=5000\")", "@Uri(\"timer:foo?perid=5000\")", "[INFO] ------------------------------------------------------------------------ [INFO] Building Camel :: Example :: CDI 2.20.0 [INFO] ------------------------------------------------------------------------ [INFO] [INFO] --- camel-maven-plugin:2.20.0:validate (default-cli) @ camel-example-cdi --- [WARNING] Endpoint validation error at: org.apache.camel.example.cdi.MyRoutes(MyRoutes.java:32) timer:foo?perid=5000 perid Unknown option. 
Did you mean: [period] [WARNING] Endpoint validation error: (3 = passed, 1 = invalid, 0 = incapable, 0 = unknown components) [INFO] Simple validation success: (0 = passed, 0 = invalid) [INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------", "USDmvn camel:validate -Dcamel.ignoreDeprecated=false", "USDcd myproject USDmvn org.apache.camel:camel-maven-plugin:2.20.0:validate -DincludeTest=true", "<plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-surefire-plugin</artifactId> <configuration> <systemPropertyVariables> <CamelTestRouteCoverage>true</CamelTestRouteCoverage> </systemPropertyVariables> </configuration> </plugin>", "mvn clean test -DCamelTestRouteCoverage=true", "@RunWith(CamelSpringBootRunner.class) @SpringBootTest(classes = SampleCamelApplication.class) @EnableRouteCoverage public class FooApplicationTest {", "@Override public boolean isDumpRouteCoverage() { return true; }", "from(\"jms:queue:cheese\").routeId(\"cheesy\") .to(\"log:foo\")", "<route id=\"cheesy\"> <from uri=\"jms:queue:cheese\"/> <to uri=\"log:foo\"/> </route>", "mvn test", "mvn camel:route-coverage", "[INFO] --- camel-maven-plugin:2.21.0:route-coverage (default-cli) @ camel-example-spring-boot-xml --- [INFO] Discovered 1 routes [INFO] Route coverage summary: File: src/main/resources/my-camel.xml RouteId: hello Line # Count Route ------ ----- ----- 28 1 from 29 1 transform 32 1 filter 34 0 to 36 1 to Coverage: 4 out of 5 (80.0%)", "public class MainExample { private Main main; public static void main(String[] args) throws Exception { MainExample example = new MainExample(); example.boot(); } public void boot() throws Exception { // create a Main instance main = new Main(); // bind MyBean into the registry main.bind(\"foo\", new MyBean()); // add routes main.addRouteBuilder(new MyRouteBuilder()); // add event listener main.addMainListener(new Events()); // set the properties from a file main.setPropertyPlaceholderLocations(\"example.properties\"); // run until you terminate the JVM System.out.println(\"Starting Camel. Use ctrl + c to terminate the JVM.\\n\"); main.run(); } private static class MyRouteBuilder extends RouteBuilder { @Override public void configure() throws Exception { from(\"timer:foo?delay={{millisecs}}\") .process(new Processor() { public void process(Exchange exchange) throws Exception { System.out.println(\"Invoked timer at \" + new Date()); } }) .bean(\"foo\"); } } public static class MyBean { public void callMe() { System.out.println(\"MyBean.callMe method has been called\"); } } public static class Events extends MainListenerSupport { @Override public void afterStart(MainSupport main) { System.out.println(\"MainExample with Camel is now started!\"); } @Override public void beforeStop(MainSupport main) { System.out.println(\"MainExample with Camel is now being stopped!\"); } } }", "from(\"direct:start\") .onCompletion() // This route is invoked when the original route is complete. // This is similar to a completion callback. .to(\"log:sync\") .to(\"mock:sync\") // Must use end to denote the end of the onCompletion route. .end() // here the original route contiues .process(new MyProcessor()) .to(\"mock:result\");", "<route> <from uri=\"direct:start\"/> <!-- This onCompletion block is executed when the exchange is done being routed. --> <!-- This callback is always triggered even if the exchange fails. 
--> <onCompletion> <!-- This is similar to an after completion callback. --> <to uri=\"log:sync\"/> <to uri=\"mock:sync\"/> </onCompletion> <process ref=\"myProcessor\"/> <to uri=\"mock:result\"/> </route>", "from(\"direct:start\") // Here onCompletion is qualified to invoke only when the exchange fails (exception or FAULT body). .onCompletion().onFailureOnly() .to(\"log:sync\") .to(\"mock:sync\") // Must use end to denote the end of the onCompletion route. .end() // here the original route continues .process(new MyProcessor()) .to(\"mock:result\");", "<route> <from uri=\"direct:start\"/> <!-- this onCompletion block will only be executed when the exchange is done being routed --> <!-- this callback is only triggered when the exchange failed, as we have onFailure=true --> <onCompletion onFailureOnly=\"true\"> <to uri=\"log:sync\"/> <to uri=\"mock:sync\"/> </onCompletion> <process ref=\"myProcessor\"/> <to uri=\"mock:result\"/> </route>", "// define a global on completion that is invoked when the exchange is complete onCompletion().to(\"log:global\").to(\"mock:sync\"); from(\"direct:start\") .process(new MyProcessor()) .to(\"mock:result\");", "/from(\"direct:start\") .onCompletion().onWhen(body().contains(\"Hello\")) // this route is only invoked when the original route is complete as a kind // of completion callback. And also only if the onWhen predicate is true .to(\"log:sync\") .to(\"mock:sync\") // must use end to denote the end of the onCompletion route .end() // here the original route contiues .to(\"log:original\") .to(\"mock:result\");", "onCompletion().parallelProcessing() .to(\"mock:before\") .delay(1000) .setBody(simple(\"OnComplete:USD{body}\"));", "<onCompletion parallelProcessing=\"true\"> <to uri=\"before\"/> <delay><constant>1000</constant></delay> <setBody><simple>OnComplete:USD{body}<simple></setBody> </onCompletion>", "<onCompletion executorServiceRef=\"myThreadPool\" <to uri=\"before\"/> <delay><constant>1000</constant></delay> <setBody><simple>OnComplete:USD{body}</simple></setBody> </onCompletion>>", ".onCompletion().modeBeforeConsumer() .setHeader(\"createdBy\", constant(\"Someone\")) .end()", "<onCompletion mode=\"BeforeConsumer\"> <setHeader headerName=\"createdBy\"> <constant>Someone</constant> </setHeader> </onCompletion>", "from(\"file:src/data?noop=true\").routePolicy(new MetricsRoutePolicy()).to(\"jms:incomingOrders\");", "<bean id=\"policy\" class=\"org.apache.camel.component.metrics.routepolicy.MetricsRoutePolicy\"/> <camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route routePolicyRef=\"policy\"> <from uri=\"file:src/data?noop=true\"/> [...]", "context.addRoutePolicyFactory(new MetricsRoutePolicyFactory());", "<!-- use camel-metrics route policy to gather metrics for all routes --> <bean id=\"metricsRoutePolicyFactory\" class=\"org.apache.camel.component.metrics.routepolicy.MetricsRoutePolicyFactory\"/>", "MetricRegistryService registryService = context.hasService(MetricsRegistryService.class); if (registryService != null) { MetricsRegistry registry = registryService.getMetricsRegistry(); }", "<camelContext id=\"myCamel\" managementNamePattern=\"#name#\" > </camelContext>", "MyCamelBundle-1 MyCamelBundle-2 MyCamelBundle-3", "// Java context.getManagementNameStrategy().setNamePattern(\"#name#\");", "<camelContext id=\"myCamel\" managementNamePattern=\"#name#\">", "<camelContext id=\"fooContext\" managementNamePattern=\"FooApplication-#name#\"> </camelContext> <camelContext id=\"myCamel\" 
managementNamePattern=\"#bundleID#-#symbolicName#-#name#\"> </camelContext>", "<camelContext id=\"foo\" managementNamePattern=\" SameOldSameOld \"> ... </camelContext> <camelContext id=\"bar\" managementNamePattern=\" SameOldSameOld \"> ... </camelContext>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_development_guide/basicprinciples
Chapter 87. workbook
Chapter 87. workbook This chapter describes the commands under the workbook command. 87.1. workbook create Create new workbook. Usage: Table 87.1. Positional arguments Value Summary definition Workbook definition file Table 87.2. Command arguments Value Summary -h, --help Show this help message and exit --public With this flag workbook will be marked as "public". --namespace [NAMESPACE] Namespace to create the workbook within. Table 87.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 87.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 87.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 87.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 87.2. workbook definition show Show workbook definition. Usage: Table 87.7. Positional arguments Value Summary name Workbook name Table 87.8. Command arguments Value Summary -h, --help Show this help message and exit 87.3. workbook delete Delete workbook. Usage: Table 87.9. Positional arguments Value Summary workbook Name of workbook(s). Table 87.10. Command arguments Value Summary -h, --help Show this help message and exit --namespace [NAMESPACE] Namespace to delete the workbook(s) from. 87.4. workbook list List all workbooks. Usage: Table 87.11. Command arguments Value Summary -h, --help Show this help message and exit --marker [MARKER] The last execution uuid of the page, displays list of executions after "marker". --limit [LIMIT] Maximum number of entries to return in a single result. --sort_keys [SORT_KEYS] Comma-separated list of sort keys to sort results by. Default: created_at. Example: mistral execution-list --sort_keys=id,description --sort_dirs [SORT_DIRS] Comma-separated list of sort directions. default: asc. Example: mistral execution-list --sort_keys=id,description --sort_dirs=asc,desc --filter FILTERS Filters. can be repeated. Table 87.12. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 87.13. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 87.14. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 87.15. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 87.5. workbook show Show specific workbook. 
Usage: Table 87.16. Positional arguments Value Summary workbook Workbook name Table 87.17. Command arguments Value Summary -h, --help Show this help message and exit --namespace [NAMESPACE] Namespace to get the workbook from. Table 87.18. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 87.19. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 87.20. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 87.21. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 87.6. workbook update Update workbook. Usage: Table 87.22. Positional arguments Value Summary definition Workbook definition file Table 87.23. Command arguments Value Summary -h, --help Show this help message and exit --namespace [NAMESPACE] Namespace to update the workbook in. --public With this flag workbook will be marked as "public". Table 87.24. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 87.25. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 87.26. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 87.27. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 87.7. workbook validate Validate workbook. Usage: Table 87.28. Positional arguments Value Summary definition Workbook definition file Table 87.29. Command arguments Value Summary -h, --help Show this help message and exit Table 87.30. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 87.31. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 87.32. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 87.33. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
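As a usage sketch that is not part of the generated reference above, the following commands assume a workbook definition file named my_workbook.yaml whose name attribute is my_workbook, and an existing dev namespace; adjust these placeholders to your environment:

# Check the definition before publishing it, then create and inspect it.
openstack workbook validate my_workbook.yaml
openstack workbook create my_workbook.yaml --namespace dev
openstack workbook definition show my_workbook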
[ "openstack workbook create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--public] [--namespace [NAMESPACE]] definition", "openstack workbook definition show [-h] name", "openstack workbook delete [-h] [--namespace [NAMESPACE]] workbook [workbook ...]", "openstack workbook list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--marker [MARKER]] [--limit [LIMIT]] [--sort_keys [SORT_KEYS]] [--sort_dirs [SORT_DIRS]] [--filter FILTERS]", "openstack workbook show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--namespace [NAMESPACE]] workbook", "openstack workbook update [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--namespace [NAMESPACE]] [--public] definition", "openstack workbook validate [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] definition" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/command_line_interface_reference/workbook
Chapter 9. Detach volumes after non-graceful node shutdown
Chapter 9. Detach volumes after non-graceful node shutdown This feature allows drivers to automatically detach volumes when a node goes down non-gracefully. Important Detach CSI volumes after non-graceful node shutdown is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 9.1. Overview A graceful node shutdown occurs when the kubelet's node shutdown manager detects the upcoming node shutdown action. Non-graceful shutdowns occur when the kubelet does not detect a node shutdown action, which can occur because of system or hardware failures. Also, the kubelet may not detect a node shutdown action when the shutdown command does not trigger the Inhibitor Locks mechanism used by the kubelet on Linux, or because of a user error, for example, if the shutdownGracePeriod and shutdownGracePeriodCriticalPods details are not configured correctly for that node. With this feature, when a non-graceful node shutdown occurs, you can manually add an out-of-service taint on the node to allow volumes to automatically detach from the node. 9.2. Adding an out-of-service taint manually for automatic volume detachment Prerequisites Access to the cluster with cluster-admin privileges. Procedure To allow volumes to detach automatically from a node after a non-graceful node shutdown: After a node is detected as unhealthy, shut down the worker node. Ensure that the node is shut down by running the following command and checking the status: oc get node <node name> 1 1 <node name> = name of the non-gracefully shutdown node Important If the node is not completely shut down, do not proceed with tainting the node. If the node is still up and the taint is applied, filesystem corruption can occur. Taint the corresponding node object by running the following command: Important Tainting a node this way deletes all pods on that node. This also causes any pods that are backed by statefulsets to be evicted, and replacement pods to be created on a different node. oc adm taint node <node name> node.kubernetes.io/out-of-service=nodeshutdown:NoExecute 1 1 <node name> = name of the non-gracefully shutdown node After the taint is applied, the volumes detach from the shutdown node, allowing their disks to be attached to a different node. Example The resulting YAML file resembles the following: spec: taints: - effect: NoExecute key: node.kubernetes.io/out-of-service value: nodeshutdown Restart the node. Remove the taint.
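The last two steps do not show commands in this excerpt. A minimal sketch of removing the taint once the node has been restarted and is healthy again, assuming the taint was applied exactly as shown above (the trailing "-" tells oc to delete the taint rather than add it):
# Remove the out-of-service taint after the node is back and healthy
oc adm taint node <node name> node.kubernetes.io/out-of-service=nodeshutdown:NoExecute-
# Optionally confirm that the taint is gone
oc get node <node name> -o jsonpath='{.spec.taints}'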
[ "get node <node name> 1", "adm taint node <node name> node.kubernetes.io/out-of-service=nodeshutdown:NoExecute 1", "spec: taints: - effect: NoExecute key: node.kubernetes.io/out-of-service value: nodeshutdown" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/storage/ephemeral-storage-csi-vol-detach-non-graceful-shutdown
Chapter 1. Preparing to install on GCP
Chapter 1. Preparing to install on GCP 1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . 1.2. Requirements for installing OpenShift Container Platform on GCP Before installing OpenShift Container Platform on Google Cloud Platform (GCP), you must create a service account and configure a GCP project. See Configuring a GCP project for details about creating a project, enabling API services, configuring DNS, GCP account limits, and supported GCP regions. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, see Manually creating long-term credentials for GCP for other options. 1.3. Choosing a method to install OpenShift Container Platform on GCP You can install OpenShift Container Platform on installer-provisioned or user-provisioned infrastructure. The default installation type uses installer-provisioned infrastructure, where the installation program provisions the underlying infrastructure for the cluster. You can also install OpenShift Container Platform on infrastructure that you provision. If you do not use infrastructure that the installation program provisions, you must manage and maintain the cluster resources yourself. See Installation process for more information about installer-provisioned and user-provisioned installation processes. 1.3.1. Installing a cluster on installer-provisioned infrastructure You can install a cluster on GCP infrastructure that is provisioned by the OpenShift Container Platform installation program, by using one of the following methods: Installing a cluster quickly on GCP : You can install OpenShift Container Platform on GCP infrastructure that is provisioned by the OpenShift Container Platform installation program. You can install a cluster quickly by using the default configuration options. Installing a customized cluster on GCP : You can install a customized cluster on GCP infrastructure that the installation program provisions. The installation program allows for some customization to be applied at the installation stage. Many other customization options are available post-installation . Installing a cluster on GCP with network customizations : You can customize your OpenShift Container Platform network configuration during installation, so that your cluster can coexist with your existing IP address allocations and adhere to your network requirements. Installing a cluster on GCP in a restricted network : You can install OpenShift Container Platform on GCP on installer-provisioned infrastructure by using an internal mirror of the installation release content. You can use this method to install a cluster that does not require an active internet connection to obtain the software components. While you can install OpenShift Container Platform by using the mirrored content, your cluster still requires internet access to use the GCP APIs. Installing a cluster into an existing Virtual Private Cloud : You can install OpenShift Container Platform on an existing GCP Virtual Private Cloud (VPC). You can use this installation method if you have constraints set by the guidelines of your company, such as limits on creating new accounts or infrastructure. Installing a private cluster on an existing VPC : You can install a private cluster on an existing GCP VPC. 
You can use this method to deploy OpenShift Container Platform on an internal network that is not visible to the internet. 1.3.2. Installing a cluster on user-provisioned infrastructure You can install a cluster on GCP infrastructure that you provision, by using one of the following methods: Installing a cluster on GCP with user-provisioned infrastructure : You can install OpenShift Container Platform on GCP infrastructure that you provide. You can use the provided Deployment Manager templates to assist with the installation. Installing a cluster with shared VPC on user-provisioned infrastructure in GCP : You can use the provided Deployment Manager templates to create GCP resources in a shared VPC infrastructure. Installing a cluster on GCP in a restricted network with user-provisioned infrastructure : You can install OpenShift Container Platform on GCP in a restricted network with user-provisioned infrastructure. By creating an internal mirror of the installation release content, you can install a cluster that does not require an active internet connection to obtain the software components. You can also use this installation method to ensure that your clusters only use container images that satisfy your organizational controls on external content. 1.4. Next steps Configuring a GCP project
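Before moving on to Configuring a GCP project, the following gcloud sketch illustrates the kind of preparation the prerequisites refer to: creating a project, enabling API services, and creating a service account. The project ID, service account name, API subset, and role shown here are placeholders and examples only; the authoritative list of required APIs and roles is in Configuring a GCP project.
# Create and select a project (project ID is a placeholder)
gcloud projects create my-ocp-project
gcloud config set project my-ocp-project
# Enable a few of the required API services (illustrative subset)
gcloud services enable compute.googleapis.com dns.googleapis.com iam.googleapis.com cloudresourcemanager.googleapis.com
# Create a service account for the installer and grant it a role (role shown is an example)
gcloud iam service-accounts create ocp-installer --display-name "OpenShift installer"
gcloud projects add-iam-policy-binding my-ocp-project --member "serviceAccount:ocp-installer@my-ocp-project.iam.gserviceaccount.com" --role roles/owner
gcloud iam service-accounts keys create installer-key.json --iam-account ocp-installer@my-ocp-project.iam.gserviceaccount.com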
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_gcp/preparing-to-install-on-gcp
Chapter 18. KubeControllerManager [operator.openshift.io/v1]
Chapter 18. KubeControllerManager [operator.openshift.io/v1] Description KubeControllerManager provides information to configure an operator to manage kube-controller-manager. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 18.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec is the specification of the desired behavior of the Kubernetes Controller Manager status object status is the most recently observed status of the Kubernetes Controller Manager 18.1.1. .spec Description spec is the specification of the desired behavior of the Kubernetes Controller Manager Type object Property Type Description failedRevisionLimit integer failedRevisionLimit is the number of failed static pod installer revisions to keep on disk and in the api -1 = unlimited, 0 or unset = 5 (default) forceRedeploymentReason string forceRedeploymentReason can be used to force the redeployment of the operand by providing a unique string. This provides a mechanism to kick a previously failed deployment and provide a reason why you think it will work this time instead of failing again on the same config. logLevel string logLevel is an intent based logging for an overall component. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for their operands. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". managementState string managementState indicates whether and how the operator should manage the component observedConfig `` observedConfig holds a sparse config that controller has observed from the cluster state. It exists in spec because it is an input to the level for the operator operatorLogLevel string operatorLogLevel is an intent based logging for the operator itself. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for themselves. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". succeededRevisionLimit integer succeededRevisionLimit is the number of successful static pod installer revisions to keep on disk and in the api -1 = unlimited, 0 or unset = 5 (default) unsupportedConfigOverrides `` unsupportedConfigOverrides holds a sparse config that will override any previously set options. It only needs to be the fields to override it will end up overlaying in the following order: 1. hardcoded defaults 2. observedConfig 3. 
unsupportedConfigOverrides useMoreSecureServiceCA boolean useMoreSecureServiceCA indicates that the service-ca.crt provided in SA token volumes should include only enough certificates to validate service serving certificates. Once set to true, it cannot be set to false. Even if someone finds a way to set it back to false, the service-ca.crt files that previously existed will only have the more secure content. 18.1.2. .status Description status is the most recently observed status of the Kubernetes Controller Manager Type object Property Type Description conditions array conditions is a list of conditions and their status conditions[] object OperatorCondition is just the standard condition fields. generations array generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. generations[] object GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. latestAvailableRevision integer latestAvailableRevision is the deploymentID of the most recent deployment latestAvailableRevisionReason string latestAvailableRevisionReason describe the detailed reason for the most recent deployment nodeStatuses array nodeStatuses track the deployment values and errors across individual nodes nodeStatuses[] object NodeStatus provides information about the current state of a particular node managed by this operator. observedGeneration integer observedGeneration is the last generation change you've dealt with readyReplicas integer readyReplicas indicates how many replicas are ready and at the desired state version string version is the level this availability applies to 18.1.3. .status.conditions Description conditions is a list of conditions and their status Type array 18.1.4. .status.conditions[] Description OperatorCondition is just the standard condition fields. Type object Property Type Description lastTransitionTime string message string reason string status string type string 18.1.5. .status.generations Description generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. Type array 18.1.6. .status.generations[] Description GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. Type object Property Type Description group string group is the group of the thing you're tracking hash string hash is an optional field set for resources without generation that are content sensitive like secrets and configmaps lastGeneration integer lastGeneration is the last generation of the workload controller involved name string name is the name of the thing you're tracking namespace string namespace is where the thing you're tracking is resource string resource is the resource type of the thing you're tracking 18.1.7. .status.nodeStatuses Description nodeStatuses track the deployment values and errors across individual nodes Type array 18.1.8. .status.nodeStatuses[] Description NodeStatus provides information about the current state of a particular node managed by this operator. Type object Property Type Description currentRevision integer currentRevision is the generation of the most recently successful deployment lastFailedCount integer lastFailedCount is how often the installer pod of the last failed revision failed. lastFailedReason string lastFailedReason is a machine readable failure reason string. 
lastFailedRevision integer lastFailedRevision is the generation of the deployment we tried and failed to deploy. lastFailedRevisionErrors array (string) lastFailedRevisionErrors is a list of human readable errors during the failed deployment referenced in lastFailedRevision. lastFailedTime string lastFailedTime is the time the last failed revision failed the last time. lastFallbackCount integer lastFallbackCount is how often a fallback to a revision happened. nodeName string nodeName is the name of the node targetRevision integer targetRevision is the generation of the deployment we're trying to apply 18.2. API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1/kubecontrollermanagers DELETE : delete collection of KubeControllerManager GET : list objects of kind KubeControllerManager POST : create a KubeControllerManager /apis/operator.openshift.io/v1/kubecontrollermanagers/{name} DELETE : delete a KubeControllerManager GET : read the specified KubeControllerManager PATCH : partially update the specified KubeControllerManager PUT : replace the specified KubeControllerManager /apis/operator.openshift.io/v1/kubecontrollermanagers/{name}/status GET : read status of the specified KubeControllerManager PATCH : partially update status of the specified KubeControllerManager PUT : replace status of the specified KubeControllerManager 18.2.1. /apis/operator.openshift.io/v1/kubecontrollermanagers Table 18.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of KubeControllerManager Table 18.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. 
limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 18.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind KubeControllerManager Table 18.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 18.5. HTTP responses HTTP code Reponse body 200 - OK KubeControllerManagerList schema 401 - Unauthorized Empty HTTP method POST Description create a KubeControllerManager Table 18.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 18.7. Body parameters Parameter Type Description body KubeControllerManager schema Table 18.8. HTTP responses HTTP code Reponse body 200 - OK KubeControllerManager schema 201 - Created KubeControllerManager schema 202 - Accepted KubeControllerManager schema 401 - Unauthorized Empty 18.2.2. /apis/operator.openshift.io/v1/kubecontrollermanagers/{name} Table 18.9. Global path parameters Parameter Type Description name string name of the KubeControllerManager Table 18.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a KubeControllerManager Table 18.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. 
Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 18.12. Body parameters Parameter Type Description body DeleteOptions schema Table 18.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified KubeControllerManager Table 18.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 18.15. HTTP responses HTTP code Reponse body 200 - OK KubeControllerManager schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified KubeControllerManager Table 18.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 18.17. Body parameters Parameter Type Description body Patch schema Table 18.18. HTTP responses HTTP code Reponse body 200 - OK KubeControllerManager schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified KubeControllerManager Table 18.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. 
The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 18.20. Body parameters Parameter Type Description body KubeControllerManager schema Table 18.21. HTTP responses HTTP code Reponse body 200 - OK KubeControllerManager schema 201 - Created KubeControllerManager schema 401 - Unauthorized Empty 18.2.3. /apis/operator.openshift.io/v1/kubecontrollermanagers/{name}/status Table 18.22. Global path parameters Parameter Type Description name string name of the KubeControllerManager Table 18.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified KubeControllerManager Table 18.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 18.25. HTTP responses HTTP code Reponse body 200 - OK KubeControllerManager schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified KubeControllerManager Table 18.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. 
This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 18.27. Body parameters Parameter Type Description body Patch schema Table 18.28. HTTP responses HTTP code Reponse body 200 - OK KubeControllerManager schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified KubeControllerManager Table 18.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 18.30. Body parameters Parameter Type Description body KubeControllerManager schema Table 18.31. HTTP responses HTTP code Reponse body 200 - OK KubeControllerManager schema 201 - Created KubeControllerManager schema 401 - Unauthorized Empty
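In practice these endpoints are usually exercised through oc rather than called directly. The following sketch assumes the cluster-scoped instance is named cluster (the usual singleton name, although it is not stated in this reference) and that you are already logged in:
# Read the KubeControllerManager resource and its status
oc get kubecontrollermanager cluster -o yaml
# Patch the spec, for example to raise the operand log level
oc patch kubecontrollermanager cluster --type merge -p '{"spec":{"logLevel":"Debug"}}'
# The same list endpoint shown above, called directly against the API server
TOKEN=$(oc whoami -t)
API=$(oc whoami --show-server)
curl -sk -H "Authorization: Bearer $TOKEN" "$API/apis/operator.openshift.io/v1/kubecontrollermanagers"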
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/operator_apis/kubecontrollermanager-operator-openshift-io-v1
4.7. SELinux Contexts - Labeling Files
4.7. SELinux Contexts - Labeling Files On systems running SELinux, all processes and files are labeled in a way that represents security-relevant information. This information is called the SELinux context. For files, this is viewed using the ls -Z command: In this example, SELinux provides a user ( unconfined_u ), a role ( object_r ), a type ( user_home_t ), and a level ( s0 ). This information is used to make access control decisions. On DAC systems, access is controlled based on Linux user and group IDs. SELinux policy rules are checked after DAC rules. SELinux policy rules are not used if DAC rules deny access first. Note By default, newly-created files and directories inherit the SELinux type of their parent directories. For example, when creating a new file in the /etc directory that is labeled with the etc_t type, the new file inherits the same type: SELinux provides multiple commands for managing the file system labeling, such as chcon , semanage fcontext , restorecon , and matchpathcon . 4.7.1. Temporary Changes: chcon The chcon command changes the SELinux context for files. However, changes made with the chcon command are not persistent across file-system relabels, or the execution of the restorecon command. SELinux policy controls whether users are able to modify the SELinux context for any given file. When using chcon , users provide all or part of the SELinux context to change. An incorrect file type is a common cause of SELinux denying access. Quick Reference Run the chcon -t type file-name command to change the file type, where type is an SELinux type, such as httpd_sys_content_t , and file-name is a file or directory name: Run the chcon -R -t type directory-name command to change the type of the directory and its contents, where type is an SELinux type, such as httpd_sys_content_t , and directory-name is a directory name: Procedure 4.6. Changing a File's or Directory's Type The following procedure demonstrates changing the type, and no other attributes of the SELinux context. The example in this section works the same for directories, for example, if file1 was a directory. Change into your home directory. Create a new file and view its SELinux context: In this example, the SELinux context for file1 includes the SELinux unconfined_u user, object_r role, user_home_t type, and the s0 level. For a description of each part of the SELinux context, see Chapter 2, SELinux Contexts . Enter the following command to change the type to samba_share_t . The -t option only changes the type. Then view the change: Use the following command to restore the SELinux context for the file1 file. Use the -v option to view what changes: In this example, the type, samba_share_t , is restored to the correct, user_home_t type. When using targeted policy (the default SELinux policy in Red Hat Enterprise Linux), the restorecon command reads the files in the /etc/selinux/targeted/contexts/files/ directory, to see which SELinux context files should have. Procedure 4.7. Changing a Directory and its Contents Types The following example demonstrates creating a new directory, and changing the directory's file type along with its contents to a type used by the Apache HTTP Server. The configuration in this example is used if you want Apache HTTP Server to use a different document root (instead of /var/www/html/ ): As the root user, create a new web/ directory and then 3 empty files ( file1 , file2 , and file3 ) within this directory. 
The web/ directory and files in it are labeled with the default_t type: As root, enter the following command to change the type of the web/ directory (and its contents) to httpd_sys_content_t : To restore the default SELinux contexts, use the restorecon utility as root: See the chcon (1) manual page for further information about chcon . Note Type Enforcement is the main permission control used in SELinux targeted policy. For the most part, SELinux users and roles can be ignored. 4.7.2. Persistent Changes: semanage fcontext The semanage fcontext command is used to change the SELinux context of files. To show contexts to newly created files and directories, enter the following command as root: Changes made by semanage fcontext are used by the following utilities. The setfiles utility is used when a file system is relabeled and the restorecon utility restores the default SELinux contexts. This means that changes made by semanage fcontext are persistent, even if the file system is relabeled. SELinux policy controls whether users are able to modify the SELinux context for any given file. Quick Reference To make SELinux context changes that survive a file system relabel: Enter the following command, remembering to use the full path to the file or directory: Use the restorecon utility to apply the context changes: Use of regular expressions with semanage fcontext For the semanage fcontext command to work correctly, you can use either a fully qualified path or Perl-compatible regular expressions ( PCRE ) . The only PCRE flag in use is PCRE2_DOTALL , which causes the . wildcard to match anything, including a new line. Strings representing paths are processed as bytes, meaning that non-ASCII characters are not matched by a single wildcard. Note that file-context definitions specified using semanage fcontext are evaluated in reverse order to how they were defined: the latest entry is evaluated first regardless of the stem length. Local file context modifications stored in file_contexts.local have a higher priority than those specified in policy modules. This means that whenever a match for a given file path is found in file_contexts.local , no other file-context definitions are considered. Important File-context definitions specified using the semanage fcontext command effectively override all other file-context definitions. All regular expressions should therefore be as specific as possible to avoid unintentionally impacting other parts of the file system. For more information on a type of regular expression used in file-context definitions and flags in effect, see the semanage-fcontext(8) man page. Procedure 4.8. Changing a File's or Directory 's Type The following example demonstrates changing a file's type, and no other attributes of the SELinux context. This example works the same for directories, for instance if file1 was a directory. As the root user, create a new file in the /etc directory. By default, newly-created files in /etc are labeled with the etc_t type: To list information about a directory, use the following command: As root, enter the following command to change the file1 type to samba_share_t . The -a option adds a new record, and the -t option defines a type ( samba_share_t ). Note that running this command does not directly change the type; file1 is still labeled with the etc_t type: As root, use the restorecon utility to change the type. Because semanage added an entry to file_contexts.local for /etc/file1 , restorecon changes the type to samba_share_t : Procedure 4.9. 
Changing a Directory and its Contents Types The following example demonstrates creating a new directory, and changing the directory's file type along with its contents to a type used by Apache HTTP Server. The configuration in this example is used if you want Apache HTTP Server to use a different document root instead of /var/www/html/ : As the root user, create a new web/ directory and then 3 empty files ( file1 , file2 , and file3 ) within this directory. The web/ directory and files in it are labeled with the default_t type: As root, enter the following command to change the type of the web/ directory and the files in it, to httpd_sys_content_t . The -a option adds a new record, and the -t option defines a type ( httpd_sys_content_t ). The "/web(/.*)?" regular expression causes semanage to apply changes to web/ , as well as the files in it. Note that running this command does not directly change the type; web/ and files in it are still labeled with the default_t type: The semanage fcontext -a -t httpd_sys_content_t "/web(/.*)?" command adds the following entry to /etc/selinux/targeted/contexts/files/file_contexts.local : As root, use the restorecon utility to change the type of web/ , as well as all files in it. The -R is for recursive, which means all files and directories under web/ are labeled with the httpd_sys_content_t type. Since semanage added an entry to file_contexts.local for /web(/.*)? , restorecon changes the types to httpd_sys_content_t : Note that by default, newly-created files and directories inherit the SELinux type of their parent directories. Procedure 4.10. Deleting an added Context The following example demonstrates adding and removing an SELinux context. If the context is part of a regular expression, for example, /web(/.*)? , use quotation marks around the regular expression: To remove the context, as root, enter the following command, where file-name | directory-name is the first part in file_contexts.local : The following is an example of a context in file_contexts.local : With the first part being test . To prevent the test/ directory from being labeled with the httpd_sys_content_t type after running restorecon , or after a file system relabel, enter the following command as root to delete the context from file_contexts.local : As root, use the restorecon utility to restore the default SELinux context. For further information about semanage , see the semanage (8) and semanage-fcontext (8) manual pages. Important When changing the SELinux context with semanage fcontext -a , use the full path to the file or directory to avoid files being mislabeled after a file system relabel, or after the restorecon command is run. 4.7.3. How File Context is Determined Determining file context is based on file-context definitions, which are specified in the system security policy (the .fc files). Based on the system policy, semanage generates file_contexts.homedirs and file_contexts files. System administrators can customize file-context definitions using the semanage fcontext command. Such customizations are stored in the file_contexts.local file. When a labeling utility, such as matchpathcon or restorecon , is determining the proper label for a given path, it searches for local changes first ( file_contexts.local ). If the utility does not find a matching pattern, it searches the file_contexts.homedirs file and finally the file_contexts file.
However, whenever a match for a given file path is found, the search ends and the utility does not look for any additional file-context definitions. This means that home directory-related file contexts have higher priority than the rest, and local customizations override the system policy. File-context definitions specified by system policy (contents of file_contexts.homedirs and file_contexts files) are sorted by the length of the stem (prefix of the path before any wildcard) before evaluation. This means that the most specific path is chosen. However, file-context definitions specified using semanage fcontext are evaluated in reverse order to how they were defined: the latest entry is evaluated first regardless of the stem length. For more information on: changing the context of a file by using chcon , see Section 4.7.1, "Temporary Changes: chcon" . changing and adding a file-context definition by using semanage fcontext , see Section 4.7.2, "Persistent Changes: semanage fcontext" . changing and adding a file-context definition through a system-policy operation, see Section 4.10, "Maintaining SELinux Labels" or Section 4.12, "Prioritizing and Disabling SELinux Policy Modules" .
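To see how these sources combine for a particular path, the labeling utilities mentioned above can be queried directly. A short sketch, reusing the /web example paths from the procedures above:
# List only the local customizations added with semanage fcontext -a
semanage fcontext -C -l
# Ask the policy which context a path should have, without changing anything
matchpathcon /web /web/file1
# Report paths whose on-disk label does not match what the policy expects
matchpathcon -V /web/file1
# Preview what restorecon would relabel, without applying the changes
restorecon -R -n -v /web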
[ "~]USD ls -Z file1 -rw-rw-r-- user1 group1 unconfined_u:object_r:user_home_t:s0 file1", "~]USD ls -dZ - /etc drwxr-xr-x. root root system_u:object_r: etc_t :s0 /etc", "~]# touch /etc/file1", "~]# ls -lZ /etc/file1 -rw-r--r--. root root unconfined_u:object_r: etc_t :s0 /etc/file1", "~]USD chcon -t httpd_sys_content_t file-name", "~]USD chcon -R -t httpd_sys_content_t directory-name", "~]USD touch file1", "~]USD ls -Z file1 -rw-rw-r-- user1 group1 unconfined_u:object_r:user_home_t:s0 file1", "~]USD chcon -t samba_share_t file1", "~]USD ls -Z file1 -rw-rw-r-- user1 group1 unconfined_u:object_r:samba_share_t:s0 file1", "~]USD restorecon -v file1 restorecon reset file1 context unconfined_u:object_r:samba_share_t:s0->system_u:object_r:user_home_t:s0", "~]# mkdir /web", "~]# touch /web/file{1,2,3}", "~]# ls -dZ /web drwxr-xr-x root root unconfined_u:object_r:default_t:s0 /web", "~]# ls -lZ /web -rw-r--r-- root root unconfined_u:object_r:default_t:s0 file1 -rw-r--r-- root root unconfined_u:object_r:default_t:s0 file2 -rw-r--r-- root root unconfined_u:object_r:default_t:s0 file3", "~]# chcon -R -t httpd_sys_content_t /web/", "~]# ls -dZ /web/ drwxr-xr-x root root unconfined_u:object_r:httpd_sys_content_t:s0 /web/", "~]# ls -lZ /web/ -rw-r--r-- root root unconfined_u:object_r:httpd_sys_content_t:s0 file1 -rw-r--r-- root root unconfined_u:object_r:httpd_sys_content_t:s0 file2 -rw-r--r-- root root unconfined_u:object_r:httpd_sys_content_t:s0 file3", "~]# restorecon -R -v /web/ restorecon reset /web context unconfined_u:object_r:httpd_sys_content_t:s0->system_u:object_r:default_t:s0 restorecon reset /web/file2 context unconfined_u:object_r:httpd_sys_content_t:s0->system_u:object_r:default_t:s0 restorecon reset /web/file3 context unconfined_u:object_r:httpd_sys_content_t:s0->system_u:object_r:default_t:s0 restorecon reset /web/file1 context unconfined_u:object_r:httpd_sys_content_t:s0->system_u:object_r:default_t:s0", "~]# semanage fcontext -C -l", "~]# semanage fcontext -a options file-name | directory-name", "~]# restorecon -v file-name | directory-name", "~]# touch /etc/file1", "~]USD ls -Z /etc/file1 -rw-r--r-- root root unconfined_u:object_r:etc_t:s0 /etc/file1", "~]USD ls -dZ directory_name", "~]# semanage fcontext -a -t samba_share_t /etc/file1", "~]# ls -Z /etc/file1 -rw-r--r-- root root unconfined_u:object_r:etc_t:s0 /etc/file1", "~]USD semanage fcontext -C -l /etc/file1 unconfined_u:object_r:samba_share_t:s0", "~]# restorecon -v /etc/file1 restorecon reset /etc/file1 context unconfined_u:object_r:etc_t:s0->system_u:object_r:samba_share_t:s0", "~]# mkdir /web", "~]# touch /web/file{1,2,3}", "~]# ls -dZ /web drwxr-xr-x root root unconfined_u:object_r:default_t:s0 /web", "~]# ls -lZ /web -rw-r--r-- root root unconfined_u:object_r:default_t:s0 file1 -rw-r--r-- root root unconfined_u:object_r:default_t:s0 file2 -rw-r--r-- root root unconfined_u:object_r:default_t:s0 file3", "~]# semanage fcontext -a -t httpd_sys_content_t \"/web(/.*)?\"", "~]USD ls -dZ /web drwxr-xr-x root root unconfined_u:object_r:default_t:s0 /web", "~]USD ls -lZ /web -rw-r--r-- root root unconfined_u:object_r:default_t:s0 file1 -rw-r--r-- root root unconfined_u:object_r:default_t:s0 file2 -rw-r--r-- root root unconfined_u:object_r:default_t:s0 file3", "/web(/.*)? 
system_u:object_r:httpd_sys_content_t:s0", "~]# restorecon -R -v /web restorecon reset /web context unconfined_u:object_r:default_t:s0->system_u:object_r:httpd_sys_content_t:s0 restorecon reset /web/file2 context unconfined_u:object_r:default_t:s0->system_u:object_r:httpd_sys_content_t:s0 restorecon reset /web/file3 context unconfined_u:object_r:default_t:s0->system_u:object_r:httpd_sys_content_t:s0 restorecon reset /web/file1 context unconfined_u:object_r:default_t:s0->system_u:object_r:httpd_sys_content_t:s0", "~]# semanage fcontext -d \"/web(/.*)?\"", "~]# semanage fcontext -d file-name | directory-name", "/test system_u:object_r:httpd_sys_content_t:s0", "~]# semanage fcontext -d /test" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/selinux_users_and_administrators_guide/sect-security-enhanced_linux-working_with_selinux-selinux_contexts_labeling_files
Chapter 31. Integrating non-corosync nodes into a cluster: the pacemaker_remote service
Chapter 31. Integrating non-corosync nodes into a cluster: the pacemaker_remote service The pacemaker_remote service allows nodes not running corosync to integrate into the cluster and have the cluster manage their resources just as if they were real cluster nodes. Among the capabilities that the pacemaker_remote service provides are the following: The pacemaker_remote service allows you to scale beyond the Red Hat support limit of 32 nodes. The pacemaker_remote service allows you to manage a virtual environment as a cluster resource and also to manage individual services within the virtual environment as cluster resources. The following terms are used to describe the pacemaker_remote service. cluster node - A node running the High Availability services ( pacemaker and corosync ). remote node - A node running pacemaker_remote to remotely integrate into the cluster without requiring corosync cluster membership. A remote node is configured as a cluster resource that uses the ocf:pacemaker:remote resource agent. guest node - A virtual guest node running the pacemaker_remote service. The virtual guest resource is managed by the cluster; it is both started by the cluster and integrated into the cluster as a remote node. pacemaker_remote - A service daemon capable of performing remote application management within remote nodes and KVM guest nodes in a Pacemaker cluster environment. This service is an enhanced version of Pacemaker's local executor daemon ( pacemaker-execd ) that is capable of managing resources remotely on a node not running corosync. A Pacemaker cluster running the pacemaker_remote service has the following characteristics. Remote nodes and guest nodes run the pacemaker_remote service (with very little configuration required on the virtual machine side). The cluster stack ( pacemaker and corosync ), running on the cluster nodes, connects to the pacemaker_remote service on the remote nodes, allowing them to integrate into the cluster. The cluster stack ( pacemaker and corosync ), running on the cluster nodes, launches the guest nodes and immediately connects to the pacemaker_remote service on the guest nodes, allowing them to integrate into the cluster. The key difference between the cluster nodes and the remote and guest nodes that the cluster nodes manage is that the remote and guest nodes are not running the cluster stack. This means the remote and guest nodes have the following limitations: they do not take part in quorum they do not execute fencing device actions they are not eligible to be the cluster's Designated Controller (DC) they do not themselves run the full range of pcs commands On the other hand, remote nodes and guest nodes are not bound to the scalability limits associated with the cluster stack. Other than these noted limitations, the remote and guest nodes behave just like cluster nodes with respect to resource management, and the remote and guest nodes can themselves be fenced. The cluster is fully capable of managing and monitoring resources on each remote and guest node: You can build constraints against them, put them in standby, or perform any other action you perform on cluster nodes with the pcs commands. Remote and guest nodes appear in cluster status output just as cluster nodes do. 31.1. Host and guest authentication of pacemaker_remote nodes The connection between cluster nodes and pacemaker_remote is secured using Transport Layer Security (TLS) with pre-shared key (PSK) encryption and authentication over TCP (using port 3121 by default).
This means both the cluster node and the node running pacemaker_remote must share the same private key. By default this key must be placed at /etc/pacemaker/authkey on both cluster nodes and remote nodes. The pcs cluster node add-guest command sets up the authkey for guest nodes and the pcs cluster node add-remote command sets up the authkey for remote nodes. 31.2. Configuring KVM guest nodes A Pacemaker guest node is a virtual guest node running the pacemaker_remote service. The virtual guest node is managed by the cluster. 31.2.1. Guest node resource options When configuring a virtual machine to act as a guest node, you create a VirtualDomain resource, which manages the virtual machine. For descriptions of the options you can set for a VirtualDomain resource, see the "Resource Options for Virtual Domain Resources" table in Virtual domain resource options . In addition to the VirtualDomain resource options, metadata options define the resource as a guest node and define the connection parameters. You set these resource options with the pcs cluster node add-guest command. The following table describes these metadata options. Table 31.1. Metadata Options for Configuring KVM Resources as Remote Nodes Field Default Description remote-node <none> The name of the guest node this resource defines. This both enables the resource as a guest node and defines the unique name used to identify the guest node. WARNING : This value cannot overlap with any resource or node IDs. remote-port 3121 Configures a custom port to use for the guest connection to pacemaker_remote remote-addr The address provided in the pcs host auth command The IP address or host name to connect to remote-connect-timeout 60s Amount of time before a pending guest connection will time out 31.2.2. Integrating a virtual machine as a guest node The following procedure is a high-level summary of the steps to perform to have Pacemaker launch a virtual machine and to integrate that machine as a guest node, using libvirt and KVM virtual guests. Procedure Configure the VirtualDomain resources. Enter the following commands on every virtual machine to install pacemaker_remote packages, start the pcsd service and enable it to run on startup, and allow TCP port 3121 through the firewall. Give each virtual machine a static network address and unique host name, which should be known to all nodes. If you have not already done so, authenticate pcs to the node you will be integrating as a guest node. Use the following command to convert an existing VirtualDomain resource into a guest node. This command must be run on a cluster node and not on the guest node which is being added. In addition to converting the resource, this command copies the /etc/pacemaker/authkey to the guest node and starts and enables the pacemaker_remote daemon on the guest node. The node name for the guest node, which you can define arbitrarily, can differ from the host name for the node. After creating the VirtualDomain resource, you can treat the guest node just as you would treat any other node in the cluster. For example, you can create a resource and place a resource constraint on the resource to run on the guest node as in the following commands, which are run from a cluster node. You can include guest nodes in groups, which allows you to group a storage device, file system, and VM. 31.3. Configuring Pacemaker remote nodes A remote node is defined as a cluster resource with ocf:pacemaker:remote as the resource agent. 
You create this resource with the pcs cluster node add-remote command. 31.3.1. Remote node resource options The following table describes the resource options you can configure for a remote resource. Table 31.2. Resource Options for Remote Nodes Field Default Description reconnect_interval 0 Time in seconds to wait before attempting to reconnect to a remote node after an active connection to the remote node has been severed. This wait is recurring. If reconnect fails after the wait period, a new reconnect attempt will be made after observing the wait time. When this option is in use, Pacemaker will keep attempting to reach out and connect to the remote node indefinitely after each wait interval. server Address specified with pcs host auth command Server to connect to. This can be an IP address or host name. port TCP port to connect to. 31.3.2. Remote node configuration overview The following procedure provides a high-level summary of the steps to perform to configure a Pacemaker Remote node and to integrate that node into an existing Pacemaker cluster environment. Procedure On the node that you will be configuring as a remote node, allow cluster-related services through the local firewall. Note If you are using iptables directly, or some other firewall solution besides firewalld , open the following ports: TCP ports 2224 and 3121. Install the pacemaker_remote daemon on the remote node. Start and enable pcsd on the remote node. If you have not already done so, authenticate pcs to the node you will be adding as a remote node. Add the remote node resource to the cluster with the following command. This command also syncs all relevant configuration files to the new node, starts the node, and configures it to start pacemaker_remote on boot. This command must be run on a cluster node and not on the remote node which is being added. After adding the remote resource to the cluster, you can treat the remote node just as you would treat any other node in the cluster. For example, you can create a resource and place a resource constraint on the resource to run on the remote node as in the following commands, which are run from a cluster node. Warning Never involve a remote node connection resource in a resource group, colocation constraint, or order constraint. Configure fencing resources for the remote node. Remote nodes are fenced the same way as cluster nodes. Configure fencing resources for use with remote nodes the same as you would with cluster nodes. Note, however, that remote nodes can never initiate a fencing action. Only cluster nodes are capable of actually executing a fencing operation against another node. 31.4. Changing the default port location If you need to change the default port location for either Pacemaker or pacemaker_remote , you can set the PCMK_remote_port environment variable that affects both of these daemons. This environment variable can be enabled by placing it in the /etc/sysconfig/pacemaker file as follows. When changing the default port used by a particular guest node or remote node, the PCMK_remote_port variable must be set in that node's /etc/sysconfig/pacemaker file, and the cluster resource creating the guest node or remote node connection must also be configured with the same port number (using the remote-port metadata option for guest nodes, or the port option for remote nodes). 31.5. 
Upgrading systems with pacemaker_remote nodes If the pacemaker_remote service is stopped on an active Pacemaker Remote node, the cluster will gracefully migrate resources off the node before stopping the node. This allows you to perform software upgrades and other routine maintenance procedures without removing the node from the cluster. Once pacemaker_remote is shut down, however, the cluster will immediately try to reconnect. If pacemaker_remote is not restarted within the resource's monitor timeout, the cluster will consider the monitor operation as failed. If you wish to avoid monitor failures when the pacemaker_remote service is stopped on an active Pacemaker Remote node, you can use the following procedure to take the node out of the cluster before performing any system administration that might stop pacemaker_remote . Procedure Stop the node's connection resource with the pcs resource disable resourcename command, which will move all services off the node. The connection resource would be the ocf:pacemaker:remote resource for a remote node or, commonly, the ocf:heartbeat:VirtualDomain resource for a guest node. For guest nodes, this command will also stop the VM, so the VM must be started outside the cluster (for example, using virsh ) to perform any maintenance. Perform the required maintenance. When ready to return the node to the cluster, re-enable the resource with the pcs resource enable command.
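As a minimal sketch of this maintenance workflow, assuming a remote node whose connection resource is named remote1 (the resource name is an assumption for illustration), you would run the following from a cluster node:

# Move all services off the node and stop the connection resource
pcs resource disable remote1

# ...perform package updates or other maintenance on the remote node...

# Return the node to the cluster when maintenance is complete
pcs resource enable remote1

Running pcs status afterward should show the node back online and its resources restarted.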
[ "dnf install pacemaker-remote resource-agents pcs systemctl start pcsd.service systemctl enable pcsd.service firewall-cmd --add-port 3121/tcp --permanent firewall-cmd --add-port 2224/tcp --permanent firewall-cmd --reload", "pcs host auth nodename", "pcs cluster node add-guest nodename resource_id [ options ]", "pcs resource create webserver apache configfile=/etc/httpd/conf/httpd.conf op monitor interval=30s pcs constraint location webserver prefers nodename", "firewall-cmd --permanent --add-service=high-availability success firewall-cmd --reload success", "dnf install -y pacemaker-remote resource-agents pcs", "systemctl start pcsd.service systemctl enable pcsd.service", "pcs host auth remote1", "pcs cluster node add-remote remote1", "pcs resource create webserver apache configfile=/etc/httpd/conf/httpd.conf op monitor interval=30s pcs constraint location webserver prefers remote1", "\\#==#==# Pacemaker Remote # Specify a custom port for Pacemaker Remote connections PCMK_remote_port=3121", "pcs resource disable resourcename", "pcs resource enable resourcename" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_high_availability_clusters/assembly_remote-node-management-configuring-and-managing-high-availability-clusters
Network Observability
Network Observability OpenShift Container Platform 4.15 Configuring and using the Network Observability Operator in OpenShift Container Platform Red Hat OpenShift Documentation Team
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/network_observability/index
Chapter 10. Metal3RemediationTemplate [infrastructure.cluster.x-k8s.io/v1beta1]
Chapter 10. Metal3RemediationTemplate [infrastructure.cluster.x-k8s.io/v1beta1] Description Metal3RemediationTemplate is the Schema for the metal3remediationtemplates API. Type object 10.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Metal3RemediationTemplateSpec defines the desired state of Metal3RemediationTemplate. status object Metal3RemediationTemplateStatus defines the observed state of Metal3RemediationTemplate. 10.1.1. .spec Description Metal3RemediationTemplateSpec defines the desired state of Metal3RemediationTemplate. Type object Required template Property Type Description template object Metal3RemediationTemplateResource describes the data needed to create a Metal3Remediation from a template. 10.1.2. .spec.template Description Metal3RemediationTemplateResource describes the data needed to create a Metal3Remediation from a template. Type object Required spec Property Type Description spec object Spec is the specification of the desired behavior of the Metal3Remediation. 10.1.3. .spec.template.spec Description Spec is the specification of the desired behavior of the Metal3Remediation. Type object Property Type Description strategy object Strategy field defines remediation strategy. 10.1.4. .spec.template.spec.strategy Description Strategy field defines remediation strategy. Type object Property Type Description retryLimit integer Sets maximum number of remediation retries. timeout string Sets the timeout between remediation retries. type string Type of remediation. 10.1.5. .status Description Metal3RemediationTemplateStatus defines the observed state of Metal3RemediationTemplate. Type object Required status Property Type Description status object Metal3RemediationStatus defines the observed state of Metal3Remediation 10.1.6. .status.status Description Metal3RemediationStatus defines the observed state of Metal3Remediation Type object Property Type Description lastRemediated string LastRemediated identifies when the host was last remediated phase string Phase represents the current phase of machine remediation. E.g. Pending, Running, Done etc. retryCount integer RetryCount can be used as a counter during the remediation. Field can hold number of reboots etc. 10.2. 
API endpoints The following API endpoints are available: /apis/infrastructure.cluster.x-k8s.io/v1beta1/metal3remediationtemplates GET : list objects of kind Metal3RemediationTemplate /apis/infrastructure.cluster.x-k8s.io/v1beta1/namespaces/{namespace}/metal3remediationtemplates DELETE : delete collection of Metal3RemediationTemplate GET : list objects of kind Metal3RemediationTemplate POST : create a Metal3RemediationTemplate /apis/infrastructure.cluster.x-k8s.io/v1beta1/namespaces/{namespace}/metal3remediationtemplates/{name} DELETE : delete a Metal3RemediationTemplate GET : read the specified Metal3RemediationTemplate PATCH : partially update the specified Metal3RemediationTemplate PUT : replace the specified Metal3RemediationTemplate /apis/infrastructure.cluster.x-k8s.io/v1beta1/namespaces/{namespace}/metal3remediationtemplates/{name}/status GET : read status of the specified Metal3RemediationTemplate PATCH : partially update status of the specified Metal3RemediationTemplate PUT : replace status of the specified Metal3RemediationTemplate 10.2.1. /apis/infrastructure.cluster.x-k8s.io/v1beta1/metal3remediationtemplates HTTP method GET Description list objects of kind Metal3RemediationTemplate Table 10.1. HTTP responses HTTP code Response body 200 - OK Metal3RemediationTemplateList schema 401 - Unauthorized Empty 10.2.2. /apis/infrastructure.cluster.x-k8s.io/v1beta1/namespaces/{namespace}/metal3remediationtemplates HTTP method DELETE Description delete collection of Metal3RemediationTemplate Table 10.2. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Metal3RemediationTemplate Table 10.3. HTTP responses HTTP code Response body 200 - OK Metal3RemediationTemplateList schema 401 - Unauthorized Empty HTTP method POST Description create a Metal3RemediationTemplate Table 10.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.5. Body parameters Parameter Type Description body Metal3RemediationTemplate schema Table 10.6. HTTP responses HTTP code Response body 200 - OK Metal3RemediationTemplate schema 201 - Created Metal3RemediationTemplate schema 202 - Accepted Metal3RemediationTemplate schema 401 - Unauthorized Empty 10.2.3. 
/apis/infrastructure.cluster.x-k8s.io/v1beta1/namespaces/{namespace}/metal3remediationtemplates/{name} Table 10.7. Global path parameters Parameter Type Description name string name of the Metal3RemediationTemplate HTTP method DELETE Description delete a Metal3RemediationTemplate Table 10.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 10.9. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Metal3RemediationTemplate Table 10.10. HTTP responses HTTP code Response body 200 - OK Metal3RemediationTemplate schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Metal3RemediationTemplate Table 10.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.12. HTTP responses HTTP code Response body 200 - OK Metal3RemediationTemplate schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Metal3RemediationTemplate Table 10.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.14. Body parameters Parameter Type Description body Metal3RemediationTemplate schema Table 10.15. HTTP responses HTTP code Response body 200 - OK Metal3RemediationTemplate schema 201 - Created Metal3RemediationTemplate schema 401 - Unauthorized Empty 10.2.4. /apis/infrastructure.cluster.x-k8s.io/v1beta1/namespaces/{namespace}/metal3remediationtemplates/{name}/status Table 10.16. Global path parameters Parameter Type Description name string name of the Metal3RemediationTemplate HTTP method GET Description read status of the specified Metal3RemediationTemplate Table 10.17. HTTP responses HTTP code Response body 200 - OK Metal3RemediationTemplate schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Metal3RemediationTemplate Table 10.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.19. HTTP responses HTTP code Response body 200 - OK Metal3RemediationTemplate schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Metal3RemediationTemplate Table 10.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.21. Body parameters Parameter Type Description body Metal3RemediationTemplate schema Table 10.22. HTTP responses HTTP code Response body 200 - OK Metal3RemediationTemplate schema 201 - Created Metal3RemediationTemplate schema 401 - Unauthorized Empty
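As an illustration only (the names, namespace, and strategy values below are assumptions, not taken from this API reference), a Metal3RemediationTemplate could be created and then listed through these endpoints with oc, for example:

# Create a template with a hypothetical reboot-based remediation strategy
cat << EOF | oc apply -f -
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: Metal3RemediationTemplate
metadata:
  name: worker-remediation          # assumed name
  namespace: openshift-machine-api  # assumed namespace
spec:
  template:
    spec:
      strategy:
        type: Reboot    # assumed remediation type
        retryLimit: 1
        timeout: 300s
EOF

# List objects of kind Metal3RemediationTemplate in that namespace
oc get metal3remediationtemplates -n openshift-machine-api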
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/provisioning_apis/metal3remediationtemplate-infrastructure-cluster-x-k8s-io-v1beta1
Appendix E. Command Line Tools Summary
Appendix E. Command Line Tools Summary Table E.1, "Command Line Tool Summary" summarizes the preferred command-line tools for configuring and managing the High Availability Add-On. For more information about commands and variables, see the man page for each command-line tool. Table E.1. Command Line Tool Summary Command Line Tool Used With Purpose ccs_config_dump - Cluster Configuration Dump Tool Cluster Infrastructure ccs_config_dump generates XML output of the running configuration. The running configuration is sometimes different from the stored configuration on disk because some subsystems store or set some default information into the configuration. Those values are generally not present on the on-disk version of the configuration but are required at runtime for the cluster to work properly. For more information about this tool, see the ccs_config_dump(8) man page. ccs_config_validate - Cluster Configuration Validation Tool Cluster Infrastructure ccs_config_validate validates cluster.conf against the schema, cluster.rng (located in /usr/share/cluster/cluster.rng on each node). For more information about this tool, see the ccs_config_validate(8) man page. clustat - Cluster Status Utility High-availability Service Management Components The clustat command displays the status of the cluster. It shows membership information, quorum view, and the state of all configured user services. For more information about this tool, see the clustat(8) man page. clusvcadm - Cluster User Service Administration Utility High-availability Service Management Components The clusvcadm command allows you to enable, disable, relocate, and restart high-availability services in a cluster. For more information about this tool, see the clusvcadm(8) man page. cman_tool - Cluster Management Tool Cluster Infrastructure cman_tool is a program that manages the CMAN cluster manager. It provides the capability to join a cluster, leave a cluster, kill a node, or change the expected quorum votes of a node in a cluster. For more information about this tool, see the cman_tool(8) man page. fence_tool - Fence Tool Cluster Infrastructure fence_tool is a program used to join and leave the fence domain. For more information about this tool, see the fence_tool(8) man page.
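For instance, assuming a cluster service named example_apache and a member node named node-01 (both names are hypothetical), day-to-day use of these tools might look like the following:

# Validate the cluster configuration against the schema
ccs_config_validate

# Display membership information, quorum view, and service states
clustat

# Relocate a high-availability service to another cluster member
clusvcadm -r example_apache -m node-01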
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/ap-cli-tools-ca
Chapter 25. Graphics tablets
Chapter 25. Graphics tablets To manage Wacom tablets connected to your system, use the following tools: The gnome-settings-daemon service The Wacom Tablet settings panel in the GNOME environment (shown in the figures "The Wacom Tablet settings panel for a tablet" and "The Wacom Tablet settings panel for a grip pen") Both of these tools, as well as the libinput stack, use the libwacom tablet client library, which stores data about Wacom tablets. If you want to add support for a new tablet into the libwacom library, you must ensure that a definition file for this new tablet exists. 25.1. Preparing a tablet definition file You must prepare a definition file for the tablet you want to add. Prerequisites List all local devices recognized by libwacom : Make sure that your device is recognized in the output. If your device is not listed, the device is missing from the libwacom database. However, the device might still be visible as an event device in the kernel under /proc/bus/input/devices , and if you use the X.Org display server, in the X11 session on the xinput list. Procedure Install the package that provides tablet definition files: The package installs tablet definitions in the /usr/share/libwacom/ directory. Check whether the definition file is available in the /usr/share/libwacom/ directory. To use the screen mapping correctly, support for your tablet must be included in the libwacom database and in the udev rules file. Important A common indicator that a device is not supported by libwacom is that it works normally in a GNOME session, but the device is not correctly mapped to the screen. If the definition file for your device is not available in /usr/share/libwacom/ , you have these options: The required definition file may already be available in the linuxwacom/libwacom upstream repository. You can try to find the definition file there. If you find your tablet model in the list, copy the file to the local machine. You can create a new tablet definition file. Use the data/wacom.example file below, and edit particular lines based on the characteristics of your device. Example 25.1. Example model file description for a tablet 25.2. Adding support for a new tablet You can add support for a new tablet into the libwacom tablet information client library by adding the definition file for the tablet that you want to add. Prerequisites The definition file for the tablet that you want to add exists. For more information about ensuring that the definition file exists, see Section 25.1, "Preparing a tablet definition file" . Procedure Add and install the definition file with the .tablet suffix: After it is installed, the tablet is part of the libwacom database. The tablet is then available through libwacom-list-local-devices . Create a new /etc/udev/rules.d/99-libwacom-override.rules file with the following content so that your settings are not overwritten: Reboot your system. 25.3. Listing available Wacom tablet configuration paths Wacom tablet and stylus configuration files are saved in the following locations by default: Tablet configuration /org/gnome/settings-daemon/peripherals/wacom/ <D-Bus_machine-id> - <device_id> Wacom tablet configuration schema org.gnome.settings-daemon.peripherals.wacom Stylus configuration /org/gnome/settings-daemon/peripherals/wacom/ <device_id> / <tool_id> . If your product range does not support <tool_id> , a generic identifier is used instead. 
Stylus configuration schema for org.gnome.settings-daemon.peripherals.wacom.stylus Eraser configuration schema org.gnome.settings-daemon.peripherals.wacom.eraser Prerequisites The gnome-settings-daemon package is installed on your system. Procedure List all tablet configuration paths used on your system: Important Using machine-id , device-id , and tool-id in configuration paths allows for shared home directories with independent tablet configuration per system. However, when sharing home directories between systems, the Wacom settings apply only to one system. This is because the machine-id for your Wacom tablet is included in the configuration path of the /org/gnome/settings-daemon/peripherals/wacom/machine-id-device-id GSettings key, which stores your tablet settings.
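As a brief sketch, assuming a definition file named mytablet.tablet that you created or copied from the upstream repository (the file name is hypothetical), adding the tablet and checking the result might look like this, followed by the udev override rule and reboot described above:

# Install the definition into the libwacom database
cp mytablet.tablet /usr/share/libwacom/

# Confirm that the tablet is now recognized
libwacom-list-local-devices

# Inspect the per-tablet GSettings configuration paths used by gnome-settings-daemon
/usr/libexec/gsd-list-wacom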
[ "libwacom-list-local-devices", "yum install libwacom-data", "The product is the product name announced by the kernel Product=Intuos 4 WL 6x9 Vendor name of this tablet Vendor=Wacom DeviceMatch includes the bus (usb, serial), the vendor ID and the actual product ID DeviceMatch=usb:056a:00bc Class of the tablet. Valid classes include Intuos3, Intuos4, Graphire, Bamboo, Cintiq Class=Intuos4 Exact model of the tablet, not including the size. Model=Intuos 4 Wireless Width in inches, as advertised by the manufacturer Width=9 Height in inches, as advertised by the manufacturer Height=6 Optional features that this tablet supports Some features are dependent on the actual tool used, e.g. not all styli have an eraser and some styli have additional custom axes (e.g. the airbrush pen). These features describe those available on the tablet. # Features not set in a file default to false/0 This tablet supports styli (and erasers, if present on the actual stylus) Stylus=true This tablet supports touch. Touch=false This tablet has a touch ring (Intuos4 and Cintiq 24HD) Ring=true This tablet has a second touch ring (Cintiq 24HD) Ring2=false This tablet has a vertical/horizontal scroll strip VStrip=false HStrip=false Number of buttons on the tablet Buttons=9 This tablet is built-in (most serial tablets, Cintiqs) BuiltIn=false", "cp <tablet_definition_file>.tablet /usr/share/libwacom/", "ACTION!=\"add|change\", GOTO=\"libwacom_end\" KERNEL!=\"event[0-9]*\", GOTO=\"libwacom_end\" [new tablet match entries go here] LABEL=\"libwacom_end\"", "/usr/libexec/gsd-list-wacom" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/using_the_desktop_environment_in_rhel_8/tablets_using-the-desktop-environment-in-rhel-8
Backup and restore
Backup and restore Red Hat OpenShift Service on AWS 4 Backing up and restoring your Red Hat OpenShift Service on AWS cluster Red Hat OpenShift Documentation Team
[ "Requests specifying Server Side Encryption with Customer provided keys must provide the client calculated MD5 of the secret key.", "found a podvolumebackup with status \"InProgress\" during the server starting, mark it as \"Failed\".", "data path restore failed: Failed to run kopia restore: Unable to load snapshot : snapshot not found", "The generated label name is too long.", "velero restore create <RESTORE_NAME> --from-backup <BACKUP_NAME> --exclude-resources=deployment.apps", "velero restore create <RESTORE_NAME> --from-backup <BACKUP_NAME> --include-resources=deployment.apps", "oc get dpa -n openshift-adp -o yaml > dpa.orig.backup", "oc get all -n openshift-adp", "NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/restic-9cq4q 1/1 Running 0 94s pod/restic-m4lts 1/1 Running 0 94s pod/restic-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/restic 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s", "oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'", "{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}", "oc get backupstoragelocations.velero.io -n openshift-adp", "NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: velero: defaultPlugins: - openshift - aws - azure - gcp", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: velero: defaultPlugins: - openshift - azure - gcp customPlugins: - name: custom-plugin-example image: quay.io/example-repo/custom-velero-plugin", "024-02-27T10:46:50.028951744Z time=\"2024-02-27T10:46:50Z\" level=error msg=\"Error backing up item\" backup=openshift-adp/<backup name> error=\"error executing custom action (groupResource=imagestreams.image.openshift.io, namespace=<BSL Name>, name=postgres): rpc error: code = Aborted desc = plugin panicked: runtime error: index out of range with length 1, stack trace: goroutine 94...", "oc label backupstoragelocations.velero.io <bsl_name> app.kubernetes.io/component=bsl", "oc -n openshift-adp get secret/oadp-<bsl_name>-<bsl_provider>-registry-secret -o json | jq -r '.data'", "oc create namespace hello-world", "oc new-app -n hello-world --image=docker.io/openshift/hello-openshift", "oc expose service/hello-openshift -n hello-world", "curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'`", "Hello OpenShift!", "cat << EOF | oc create -f - apiVersion: velero.io/v1 kind: Backup metadata: name: hello-world namespace: openshift-adp spec: includedNamespaces: - hello-world storageLocation: USD{CLUSTER_NAME}-dpa-1 ttl: 720h0m0s EOF", "watch \"oc -n openshift-adp get backup hello-world -o json | jq .status\"", "{ \"completionTimestamp\": \"2022-09-07T22:20:44Z\", \"expiration\": \"2022-10-07T22:20:22Z\", \"formatVersion\": 
\"1.1.0\", \"phase\": \"Completed\", \"progress\": { \"itemsBackedUp\": 58, \"totalItems\": 58 }, \"startTimestamp\": \"2022-09-07T22:20:22Z\", \"version\": 1 }", "oc delete ns hello-world", "cat << EOF | oc create -f - apiVersion: velero.io/v1 kind: Restore metadata: name: hello-world namespace: openshift-adp spec: backupName: hello-world EOF", "watch \"oc -n openshift-adp get restore hello-world -o json | jq .status\"", "{ \"completionTimestamp\": \"2022-09-07T22:25:47Z\", \"phase\": \"Completed\", \"progress\": { \"itemsRestored\": 38, \"totalItems\": 38 }, \"startTimestamp\": \"2022-09-07T22:25:28Z\", \"warnings\": 9 }", "oc -n hello-world get pods", "NAME READY STATUS RESTARTS AGE hello-openshift-9f885f7c6-kdjpj 1/1 Running 0 90s", "curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'`", "Hello OpenShift!", "oc delete ns hello-world", "oc -n openshift-adp delete dpa USD{CLUSTER_NAME}-dpa", "oc -n openshift-adp delete cloudstorage USD{CLUSTER_NAME}-oadp", "oc -n openshift-adp patch cloudstorage USD{CLUSTER_NAME}-oadp -p '{\"metadata\":{\"finalizers\":null}}' --type=merge", "oc -n openshift-adp delete subscription oadp-operator", "oc delete ns openshift-adp", "oc delete backups.velero.io hello-world", "velero backup delete hello-world", "for CRD in `oc get crds | grep velero | awk '{print USD1}'`; do oc delete crd USDCRD; done", "aws s3 rm s3://USD{CLUSTER_NAME}-oadp --recursive", "aws s3api delete-bucket --bucket USD{CLUSTER_NAME}-oadp", "aws iam detach-role-policy --role-name \"USD{ROLE_NAME}\" --policy-arn \"USD{POLICY_ARN}\"", "aws iam delete-role --role-name \"USD{ROLE_NAME}\"", "apiVersion: velero.io/v1 kind: Restore metadata: name: test-restore 1 namespace: openshift-adp spec: backupName: <backup_name> 2 restorePVs: true namespaceMapping: <application_namespace>: test-restore-application 3", "oc apply -f <restore_cr_filename>", "oc describe restores.velero.io <restore_name> -n openshift-adp", "oc project test-restore-application", "oc get pvc,svc,deployment,secret,configmap", "NAME STATUS VOLUME persistentvolumeclaim/mysql Bound pvc-9b3583db-...-14b86 NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/mysql ClusterIP 172....157 <none> 3306/TCP 2m56s service/todolist ClusterIP 172.....15 <none> 8000/TCP 2m56s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/mysql 0/1 1 0 2m55s NAME TYPE DATA AGE secret/builder-dockercfg-6bfmd kubernetes.io/dockercfg 1 2m57s secret/default-dockercfg-hz9kz kubernetes.io/dockercfg 1 2m57s secret/deployer-dockercfg-86cvd kubernetes.io/dockercfg 1 2m57s secret/mysql-persistent-sa-dockercfg-rgp9b kubernetes.io/dockercfg 1 2m57s NAME DATA AGE configmap/kube-root-ca.crt 1 2m57s configmap/openshift-service-ca.crt 1 2m57s", "export CLUSTER_NAME=my-cluster 1 export ROSA_CLUSTER_ID=USD(rosa describe cluster -c USD{CLUSTER_NAME} --output json | jq -r .id) export REGION=USD(rosa describe cluster -c USD{CLUSTER_NAME} --output json | jq -r .region.id) export OIDC_ENDPOINT=USD(oc get authentication.config.openshift.io cluster -o jsonpath='{.spec.serviceAccountIssuer}' | sed 's|^https://||') export AWS_ACCOUNT_ID=USD(aws sts get-caller-identity --query Account --output text) export CLUSTER_VERSION=USD(rosa describe cluster -c USD{CLUSTER_NAME} -o json | jq -r .version.raw_id | cut -f -2 -d '.') export ROLE_NAME=\"USD{CLUSTER_NAME}-openshift-oadp-aws-cloud-credentials\" export SCRATCH=\"/tmp/USD{CLUSTER_NAME}/oadp\" mkdir -p USD{SCRATCH} echo \"Cluster ID: USD{ROSA_CLUSTER_ID}, Region: USD{REGION}, OIDC Endpoint: USD{OIDC_ENDPOINT}, AWS 
Account ID: USD{AWS_ACCOUNT_ID}\"", "POLICY_ARN=USD(aws iam list-policies --query \"Policies[?PolicyName=='RosaOadpVer1'].{ARN:Arn}\" --output text) 1", "if [[ -z \"USD{POLICY_ARN}\" ]]; then cat << EOF > USD{SCRATCH}/policy.json 1 { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"s3:CreateBucket\", \"s3:DeleteBucket\", \"s3:PutBucketTagging\", \"s3:GetBucketTagging\", \"s3:PutEncryptionConfiguration\", \"s3:GetEncryptionConfiguration\", \"s3:PutLifecycleConfiguration\", \"s3:GetLifecycleConfiguration\", \"s3:GetBucketLocation\", \"s3:ListBucket\", \"s3:GetObject\", \"s3:PutObject\", \"s3:DeleteObject\", \"s3:ListBucketMultipartUploads\", \"s3:AbortMultipartUpload\", \"s3:ListMultipartUploadParts\", \"ec2:DescribeSnapshots\", \"ec2:DescribeVolumes\", \"ec2:DescribeVolumeAttribute\", \"ec2:DescribeVolumesModifications\", \"ec2:DescribeVolumeStatus\", \"ec2:CreateTags\", \"ec2:CreateVolume\", \"ec2:CreateSnapshot\", \"ec2:DeleteSnapshot\" ], \"Resource\": \"*\" } ]} EOF POLICY_ARN=USD(aws iam create-policy --policy-name \"RosaOadpVer1\" --policy-document file:///USD{SCRATCH}/policy.json --query Policy.Arn --tags Key=rosa_openshift_version,Value=USD{CLUSTER_VERSION} Key=rosa_role_prefix,Value=ManagedOpenShift Key=operator_namespace,Value=openshift-oadp Key=operator_name,Value=openshift-oadp --output text) fi", "echo USD{POLICY_ARN}", "cat <<EOF > USD{SCRATCH}/trust-policy.json { \"Version\": \"2012-10-17\", \"Statement\": [{ \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"arn:aws:iam::USD{AWS_ACCOUNT_ID}:oidc-provider/USD{OIDC_ENDPOINT}\" }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"USD{OIDC_ENDPOINT}:sub\": [ \"system:serviceaccount:openshift-adp:openshift-adp-controller-manager\", \"system:serviceaccount:openshift-adp:velero\"] } } }] } EOF", "ROLE_ARN=USD(aws iam create-role --role-name \"USD{ROLE_NAME}\" --assume-role-policy-document file://USD{SCRATCH}/trust-policy.json --tags Key=rosa_cluster_id,Value=USD{ROSA_CLUSTER_ID} Key=rosa_openshift_version,Value=USD{CLUSTER_VERSION} Key=rosa_role_prefix,Value=ManagedOpenShift Key=operator_namespace,Value=openshift-adp Key=operator_name,Value=openshift-oadp --query Role.Arn --output text)", "echo USD{ROLE_ARN}", "aws iam attach-role-policy --role-name \"USD{ROLE_NAME}\" --policy-arn USD{POLICY_ARN}", "cat <<EOF > USD{SCRATCH}/credentials [default] role_arn = USD{ROLE_ARN} web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token region = <aws_region> 1 EOF", "oc create namespace openshift-adp", "oc -n openshift-adp create secret generic cloud-credentials --from-file=USD{SCRATCH}/credentials", "cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: CloudStorage metadata: name: USD{CLUSTER_NAME}-oadp namespace: openshift-adp spec: creationSecret: key: credentials name: cloud-credentials enableSharedConfig: true name: USD{CLUSTER_NAME}-oadp provider: aws region: USDREGION EOF", "oc get pvc -n <namespace>", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE applog Bound pvc-351791ae-b6ab-4e8b-88a4-30f73caf5ef8 1Gi RWO gp3-csi 4d19h mysql Bound pvc-16b8e009-a20a-4379-accc-bc81fedd0621 1Gi RWO gp3-csi 4d19h", "oc get storageclass", "NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE gp2 kubernetes.io/aws-ebs Delete WaitForFirstConsumer true 4d21h gp2-csi ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h gp3 ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h gp3-csi (default) 
ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h", "cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: USD{CLUSTER_NAME}-dpa namespace: openshift-adp spec: backupImages: true 1 features: dataMover: enable: false backupLocations: - bucket: cloudStorageRef: name: USD{CLUSTER_NAME}-oadp credential: key: credentials name: cloud-credentials prefix: velero default: true config: region: USD{REGION} configuration: velero: defaultPlugins: - openshift - aws - csi nodeAgent: 2 enable: false uploaderType: kopia 3 EOF", "cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: USD{CLUSTER_NAME}-dpa namespace: openshift-adp spec: backupImages: true 1 backupLocations: - bucket: cloudStorageRef: name: USD{CLUSTER_NAME}-oadp credential: key: credentials name: cloud-credentials prefix: velero default: true config: region: USD{REGION} configuration: velero: defaultPlugins: - openshift - aws nodeAgent: 2 enable: false uploaderType: restic snapshotLocations: - velero: config: credentialsFile: /tmp/credentials/openshift-adp/cloud-credentials-credentials 3 enableSharedConfig: \"true\" 4 profile: default 5 region: USD{REGION} 6 provider: aws EOF", "nodeAgent: enable: false uploaderType: restic", "restic: enable: false", "oc get sub -o yaml redhat-oadp-operator", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: annotations: creationTimestamp: \"2025-01-15T07:18:31Z\" generation: 1 labels: operators.coreos.com/redhat-oadp-operator.openshift-adp: \"\" name: redhat-oadp-operator namespace: openshift-adp resourceVersion: \"77363\" uid: 5ba00906-5ad2-4476-ae7b-ffa90986283d spec: channel: stable-1.4 config: env: - name: ROLEARN value: arn:aws:iam::11111111:role/wrong-role-arn 1 installPlanApproval: Manual name: redhat-oadp-operator source: prestage-operators sourceNamespace: openshift-marketplace startingCSV: oadp-operator.v1.4.2", "oc patch subscription redhat-oadp-operator -p '{\"spec\": {\"config\": {\"env\": [{\"name\": \"ROLEARN\", \"value\": \"<role_arn>\"}]}}}' --type='merge'", "oc get secret cloud-credentials -o jsonpath='{.data.credentials}' | base64 -d", "[default] sts_regional_endpoints = regional role_arn = arn:aws:iam::160.....6956:role/oadprosa.....8wlf web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-rosa-dpa namespace: openshift-adp spec: backupLocations: - bucket: config: region: us-east-1 cloudStorageRef: name: <cloud_storage> 1 credential: name: cloud-credentials key: credentials prefix: velero default: true configuration: velero: defaultPlugins: - aws - openshift", "oc create -f <dpa_manifest_file>", "oc get dpa -n openshift-adp -o yaml", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication status: conditions: - lastTransitionTime: \"2023-07-31T04:48:12Z\" message: Reconcile complete reason: Complete status: \"True\" type: Reconciled", "oc get backupstoragelocations.velero.io -n openshift-adp", "NAME PHASE LAST VALIDATED AGE DEFAULT ts-dpa-1 Available 3s 6s true", "velero backup create <backup-name> --snapshot-volumes false 1", "velero describe backup <backup_name> --details 1", "velero restore create --from-backup <backup-name> 1", "velero describe restore <restore_name> --details 1", "oc get backupstoragelocations.velero.io -n openshift-adp", "NAMESPACE NAME PHASE LAST VALIDATED AGE DEFAULT openshift-adp 
velero-sample-1 Available 11s 31m", "apiVersion: velero.io/v1 kind: Backup metadata: name: <backup> labels: velero.io/storage-location: default namespace: openshift-adp spec: hooks: {} includedNamespaces: - <namespace> 1 includedResources: [] 2 excludedResources: [] 3 storageLocation: <velero-sample-1> 4 ttl: 720h0m0s labelSelector: 5 matchLabels: app: <label_1> app: <label_2> app: <label_3> orLabelSelectors: 6 - matchLabels: app: <label_1> app: <label_2> app: <label_3>", "oc get backups.velero.io -n openshift-adp <backup> -o jsonpath='{.status.phase}'", "apiVersion: velero.io/v1 kind: Backup metadata: name: <backup> namespace: openshift-adp spec: hooks: resources: - name: <hook_name> includedNamespaces: - <namespace> 1 excludedNamespaces: 2 - <namespace> includedResources: [] - pods 3 excludedResources: [] 4 labelSelector: 5 matchLabels: app: velero component: server pre: 6 - exec: container: <container> 7 command: - /bin/uname 8 - -a onError: Fail 9 timeout: 30s 10 post: 11", "oc get backupStorageLocations -n openshift-adp", "NAMESPACE NAME PHASE LAST VALIDATED AGE DEFAULT openshift-adp velero-sample-1 Available 11s 31m", "cat << EOF | oc apply -f - apiVersion: velero.io/v1 kind: Schedule metadata: name: <schedule> namespace: openshift-adp spec: schedule: 0 7 * * * 1 template: hooks: {} includedNamespaces: - <namespace> 2 storageLocation: <velero-sample-1> 3 defaultVolumesToFsBackup: true 4 ttl: 720h0m0s EOF", "schedule: \"*/10 * * * *\"", "oc get schedule -n openshift-adp <schedule> -o jsonpath='{.status.phase}'", "apiVersion: velero.io/v1 kind: DeleteBackupRequest metadata: name: deletebackuprequest namespace: openshift-adp spec: backupName: <backup_name> 1", "oc apply -f <deletebackuprequest_cr_filename>", "velero backup delete <backup_name> -n openshift-adp 1", "pod/repo-maintain-job-173...2527-2nbls 0/1 Completed 0 168m pod/repo-maintain-job-173....536-fl9tm 0/1 Completed 0 108m pod/repo-maintain-job-173...2545-55ggx 0/1 Completed 0 48m", "not due for full maintenance cycle until 2024-00-00 18:29:4", "oc get backuprepositories.velero.io -n openshift-adp", "oc delete backuprepository <backup_repository_name> -n openshift-adp 1", "velero backup create <backup-name> --snapshot-volumes false 1", "velero describe backup <backup_name> --details 1", "velero restore create --from-backup <backup-name> 1", "velero describe restore <restore_name> --details 1", "apiVersion: velero.io/v1 kind: Restore metadata: name: <restore> namespace: openshift-adp spec: backupName: <backup> 1 includedResources: [] 2 excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io restorePVs: true 3", "oc get restores.velero.io -n openshift-adp <restore> -o jsonpath='{.status.phase}'", "oc get all -n <namespace> 1", "bash dc-restic-post-restore.sh -> dc-post-restore.sh", "#!/bin/bash set -e if sha256sum exists, use it to check the integrity of the file if command -v sha256sum >/dev/null 2>&1; then CHECKSUM_CMD=\"sha256sum\" else CHECKSUM_CMD=\"shasum -a 256\" fi label_name () { if [ \"USD{#1}\" -le \"63\" ]; then echo USD1 return fi sha=USD(echo -n USD1|USDCHECKSUM_CMD) echo \"USD{1:0:57}USD{sha:0:6}\" } if [[ USD# -ne 1 ]]; then echo \"usage: USD{BASH_SOURCE} restore-name\" exit 1 fi echo \"restore: USD1\" label=USD(label_name USD1) echo \"label: USDlabel\" echo Deleting disconnected restore pods delete pods --all-namespaces -l oadp.openshift.io/disconnected-from-dc=USDlabel for dc in USD(oc get dc --all-namespaces -l 
oadp.openshift.io/replicas-modified=USDlabel -o jsonpath='{range .items[*]}{.metadata.namespace}{\",\"}{.metadata.name}{\",\"}{.metadata.annotations.oadp\\.openshift\\.io/original-replicas}{\",\"}{.metadata.annotations.oadp\\.openshift\\.io/original-paused}{\"\\n\"}') do IFS=',' read -ra dc_arr <<< \"USDdc\" if [ USD{#dc_arr[0]} -gt 0 ]; then echo Found deployment USD{dc_arr[0]}/USD{dc_arr[1]}, setting replicas: USD{dc_arr[2]}, paused: USD{dc_arr[3]} cat <<EOF | oc patch dc -n USD{dc_arr[0]} USD{dc_arr[1]} --patch-file /dev/stdin spec: replicas: USD{dc_arr[2]} paused: USD{dc_arr[3]} EOF fi done", "apiVersion: velero.io/v1 kind: Restore metadata: name: <restore> namespace: openshift-adp spec: hooks: resources: - name: <hook_name> includedNamespaces: - <namespace> 1 excludedNamespaces: - <namespace> includedResources: - pods 2 excludedResources: [] labelSelector: 3 matchLabels: app: velero component: server postHooks: - init: initContainers: - name: restore-hook-init image: alpine:latest volumeMounts: - mountPath: /restores/pvc1-vm name: pvc1-vm command: - /bin/ash - -c timeout: 4 - exec: container: <container> 5 command: - /bin/bash 6 - -c - \"psql < /backup/backup.sql\" waitTimeout: 5m 7 execTimeout: 1m 8 onError: Continue 9", "velero restore create <RESTORE_NAME> --from-backup <BACKUP_NAME> --exclude-resources=deployment.apps", "velero restore create <RESTORE_NAME> --from-backup <BACKUP_NAME> --include-resources=deployment.apps" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html-single/backup_and_restore/index
Builds using BuildConfig
Builds using BuildConfig OpenShift Container Platform 4.14 Builds Red Hat OpenShift Documentation Team
[ "kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: \"ruby-sample-build\" 1 spec: runPolicy: \"Serial\" 2 triggers: 3 - type: \"GitHub\" github: secret: \"secret101\" - type: \"Generic\" generic: secret: \"secret101\" - type: \"ImageChange\" source: 4 git: uri: \"https://github.com/openshift/ruby-hello-world\" strategy: 5 sourceStrategy: from: kind: \"ImageStreamTag\" name: \"ruby-20-centos7:latest\" output: 6 to: kind: \"ImageStreamTag\" name: \"origin-ruby-sample:latest\" postCommit: 7 script: \"bundle exec rake test\"", "source: git: uri: https://github.com/openshift/ruby-hello-world.git 1 ref: \"master\" images: - from: kind: ImageStreamTag name: myinputimage:latest namespace: mynamespace paths: - destinationDir: app/dir/injected/dir 2 sourcePath: /usr/lib/somefile.jar contextDir: \"app/dir\" 3 dockerfile: \"FROM centos:7\\nRUN yum install -y httpd\" 4", "source: dockerfile: \"FROM centos:7\\nRUN yum install -y httpd\" 1", "source: git: uri: https://github.com/openshift/ruby-hello-world.git ref: \"master\" images: 1 - from: 2 kind: ImageStreamTag name: myinputimage:latest namespace: mynamespace paths: 3 - destinationDir: injected/dir 4 sourcePath: /usr/lib/somefile.jar 5 - from: kind: ImageStreamTag name: myotherinputimage:latest namespace: myothernamespace pullSecret: mysecret 6 paths: - destinationDir: injected/dir sourcePath: /usr/lib/somefile.jar", "oc secrets link builder dockerhub", "source: git: 1 uri: \"https://github.com/openshift/ruby-hello-world\" ref: \"master\" contextDir: \"app/dir\" 2 dockerfile: \"FROM openshift/ruby-22-centos7\\nUSER example\" 3", "source: git: uri: \"https://github.com/openshift/ruby-hello-world\" ref: \"master\" httpProxy: http://proxy.example.com httpsProxy: https://proxy.example.com noProxy: somedomain.com, otherdomain.com", "oc annotate secret mysecret 'build.openshift.io/source-secret-match-uri-1=ssh://bitbucket.atlassian.com:7999/*'", "kind: Secret apiVersion: v1 metadata: name: matches-all-corporate-servers-https-only annotations: build.openshift.io/source-secret-match-uri-1: https://*.mycorp.com/* data: --- kind: Secret apiVersion: v1 metadata: name: override-for-my-dev-servers-https-only annotations: build.openshift.io/source-secret-match-uri-1: https://mydev1.mycorp.com/* build.openshift.io/source-secret-match-uri-2: https://mydev2.mycorp.com/* data:", "oc annotate secret mysecret 'build.openshift.io/source-secret-match-uri-1=https://*.mycorp.com/*'", "apiVersion: \"build.openshift.io/v1\" kind: \"BuildConfig\" metadata: name: \"sample-build\" spec: output: to: kind: \"ImageStreamTag\" name: \"sample-image:latest\" source: git: uri: \"https://github.com/user/app.git\" sourceSecret: name: \"basicsecret\" strategy: sourceStrategy: from: kind: \"ImageStreamTag\" name: \"python-33-centos7:latest\"", "oc set build-secret --source bc/sample-build basicsecret", "oc create secret generic <secret_name> --from-file=<path/to/.gitconfig>", "[http] sslVerify=false", "cat .gitconfig", "[user] name = <name> email = <email> [http] sslVerify = false sslCert = /var/run/secrets/openshift.io/source/client.crt sslKey = /var/run/secrets/openshift.io/source/client.key sslCaInfo = /var/run/secrets/openshift.io/source/cacert.crt", "oc create secret generic <secret_name> --from-literal=username=<user_name> \\ 1 --from-literal=password=<password> \\ 2 --from-file=.gitconfig=.gitconfig --from-file=client.crt=/var/run/secrets/openshift.io/source/client.crt --from-file=cacert.crt=/var/run/secrets/openshift.io/source/cacert.crt 
--from-file=client.key=/var/run/secrets/openshift.io/source/client.key", "oc create secret generic <secret_name> --from-literal=username=<user_name> --from-literal=password=<password> --type=kubernetes.io/basic-auth", "oc create secret generic <secret_name> --from-literal=password=<token> --type=kubernetes.io/basic-auth", "ssh-keygen -t ed25519 -C \"[email protected]\"", "oc create secret generic <secret_name> --from-file=ssh-privatekey=<path/to/ssh/private/key> --from-file=<path/to/known_hosts> \\ 1 --type=kubernetes.io/ssh-auth", "cat intermediateCA.crt intermediateCA.crt rootCA.crt > ca.crt", "oc create secret generic mycert --from-file=ca.crt=</path/to/file> 1", "oc create secret generic <secret_name> --from-file=ssh-privatekey=<path/to/ssh/private/key> --from-file=<path/to/.gitconfig> --type=kubernetes.io/ssh-auth", "oc create secret generic <secret_name> --from-file=ca.crt=<path/to/certificate> --from-file=<path/to/.gitconfig>", "oc create secret generic <secret_name> --from-literal=username=<user_name> --from-literal=password=<password> --from-file=ca-cert=</path/to/file> --type=kubernetes.io/basic-auth", "oc create secret generic <secret_name> --from-literal=username=<user_name> --from-literal=password=<password> --from-file=</path/to/.gitconfig> --type=kubernetes.io/basic-auth", "oc create secret generic <secret_name> --from-literal=username=<user_name> --from-literal=password=<password> --from-file=</path/to/.gitconfig> --from-file=ca-cert=</path/to/file> --type=kubernetes.io/basic-auth", "apiVersion: v1 kind: Secret metadata: name: test-secret namespace: my-namespace type: Opaque 1 data: 2 username: <username> 3 password: <password> stringData: 4 hostname: myapp.mydomain.com 5", "oc create -f <filename>", "oc create secret generic dockerhub --from-file=.dockerconfigjson=<path/to/.docker/config.json> --type=kubernetes.io/dockerconfigjson", "apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque 1 data: username: <username> password: <password>", "apiVersion: v1 kind: Secret metadata: name: aregistrykey namespace: myapps type: kubernetes.io/dockerconfigjson 1 data: .dockerconfigjson:bm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== 2", "oc create -f <your_yaml_file>.yaml", "oc logs secret-example-pod", "oc delete pod secret-example-pod", "apiVersion: v1 kind: Secret metadata: name: test-secret data: username: <username> 1 password: <password> 2 stringData: hostname: myapp.mydomain.com 3 secret.properties: |- 4 property1=valueA property2=valueB", "apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: containers: - name: secret-test-container image: busybox command: [ \"/bin/sh\", \"-c\", \"cat /etc/secret-volume/*\" ] volumeMounts: # name must match the volume name below - name: secret-volume mountPath: /etc/secret-volume readOnly: true volumes: - name: secret-volume secret: secretName: test-secret restartPolicy: Never", "apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: containers: - name: secret-test-container image: busybox command: [ \"/bin/sh\", \"-c\", \"export\" ] env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: name: test-secret key: username restartPolicy: Never", "apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: secret-example-bc spec: strategy: sourceStrategy: env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: name: test-secret key: username", "oc create configmap settings-mvn --from-file=settings.xml=<path/to/settings.xml>", "apiVersion: core/v1 kind: 
ConfigMap metadata: name: settings-mvn data: settings.xml: | <settings> ... # Insert maven settings here </settings>", "oc create secret generic secret-mvn --from-file=ssh-privatekey=<path/to/.ssh/id_rsa> --type=kubernetes.io/ssh-auth", "apiVersion: core/v1 kind: Secret metadata: name: secret-mvn type: kubernetes.io/ssh-auth data: ssh-privatekey: | # Insert ssh private key, base64 encoded", "source: git: uri: https://github.com/wildfly/quickstart.git contextDir: helloworld configMaps: - configMap: name: settings-mvn secrets: - secret: name: secret-mvn", "oc new-build openshift/wildfly-101-centos7~https://github.com/wildfly/quickstart.git --context-dir helloworld --build-secret \"secret-mvn\" --build-config-map \"settings-mvn\"", "source: git: uri: https://github.com/wildfly/quickstart.git contextDir: helloworld configMaps: - configMap: name: settings-mvn destinationDir: \".m2\" secrets: - secret: name: secret-mvn destinationDir: \".ssh\"", "oc new-build openshift/wildfly-101-centos7~https://github.com/wildfly/quickstart.git --context-dir helloworld --build-secret \"secret-mvn:.ssh\" --build-config-map \"settings-mvn:.m2\"", "FROM centos/ruby-22-centos7 USER root COPY ./secret-dir /secrets COPY ./config / Create a shell script that will output secrets and ConfigMaps when the image is run RUN echo '#!/bin/sh' > /input_report.sh RUN echo '(test -f /secrets/secret1 && echo -n \"secret1=\" && cat /secrets/secret1)' >> /input_report.sh RUN echo '(test -f /config && echo -n \"relative-configMap=\" && cat /config)' >> /input_report.sh RUN chmod 755 /input_report.sh CMD [\"/bin/sh\", \"-c\", \"/input_report.sh\"]", "#!/bin/sh APP_VERSION=1.0 wget http://repository.example.com/app/app-USDAPP_VERSION.jar -O app.jar", "#!/bin/sh exec java -jar app.jar", "FROM jboss/base-jdk:8 ENV APP_VERSION 1.0 RUN wget http://repository.example.com/app/app-USDAPP_VERSION.jar -O app.jar EXPOSE 8080 CMD [ \"java\", \"-jar\", \"app.jar\" ]", "auths: index.docker.io/v1/: 1 auth: \"YWRfbGzhcGU6R2labnRib21ifTE=\" 2 email: \"[email protected]\" 3 docker.io/my-namespace/my-user/my-image: 4 auth: \"GzhYWRGU6R2fbclabnRgbkSp=\"\" email: \"[email protected]\" docker.io/my-namespace: 5 auth: \"GzhYWRGU6R2deesfrRgbkSp=\"\" email: \"[email protected]\"", "oc create secret generic dockerhub --from-file=.dockerconfigjson=<path/to/.docker/config.json> --type=kubernetes.io/dockerconfigjson", "spec: output: to: kind: \"DockerImage\" name: \"private.registry.com/org/private-image:latest\" pushSecret: name: \"dockerhub\"", "oc set build-secret --push bc/sample-build dockerhub", "oc secrets link builder dockerhub", "strategy: sourceStrategy: from: kind: \"DockerImage\" name: \"docker.io/user/private_repository\" pullSecret: name: \"dockerhub\"", "oc set build-secret --pull bc/sample-build dockerhub", "oc secrets link builder dockerhub", "env: - name: FIELDREF_ENV valueFrom: fieldRef: fieldPath: metadata.name", "apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: secret-example-bc spec: strategy: sourceStrategy: env: - name: MYVAL valueFrom: secretKeyRef: key: myval name: mysecret", "spec: output: to: kind: \"ImageStreamTag\" name: \"sample-image:latest\"", "spec: output: to: kind: \"DockerImage\" name: \"my-registry.mycompany.com:5000/myimages/myimage:tag\"", "spec: output: to: kind: \"ImageStreamTag\" name: \"my-image:latest\" imageLabels: - name: \"vendor\" value: \"MyCompany\" - name: \"authoritative-source-url\" value: \"registry.mycompany.com\"", "strategy: dockerStrategy: from: kind: \"ImageStreamTag\" name: 
\"debian:latest\"", "strategy: dockerStrategy: dockerfilePath: dockerfiles/app1/Dockerfile", "dockerStrategy: env: - name: \"HTTP_PROXY\" value: \"http://myproxy.net:5187/\"", "dockerStrategy: buildArgs: - name: \"foo\" value: \"bar\"", "strategy: dockerStrategy: imageOptimizationPolicy: SkipLayers", "spec: dockerStrategy: volumes: - name: secret-mvn 1 mounts: - destinationPath: /opt/app-root/src/.ssh 2 source: type: Secret 3 secret: secretName: my-secret 4 - name: settings-mvn 5 mounts: - destinationPath: /opt/app-root/src/.m2 6 source: type: ConfigMap 7 configMap: name: my-config 8 - name: my-csi-volume 9 mounts: - destinationPath: /opt/app-root/src/some_path 10 source: type: CSI 11 csi: driver: csi.sharedresource.openshift.io 12 readOnly: true 13 volumeAttributes: 14 attribute: value", "strategy: sourceStrategy: from: kind: \"ImageStreamTag\" name: \"incremental-image:latest\" 1 incremental: true 2", "strategy: sourceStrategy: from: kind: \"ImageStreamTag\" name: \"builder-image:latest\" scripts: \"http://somehost.com/scripts_directory\" 1", "sourceStrategy: env: - name: \"DISABLE_ASSET_COMPILATION\" value: \"true\"", "#!/bin/bash restore build artifacts if [ \"USD(ls /tmp/s2i/artifacts/ 2>/dev/null)\" ]; then mv /tmp/s2i/artifacts/* USDHOME/. fi move the application source mv /tmp/s2i/src USDHOME/src build application artifacts pushd USD{HOME} make all install the artifacts make install popd", "#!/bin/bash run the application /opt/application/run.sh", "#!/bin/bash pushd USD{HOME} if [ -d deps ]; then # all deps contents to tar stream tar cf - deps fi popd", "#!/bin/bash inform the user how to use the image cat <<EOF This is a S2I sample builder image, to use it, install https://github.com/openshift/source-to-image EOF", "spec: sourceStrategy: volumes: - name: secret-mvn 1 mounts: - destinationPath: /opt/app-root/src/.ssh 2 source: type: Secret 3 secret: secretName: my-secret 4 - name: settings-mvn 5 mounts: - destinationPath: /opt/app-root/src/.m2 6 source: type: ConfigMap 7 configMap: name: my-config 8 - name: my-csi-volume 9 mounts: - destinationPath: /opt/app-root/src/some_path 10 source: type: CSI 11 csi: driver: csi.sharedresource.openshift.io 12 readOnly: true 13 volumeAttributes: 14 attribute: value", "strategy: customStrategy: from: kind: \"DockerImage\" name: \"openshift/sti-image-builder\"", "strategy: customStrategy: secrets: - secretSource: 1 name: \"secret1\" mountPath: \"/tmp/secret1\" 2 - secretSource: name: \"secret2\" mountPath: \"/tmp/secret2\"", "customStrategy: env: - name: \"HTTP_PROXY\" value: \"http://myproxy.net:5187/\"", "oc set env <enter_variables>", "kind: \"BuildConfig\" apiVersion: \"v1\" metadata: name: \"sample-pipeline\" spec: strategy: jenkinsPipelineStrategy: jenkinsfile: |- node('agent') { stage 'build' openshiftBuild(buildConfig: 'ruby-sample-build', showBuildLogs: 'true') stage 'deploy' openshiftDeploy(deploymentConfig: 'frontend') }", "kind: \"BuildConfig\" apiVersion: \"v1\" metadata: name: \"sample-pipeline\" spec: source: git: uri: \"https://github.com/openshift/ruby-hello-world\" strategy: jenkinsPipelineStrategy: jenkinsfilePath: some/repo/dir/filename 1", "jenkinsPipelineStrategy: env: - name: \"FOO\" value: \"BAR\"", "oc project <project_name>", "oc new-app jenkins-ephemeral 1", "kind: \"BuildConfig\" apiVersion: \"v1\" metadata: name: \"nodejs-sample-pipeline\" spec: strategy: jenkinsPipelineStrategy: jenkinsfile: <pipeline content from below> type: JenkinsPipeline", "def templatePath = 
'https://raw.githubusercontent.com/openshift/nodejs-ex/master/openshift/templates/nodejs-mongodb.json' 1 def templateName = 'nodejs-mongodb-example' 2 pipeline { agent { node { label 'nodejs' 3 } } options { timeout(time: 20, unit: 'MINUTES') 4 } stages { stage('preamble') { steps { script { openshift.withCluster() { openshift.withProject() { echo \"Using project: USD{openshift.project()}\" } } } } } stage('cleanup') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.selector(\"all\", [ template : templateName ]).delete() 5 if (openshift.selector(\"secrets\", templateName).exists()) { 6 openshift.selector(\"secrets\", templateName).delete() } } } } } } stage('create') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.newApp(templatePath) 7 } } } } } stage('build') { steps { script { openshift.withCluster() { openshift.withProject() { def builds = openshift.selector(\"bc\", templateName).related('builds') timeout(5) { 8 builds.untilEach(1) { return (it.object().status.phase == \"Complete\") } } } } } } } stage('deploy') { steps { script { openshift.withCluster() { openshift.withProject() { def rm = openshift.selector(\"dc\", templateName).rollout() timeout(5) { 9 openshift.selector(\"dc\", templateName).related('pods').untilEach(1) { return (it.object().status.phase == \"Running\") } } } } } } } stage('tag') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.tag(\"USD{templateName}:latest\", \"USD{templateName}-staging:latest\") 10 } } } } } } }", "oc create -f nodejs-sample-pipeline.yaml", "oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/jenkins/pipeline/nodejs-sample-pipeline.yaml", "oc start-build nodejs-sample-pipeline", "FROM registry.redhat.io/rhel8/buildah In this example, `/tmp/build` contains the inputs that build when this custom builder image is run. Normally the custom builder image fetches this content from some location at build time, by using git clone as an example. ADD dockerfile.sample /tmp/input/Dockerfile ADD build.sh /usr/bin RUN chmod a+x /usr/bin/build.sh /usr/bin/build.sh contains the actual custom build logic that will be run when this custom builder image is run. ENTRYPOINT [\"/usr/bin/build.sh\"]", "FROM registry.access.redhat.com/ubi9/ubi RUN touch /tmp/build", "#!/bin/sh Note that in this case the build inputs are part of the custom builder image, but normally this is retrieved from an external source. cd /tmp/input OUTPUT_REGISTRY and OUTPUT_IMAGE are env variables provided by the custom build framework TAG=\"USD{OUTPUT_REGISTRY}/USD{OUTPUT_IMAGE}\" performs the build of the new image defined by dockerfile.sample buildah --storage-driver vfs bud --isolation chroot -t USD{TAG} . buildah requires a slight modification to the push secret provided by the service account to use it for pushing the image cp /var/run/secrets/openshift.io/push/.dockercfg /tmp (echo \"{ \\\"auths\\\": \" ; cat /var/run/secrets/openshift.io/push/.dockercfg ; echo \"}\") > /tmp/.dockercfg push the new image to the target for the build buildah --storage-driver vfs push --tls-verify=false --authfile /tmp/.dockercfg USD{TAG}", "oc new-build --binary --strategy=docker --name custom-builder-image", "oc start-build custom-builder-image --from-dir . 
-F", "kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: sample-custom-build labels: name: sample-custom-build annotations: template.alpha.openshift.io/wait-for-ready: 'true' spec: strategy: type: Custom customStrategy: forcePull: true from: kind: ImageStreamTag name: custom-builder-image:latest namespace: <yourproject> 1 output: to: kind: ImageStreamTag name: sample-custom:latest", "oc create -f buildconfig.yaml", "kind: ImageStream apiVersion: image.openshift.io/v1 metadata: name: sample-custom spec: {}", "oc create -f imagestream.yaml", "oc start-build sample-custom-build -F", "oc start-build <buildconfig_name>", "oc start-build --from-build=<build_name>", "oc start-build <buildconfig_name> --follow", "oc start-build <buildconfig_name> --env=<key>=<value>", "oc start-build hello-world --from-repo=../hello-world --commit=v2", "oc cancel-build <build_name>", "oc cancel-build <build1_name> <build2_name> <build3_name>", "oc cancel-build bc/<buildconfig_name>", "oc cancel-build bc/<buildconfig_name>", "oc delete bc <BuildConfigName>", "oc delete --cascade=false bc <BuildConfigName>", "oc describe build <build_name>", "oc describe build <build_name>", "oc logs -f bc/<buildconfig_name>", "oc logs --version=<number> bc/<buildconfig_name>", "sourceStrategy: env: - name: \"BUILD_LOGLEVEL\" value: \"2\" 1", "type: \"GitHub\" github: secretReference: name: \"mysecret\"", "- kind: Secret apiVersion: v1 metadata: name: mysecret creationTimestamp: data: WebHookSecretKey: c2VjcmV0dmFsdWUx", "type: \"GitHub\" github: secretReference: name: \"mysecret\"", "https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/github", "oc describe bc/<name-of-your-BuildConfig>", "<https://api.starter-us-east-1.openshift.com:443/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/github", "curl -H \"X-GitHub-Event: push\" -H \"Content-Type: application/json\" -k -X POST --data-binary @payload.json https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/github", "type: \"GitLab\" gitlab: secretReference: name: \"mysecret\"", "https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/gitlab", "oc describe bc <name>", "curl -H \"X-GitLab-Event: Push Hook\" -H \"Content-Type: application/json\" -k -X POST --data-binary @payload.json https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/gitlab", "type: \"Bitbucket\" bitbucket: secretReference: name: \"mysecret\"", "https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/bitbucket", "oc describe bc <name>", "curl -H \"X-Event-Key: repo:push\" -H \"Content-Type: application/json\" -k -X POST --data-binary @payload.json https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/bitbucket", "type: \"Generic\" generic: secretReference: name: \"mysecret\" allowEnv: true 1", "https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic", "curl -X POST -k https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic", "git: uri: \"<url to git repository>\" ref: \"<optional git reference>\" commit: \"<commit 
hash identifying a specific git commit>\" author: name: \"<author name>\" email: \"<author e-mail>\" committer: name: \"<committer name>\" email: \"<committer e-mail>\" message: \"<commit message>\" env: 1 - name: \"<variable name>\" value: \"<variable value>\"", "curl -H \"Content-Type: application/yaml\" --data-binary @payload_file.yaml -X POST -k https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic", "oc describe bc <name>", "kind: \"ImageStream\" apiVersion: \"v1\" metadata: name: \"ruby-20-centos7\"", "strategy: sourceStrategy: from: kind: \"ImageStreamTag\" name: \"ruby-20-centos7:latest\"", "type: \"ImageChange\" 1 imageChange: {} type: \"ImageChange\" 2 imageChange: from: kind: \"ImageStreamTag\" name: \"custom-image:latest\"", "strategy: sourceStrategy: from: kind: \"DockerImage\" name: \"172.30.17.3:5001/mynamespace/ruby-20-centos7:<immutableid>\"", "type: \"ImageChange\" imageChange: from: kind: \"ImageStreamTag\" name: \"custom-image:latest\" paused: true", "apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: bc-ict-example namespace: bc-ict-example-namespace spec: triggers: - imageChange: from: kind: ImageStreamTag name: input:latest namespace: bc-ict-example-namespace - imageChange: from: kind: ImageStreamTag name: input2:latest namespace: bc-ict-example-namespace type: ImageChange status: imageChangeTriggers: - from: name: input:latest namespace: bc-ict-example-namespace lastTriggerTime: \"2021-06-30T13:47:53Z\" lastTriggeredImageID: image-registry.openshift-image-registry.svc:5000/bc-ict-example-namespace/input@sha256:0f88ffbeb9d25525720bfa3524cb1bf0908b7f791057cf1acfae917b11266a69 - from: name: input2:latest namespace: bc-ict-example-namespace lastTriggeredImageID: image-registry.openshift-image-registry.svc:5000/bc-ict-example-namespace/input2@sha256:0f88ffbeb9d25525720bfa3524cb2ce0908b7f791057cf1acfae917b11266a69 lastVersion: 1", "Then you use the `name` and `namespace` from that build to find the corresponding image change trigger in `buildConfig.spec.triggers`.", "type: \"ConfigChange\"", "oc set triggers bc <name> --from-github", "oc set triggers bc <name> --from-image='<image>'", "oc set triggers bc <name> --from-bitbucket --remove", "oc set triggers --help", "postCommit: script: \"bundle exec rake test --verbose\"", "postCommit: command: [\"/bin/bash\", \"-c\", \"bundle exec rake test --verbose\"]", "postCommit: command: [\"bundle\", \"exec\", \"rake\", \"test\"] args: [\"--verbose\"]", "oc set build-hook bc/mybc --post-commit --command -- bundle exec rake test --verbose", "oc set build-hook bc/mybc --post-commit --script=\"bundle exec rake test --verbose\"", "apiVersion: \"v1\" kind: \"BuildConfig\" metadata: name: \"sample-build\" spec: resources: limits: cpu: \"100m\" 1 memory: \"256Mi\" 2", "resources: requests: 1 cpu: \"100m\" memory: \"256Mi\"", "spec: completionDeadlineSeconds: 1800", "apiVersion: \"v1\" kind: \"BuildConfig\" metadata: name: \"sample-build\" spec: nodeSelector: 1 key1: value1 key2: value2", "apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: artifact-build spec: output: to: kind: ImageStreamTag name: artifact-image:latest source: git: uri: https://github.com/openshift/openshift-jee-sample.git ref: \"master\" strategy: sourceStrategy: from: kind: ImageStreamTag name: wildfly:10.1 namespace: openshift", "apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: image-build spec: output: to: kind: ImageStreamTag name: 
image-build:latest source: dockerfile: |- FROM jee-runtime:latest COPY ROOT.war /deployments/ROOT.war images: - from: 1 kind: ImageStreamTag name: artifact-image:latest paths: 2 - sourcePath: /wildfly/standalone/deployments/ROOT.war destinationDir: \".\" strategy: dockerStrategy: from: 3 kind: ImageStreamTag name: jee-runtime:latest triggers: - imageChange: {} type: ImageChange", "apiVersion: \"v1\" kind: \"BuildConfig\" metadata: name: \"sample-build\" spec: successfulBuildsHistoryLimit: 2 1 failedBuildsHistoryLimit: 2 2", "oc tag --source=docker registry.redhat.io/ubi9/ubi:latest ubi9:latest -n openshift", "apiVersion: image.openshift.io/v1 kind: ImageStream metadata: name: ubi9 namespace: openshift spec: tags: - from: kind: DockerImage name: registry.redhat.io/ubi9/ubi:latest name: latest referencePolicy: type: Source", "oc tag --source=docker registry.redhat.io/ubi9/ubi:latest ubi:latest", "apiVersion: image.openshift.io/v1 kind: ImageStream metadata: name: ubi9 spec: tags: - from: kind: DockerImage name: registry.redhat.io/ubi9/ubi:latest name: latest referencePolicy: type: Source", "cat << EOF > secret-template.txt kind: Secret apiVersion: v1 metadata: name: etc-pki-entitlement type: Opaque data: {{ range \\USDkey, \\USDvalue := .data }} {{ \\USDkey }}: {{ \\USDvalue }} {{ end }} EOF oc get secret etc-pki-entitlement -n openshift-config-managed -o=go-template-file --template=secret-template.txt | oc apply -f -", "strategy: dockerStrategy: from: kind: ImageStreamTag name: ubi9:latest volumes: - name: etc-pki-entitlement mounts: - destinationPath: /etc/pki/entitlement source: type: Secret secret: secretName: etc-pki-entitlement", "FROM registry.redhat.io/ubi9/ubi:latest RUN rm -rf /etc/rhsm-host 1 RUN yum --enablerepo=codeready-builder-for-rhel-9-x86_64-rpms install \\ 2 nss_wrapper uid_wrapper -y && yum clean all -y RUN ln -s /run/secrets/rhsm /etc/rhsm-host 3", "[test-<name>] name=test-<number> baseurl = https://satellite.../content/dist/rhel/server/7/7Server/x86_64/os enabled=1 gpgcheck=0 sslverify=0 sslclientkey = /etc/pki/entitlement/...-key.pem sslclientcert = /etc/pki/entitlement/....pem", "oc create configmap yum-repos-d --from-file /path/to/satellite.repo", "strategy: dockerStrategy: from: kind: ImageStreamTag name: ubi9:latest volumes: - name: yum-repos-d mounts: - destinationPath: /etc/yum.repos.d source: type: ConfigMap configMap: name: yum-repos-d - name: etc-pki-entitlement mounts: - destinationPath: /etc/pki/entitlement source: type: Secret secret: secretName: etc-pki-entitlement", "FROM registry.redhat.io/ubi9/ubi:latest RUN rm -rf /etc/rhsm-host 1 RUN yum --enablerepo=codeready-builder-for-rhel-9-x86_64-rpms install \\ 2 nss_wrapper uid_wrapper -y && yum clean all -y RUN ln -s /run/secrets/rhsm /etc/rhsm-host 3", "oc apply -f - <<EOF kind: SharedSecret apiVersion: sharedresource.openshift.io/v1alpha1 metadata: name: etc-pki-entitlement spec: secretRef: name: etc-pki-entitlement namespace: openshift-config-managed EOF", "oc apply -f - <<EOF apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: builder-etc-pki-entitlement namespace: build-namespace rules: - apiGroups: - sharedresource.openshift.io resources: - sharedsecrets resourceNames: - etc-pki-entitlement verbs: - use EOF", "oc create rolebinding builder-etc-pki-entitlement --role=builder-etc-pki-entitlement --serviceaccount=build-namespace:builder", "apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: uid-wrapper-rhel9 namespace: build-namespace spec: runPolicy: Serial source: 
dockerfile: | FROM registry.redhat.io/ubi9/ubi:latest RUN rm -rf /etc/rhsm-host 1 RUN yum --enablerepo=codeready-builder-for-rhel-9-x86_64-rpms install \\ 2 nss_wrapper uid_wrapper -y && yum clean all -y RUN ln -s /run/secrets/rhsm /etc/rhsm-host 3 strategy: type: Docker dockerStrategy: volumes: - mounts: - destinationPath: \"/etc/pki/entitlement\" name: etc-pki-entitlement source: csi: driver: csi.sharedresource.openshift.io readOnly: true 4 volumeAttributes: sharedSecret: etc-pki-entitlement 5 type: CSI", "oc start-build uid-wrapper-rhel9 -n build-namespace -F", "oc annotate clusterrolebinding.rbac system:build-strategy-docker-binding 'rbac.authorization.kubernetes.io/autoupdate=false' --overwrite", "oc adm policy remove-cluster-role-from-group system:build-strategy-docker system:authenticated", "oc get clusterrole admin -o yaml | grep \"builds/docker\"", "oc get clusterrole edit -o yaml | grep \"builds/docker\"", "oc adm policy add-cluster-role-to-user system:build-strategy-docker devuser", "oc adm policy add-role-to-user system:build-strategy-docker devuser -n devproject", "oc edit build.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Build 1 metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 2 name: cluster resourceVersion: \"107233\" selfLink: /apis/config.openshift.io/v1/builds/cluster uid: e2e9cc14-78a9-11e9-b92b-06d6c7da38dc spec: buildDefaults: 2 defaultProxy: 3 httpProxy: http://proxy.com httpsProxy: https://proxy.com noProxy: internal.com env: 4 - name: envkey value: envvalue gitProxy: 5 httpProxy: http://gitproxy.com httpsProxy: https://gitproxy.com noProxy: internalgit.com imageLabels: 6 - name: labelkey value: labelvalue resources: 7 limits: cpu: 100m memory: 50Mi requests: cpu: 10m memory: 10Mi buildOverrides: 8 imageLabels: 9 - name: labelkey value: labelvalue nodeSelector: 10 selectorkey: selectorvalue tolerations: 11 - effect: NoSchedule key: node-role.kubernetes.io/builds operator: Exists", "requested access to the resource is denied", "oc describe quota", "secret/ssl-key references serviceUID 62ad25ca-d703-11e6-9d6f-0e9c0057b608, which does not match 77b6dd80-d716-11e6-9d6f-0e9c0057b60", "oc delete secret <secret_name>", "oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-", "oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-num-", "oc create configmap registry-cas -n openshift-config --from-file=myregistry.corp.com..5000=/etc/docker/certs.d/myregistry.corp.com:5000/ca.crt --from-file=otherregistry.com=/etc/docker/certs.d/otherregistry.com/ca.crt", "oc patch image.config.openshift.io/cluster --patch '{\"spec\":{\"additionalTrustedCA\":{\"name\":\"registry-cas\"}}}' --type=merge" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html-single/builds_using_buildconfig/index
Chapter 33. Customizing process administration
Chapter 33. Customizing process administration You can customize the default pagination option in Business Central by editing the Default items per page property on the Process Administration page. Procedure In Business Central, select the Admin icon in the top-right corner of the screen and select Process Administration . From the Properties section, update the Default items per page property and click Save . Note You can specify 10, 20, 50, or 100 items to display on each page.
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/managing_red_hat_process_automation_manager_and_kie_server_settings/managing-business-central-process-administration-proc
function::print_ubacktrace
function::print_ubacktrace Name function::print_ubacktrace - Print stack back trace for current user-space task. Synopsis Arguments None Description Equivalent to print_ustack( ubacktrace ), except that deeper stack nesting may be supported. Returns nothing. See print_backtrace for kernel backtrace. Note To get (full) backtraces for user-space applications and shared libraries not mentioned in the current script, run stap with -d /path/to/exe-or-so and/or add --ldd to load all needed unwind data.
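As a usage illustration of the note above, the following one-liner is a minimal sketch; the target binary /bin/ls and the probe point main are arbitrary examples, not part of this reference entry, and debuginfo for the probed executable must be available:

# Print the user-space backtrace each time the probed function is hit.
# -d preloads unwind data for the executable; --ldd also loads it for its shared libraries.
stap -d /bin/ls --ldd -e 'probe process("/bin/ls").function("main") { print_ubacktrace() }' -c /bin/ls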
[ "print_ubacktrace()" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-print-ubacktrace
3.8. Authentication
3.8. Authentication Identity Management component When using the Identity Management WebUI in the Internet Explorer browser, you may encounter the following issues: While the browser window is not maximized or many users are logged into the WebUI, scrolling down a page to select a user may not work properly. As soon as the user's checkbox is selected, the scroll bar jumps back up without selecting the user. This error also occurs when a permission is added to a privilege. (BZ# 831299 ) When attempting to edit a service, the edit page for that service may occasionally be blank, or show only labels for Principal or Service without showing their values. When adding a service, under certain conditions, the drop-down menu lists the available services and hosts, but users are unable to select any of the entries. (BZ# 831227 ) When adding a permission of type subtree, the text area to specify the subtree is too small and non-resizable, making it difficult to enter long subtree entries. (BZ# 830817 ) When adding a delegation, its attributes are separated by disproportionately large vertical spaces. (BZ# 829899 ) When adding a member, the edge of the displayed window suggests it can be resized. However, resizing of the window does not work. When adding a Sudo Command to a Sudo Command group, the first group overlaps with the column title. (BZ# 829746 ) Adding a new DNS zone causes the window to be incorrectly rendered as text on the existing page. (BZ# 827583 ) Identity Management component, BZ# 826973 When Identity Management is installed with its CA certificate signed by an external CA, the installation is processed in 2 stages. In the first stage, a CSR is generated to be signed by an external CA. The second stage of the installation then accepts a file with the new signed certificate for the Identity Management CA and a certificate of the external CA. During the second stage of the installation, a signed Identity Management CA certificate subject is validated. However, there is a bug in the certificate subject validation procedure and its default value ( O=$REALM , where $REALM is the realm of the new Identity Management installation) is never pulled. Consequently, the second stage of the installation process always fails unless the --subject option is specified. To work around this issue, add the following option for the second stage of the installation: --subject "O=$REALM" where $REALM is the realm of the new Identity Management installation. If a custom subject was used for the first stage of the installation, use its value instead. Using this workaround, the certificate subject validation procedure succeeds and the installation continues as expected. Identity Management component, BZ# 822350 When a user is migrated from a remote LDAP, the user's entry in the Directory Server does not contain the Kerberos credentials needed for a Kerberos login. When the user visits the password migration page, Kerberos credentials are generated for the user and logging in via Kerberos authentication works as expected. However, Identity Management does not generate the credentials correctly when the migrated password does not follow the password policy set on the Identity Management server. Consequently, when the password migration is done and a user tries to log in via Kerberos authentication, the user is prompted to change the password as it does not follow the password policy, but the password change is never successful and the user is not able to use Kerberos authentication.
To work around this issue, an administrator can reset the password of a migrated user with the ipa passwd command. When reset, the user's Kerberos credentials in the Directory Server are properly generated and the user is able to log in using Kerberos authentication. Identity Management component In the Identity Management WebUI, deleting a DNS record may, under some circumstances, leave it visible on the page showing DNS records. This is only a display issue and does not affect the functionality of DNS records in any way. Identity Management component, BZ# 783502 The Identity Management permission plug-in does not verify that the set of attributes specified for a new permission is relevant to the target object type that the permission allows access to. This means a user is able to create a permission which allows access to attributes that will never be present in the target object type because such attributes are not allowed in its object classes. You must ensure that the chosen set of attributes to which a new permission grants access is relevant to the chosen target object type. Identity Management component, BZ# 790513 The ipa-client package does not install the policycoreutils package as its dependency, which may cause install/uninstall issues when using the ipa-client-install setup script. To work around this issue, install the policycoreutils package manually: Identity Management component, BZ# 813376 Updating the Identity Management LDAP configuration via the ipa-ldap-updater fails with a traceback error when executed by a non-root user due to the SASL EXTERNAL bind requiring root privileges. To work around this issue, run the aforementioned command as the root user. Identity Management component, BZ# 794882 With netgroups, when adding a host as a member that Identity Management does not have stored as a host already, that host is considered to be an external host. This host can be controlled with netgroups, but Identity Management has no knowledge of it. Currently, there is no way to use the netgroup-find option to search for external hosts. Also, note that when a host is added to a netgroup as an external host, rather than being added in Identity Management as an external host, that host is not automatically converted within the netgroup rule. Identity Management component, BZ# 786629 Because a permission does not provide write access to an entry, delegation does not work as expected. The 389 Directory Server ( 389-ds ) distinguishes access between entries and attributes. For example, an entry can be granted add or delete access, whereas an attribute can be granted read, search, and write access. To grant write access to an entry, the list of writable attributes needs to be provided. The filter , subtree , and other options are used to target those entries which are writable. Attributes define which part(s) of those entries are writable. As a result, the list of attributes will be writable to members of the permission. sssd component, BZ# 808063 The manpage entry for the ldap_disable_paging option in the sssd-ldap man page does not indicate that it accepts the boolean values True or False, defaulting to False if it is not explicitly specified. Identity Management component, BZ# 812127 Identity Management relies on the LDAP schema to know what type of data to expect in a given attribute.
If, in certain situations (such as replication), data that does not meet those expectations is inserted into an attribute, Identity Management will not be able to handle the entry, and LDAP tools have to be used to manually clean up that entry. Identity Management component, BZ# 812122 Identity Management sudo commands are not case sensitive. For example, executing the following commands will result in the latter one failing due to the case insensitivity: Identity Management component Identity Management and the mod_ssl module should not be installed on the same system, otherwise Identity Management is unable to issue certificates because mod_ssl holds the mod_proxy hooks. To work around this issue, uninstall mod_ssl . Identity Management component When an Identity Management server is installed with a custom hostname that is not resolvable, the ipa-server-install command should add a record to the static hostname lookup table in /etc/hosts and enable further configuration of Identity Management integrated services. However, a record is not added to /etc/hosts when an IP address is passed as a CLI option and not interactively. Consequently, Identity Management installation fails because integrated services that are being configured expect the Identity Management server hostname to be resolvable. To work around this issue, complete one of the following: Run the ipa-server-install command without the --ip-address option and pass the IP address interactively. Add a record to /etc/hosts before the installation is started. The record should contain the Identity Management server IP address and its full hostname (the hosts(5) man page specifies the record format). As a result, the Identity Management server can be installed with a custom hostname that is not resolvable. sssd component, BZ# 750922 Upgrading SSSD from the version provided in Red Hat Enterprise Linux 6.1 to the version shipped with Red Hat Enterprise Linux 6.2 may fail due to a bug in the dependent library libldb . This failure occurs when the SSSD cache contains internal entries whose distinguished name contains the \, character sequence. The most likely example of this is for an invalid memberUID entry to appear in an LDAP group of the form: memberUID is a multi-valued attribute and should not have multiple users in the same attribute. If the upgrade issue occurs, identifiable by the following debug log message: remove the /var/lib/sss/db/cache_<DOMAIN>.ldb file and restart SSSD. Warning Removing the /var/lib/sss/db/cache_<DOMAIN>.ldb file purges the cache of all entries (including cached credentials). sssd component, BZ# 751314 When a group contains certain incorrect multi-valued memberUID values, SSSD fails to sanitize the values properly. The memberUID value should only contain one username. As a result, SSSD creates incorrect users, using the broken memberUID values as their usernames. This, for example, causes problems during cache indexing. Identity Management component, BZ# 750596 Two Identity Management servers, both with a CA (Certificate Authority) installed, use two replication agreements. One is for user, group, host, and other related data. Another replication agreement is established between the CA instances installed on the servers.
If the CA replication agreement is broken, the Identity Management data is still shared between the two servers. However, because there is no replication agreement between the two CAs, issuing a certificate on one server will cause the other server to not recognize that certificate, and vice versa. Identity Management component The Identity Management ( ipa ) package cannot be built with a 6ComputeNode subscription. Identity Management component On the configuration page of the Identity Management WebUI, if the User search field is left blank and the search button is clicked, an internal error is returned. sssd component, BZ# 741264 Active Directory performs certain LDAP referral-chasing that is incompatible with the referral mechanism included in the openldap libraries. Notably, Active Directory sometimes attempts to return a referral on an LDAP bind attempt, which used to cause a hang, and is now denied by the openldap libraries. As a result, SSSD may suffer from performance issues and occasional failures resulting in missing information. To work around this issue, disable referral-chasing by setting the following parameter in the [domain/DOMAINNAME] section of the /etc/sssd/sssd.conf file:
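The fragment below is a minimal sketch of where that setting lives; the section name EXAMPLE.COM is a placeholder for your own [domain/DOMAINNAME] section and only the relevant line is shown:

# /etc/sssd/sssd.conf (fragment)
[domain/EXAMPLE.COM]
ldap_referrals = false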
[ "~]# yum install policycoreutils", "~]USD ipa sudocmd-add /usr/bin/X ... ~]USD ipa sudocmd-add /usr/bin/x ipa: ERROR: sudo command with name \"/usr/bin/x\" already exists", "memberUID: user1,user2", "(Wed Nov 2 15:18:21 2011) [sssd] [ldb] (0): A transaction is still active in ldb context [0xaa0460] on /var/lib/sss/db/cache_<DOMAIN>.ldb", "ldap_referrals = false" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/authentication_issues
B.10. Could not add rule to fixup DHCP response checksums on network 'default'
B.10. Could not add rule to fixup DHCP response checksums on network 'default' Symptom This message appears: Investigation Although this message appears to be evidence of an error, it is almost always harmless. Solution Unless the problem you are experiencing is that the guest virtual machines are unable to acquire IP addresses through DHCP, this message can be ignored. If this is the case, refer to Section B.8, "PXE Boot (or DHCP) on Guest Failed" for further details on this situation.
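If you want to confirm whether the checksum-mangling rule was actually installed, one way to check is sketched below; this assumes the default virtual network bridge and that libvirt places the rule in the mangle table's POSTROUTING chain, and the exact output varies by libvirt and iptables version:

# Look for the DHCP checksum fix-up rule (UDP destination port 68) added for the virtual network.
iptables -t mangle -L POSTROUTING -n -v | grep -i checksum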
[ "Could not add rule to fixup DHCP response checksums on network 'default'" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_host_configuration_and_guest_installation_guide/App_DHCP_Response_Checksums
Chapter 4. Management of monitors using the Ceph Orchestrator
Chapter 4. Management of monitors using the Ceph Orchestrator As a storage administrator, you can deploy additional monitors using placement specification, add monitors using service specification, add monitors to a subnet configuration, and add monitors to specific hosts. Apart from this, you can remove the monitors using the Ceph Orchestrator. By default, a typical Red Hat Ceph Storage cluster has three or five monitor daemons deployed on different hosts. Red Hat recommends deploying five monitors if there are five or more nodes in a cluster. Note Red Hat recommends deploying three monitors when Ceph is deployed with the OSP director. Ceph deploys monitor daemons automatically as the cluster grows, and scales back monitor daemons automatically as the cluster shrinks. The smooth execution of this automatic growing and shrinking depends upon proper subnet configuration. If your monitor nodes or your entire cluster are located on a single subnet, then Cephadm automatically adds up to five monitor daemons as you add new hosts to the cluster. Cephadm automatically configures the monitor daemons on the new hosts. The new hosts reside on the same subnet as the bootstrapped host in the storage cluster. Cephadm can also deploy and scale monitors to correspond to changes in the size of the storage cluster. 4.1. Ceph Monitors Ceph Monitors are lightweight processes that maintain a master copy of the storage cluster map. All Ceph clients contact a Ceph monitor and retrieve the current copy of the storage cluster map, enabling clients to bind to a pool and read and write data. Ceph Monitors use a variation of the Paxos protocol to establish consensus about maps and other critical information across the storage cluster. Due to the nature of Paxos, Ceph requires a majority of monitors running to establish a quorum, thus establishing consensus. Important Red Hat requires at least three monitors on separate hosts to receive support for a production cluster. Red Hat recommends deploying an odd number of monitors. An odd number of Ceph Monitors has a higher resilience to failures than an even number of monitors. For example, to maintain a quorum on a two-monitor deployment, Ceph cannot tolerate any failures; with three monitors, one failure; with four monitors, one failure; with five monitors, two failures. This is why an odd number is advisable. Summarizing, Ceph needs a majority of monitors to be running and to be able to communicate with each other, two out of three, three out of four, and so on. For an initial deployment of a multi-node Ceph storage cluster, Red Hat requires three monitors, increasing the number two at a time if a valid need for more than three monitors exists. Since Ceph Monitors are lightweight, it is possible to run them on the same host as OpenStack nodes. However, Red Hat recommends running monitors on separate hosts. Important Red Hat ONLY supports collocating Ceph services in containerized environments. When you remove monitors from a storage cluster, consider that Ceph Monitors use the Paxos protocol to establish a consensus about the master storage cluster map. You must have a sufficient number of Ceph Monitors to establish a quorum. Additional Resources See the Red Hat Ceph Storage Supported configurations Knowledgebase article for all the supported Ceph configurations. 4.2. Configuring monitor election strategy The monitor election strategy identifies the net splits and handles failures. 
You can configure the election monitor strategy in three different modes: classic - This is the default mode in which the lowest ranked monitor is voted based on the elector module between the two sites. disallow - This mode lets you mark monitors as disallowed, in which case they will participate in the quorum and serve clients, but cannot be an elected leader. This lets you add monitors to a list of disallowed leaders. If a monitor is in the disallowed list, it will always defer to another monitor. connectivity - This mode is mainly used to resolve network discrepancies. It evaluates connection scores, based on pings that check liveness, provided by each monitor for its peers and elects the most connected and reliable monitor to be the leader. This mode is designed to handle net splits, which may happen if your cluster is stretched across multiple data centers or otherwise susceptible. This mode incorporates connection score ratings and elects the monitor with the best score. If a specific monitor is desired to be the leader, configure the election strategy so that the specific monitor is the first monitor in the list with a rank of 0 . Red Hat recommends you to stay in the classic mode unless you require features in the other modes. Before constructing the cluster, change the election_strategy to classic , disallow , or connectivity in the following command: Syntax 4.3. Deploying the Ceph monitor daemons using the command line interface The Ceph Orchestrator deploys one monitor daemon by default. You can deploy additional monitor daemons by using the placement specification in the command line interface. To deploy a different number of monitor daemons, specify a different number. If you do not specify the hosts where the monitor daemons should be deployed, the Ceph Orchestrator randomly selects the hosts and deploys the monitor daemons to them. Prerequisites A running Red Hat Ceph Storage cluster. Hosts are added to the cluster. Procedure Log into the Cephadm shell: Example There are four different ways of deploying Ceph monitor daemons: Method 1 Use placement specification to deploy monitors on hosts: Note Red Hat recommends that you use the --placement option to deploy on specific hosts. Syntax Example Note Be sure to include the bootstrap node as the first node in the command. Important Do not add the monitors individually as ceph orch apply mon supersedes and will not add the monitors to all the hosts. For example, if you run the following commands, then the first command creates a monitor on host01 . Then the second command supersedes the monitor on host1 and creates a monitor on host02 . Then the third command supersedes the monitor on host02 and creates a monitor on host03 . Eventually, there is a monitor only on the third host. Method 2 Use placement specification to deploy specific number of monitors on specific hosts with labels: Add the labels to the hosts: Syntax Example Deploy the daemons: Syntax Example Method 3 Use placement specification to deploy specific number of monitors on specific hosts: Syntax Example Method 4 Deploy monitor daemons randomly on the hosts in the storage cluster: Syntax Example Verification List the service: Example List the hosts, daemons, and processes: Syntax Example 4.4. Deploying the Ceph monitor daemons using the service specification The Ceph Orchestrator deploys one monitor daemon by default. You can deploy additional monitor daemons by using the service specification, like a YAML format file. 
Prerequisites A running Red Hat Ceph Storage cluster. Hosts are added to the cluster. Procedure Create the mon.yaml file: Example Edit the mon.yaml file to include the following details: Syntax Example Mount the YAML file under a directory in the container: Example Navigate to the directory: Example Deploy the monitor daemons: Syntax Example Verification List the service: Example List the hosts, daemons, and processes: Syntax Example 4.5. Deploying the monitor daemons on specific network using the Ceph Orchestrator The Ceph Orchestrator deploys one monitor daemon by default. You can explicitly specify the IP address or CIDR network for each monitor and control where each monitor is placed. Prerequisites A running Red Hat Ceph Storage cluster. Hosts are added to the cluster. Procedure Log into the Cephadm shell: Example Disable automated monitor deployment: Example Deploy monitors on hosts on specific network: Syntax Example Verification List the service: Example List the hosts, daemons, and processes: Syntax Example 4.6. Removing the monitor daemons using the Ceph Orchestrator To remove the monitor daemons from the host, you can just redeploy the monitor daemons on other hosts. Prerequisites A running Red Hat Ceph Storage cluster. Hosts are added to the cluster. At least one monitor daemon deployed on the hosts. Procedure Log into the Cephadm shell: Example Run the ceph orch apply command to deploy the required monitor daemons: Syntax If you want to remove monitor daemons from host02 , then you can redeploy the monitors on other hosts. Example Verification List the hosts,daemons, and processes: Syntax Example Additional Resources See Deploying the Ceph monitor daemons using the command line interface section in the Red Hat Ceph Storage Operations Guide for more information. See Deploying the Ceph monitor daemons using the service specification section in the Red Hat Ceph Storage Operations Guide for more information. 4.7. Removing a Ceph Monitor from an unhealthy storage cluster You can remove a ceph-mon daemon from an unhealthy storage cluster. An unhealthy storage cluster is one that has placement groups persistently in not active + clean state. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the Ceph Monitor node. At least one running Ceph Monitor node. Procedure Identify a surviving monitor and log into the host: Syntax Example Log in to each Ceph Monitor host and stop all the Ceph Monitors: Syntax Example Set up the environment suitable for extended daemon maintenance and to run the daemon interactively: Syntax Example Extract a copy of the monmap file: Syntax Example Remove the non-surviving Ceph Monitor(s): Syntax Example Inject the surviving monitor map with the removed monitor(s) into the surviving Ceph Monitor: Syntax Example Start only the surviving monitors: Syntax Example Verify the monitors form a quorum: Example Optional: Archive the removed Ceph Monitor's data directory in /var/lib/ceph/ CLUSTER_FSID /mon. HOSTNAME directory.
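For example, to apply the election strategy configuration described in Section 4.2, the commands below are a brief sketch; choosing the connectivity mode here is only an example, and the commands are run from within the Cephadm shell:

# Switch the monitor election strategy and confirm the value recorded in the monitor map.
ceph mon set election_strategy connectivity
ceph mon dump | grep election_strategy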
[ "ceph mon set election_strategy {classic|disallow|connectivity}", "cephadm shell", "ceph orch apply mon --placement=\" HOST_NAME_1 HOST_NAME_2 HOST_NAME_3 \"", "ceph orch apply mon --placement=\"host01 host02 host03\"", "ceph orch apply mon host01 ceph orch apply mon host02 ceph orch apply mon host03", "ceph orch host label add HOSTNAME_1 LABEL", "ceph orch host label add host01 mon", "ceph orch apply mon --placement=\" HOST_NAME_1 :mon HOST_NAME_2 :mon HOST_NAME_3 :mon\"", "ceph orch apply mon --placement=\"host01:mon host02:mon host03:mon\"", "ceph orch apply mon --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 HOST_NAME_3 \"", "ceph orch apply mon --placement=\"3 host01 host02 host03\"", "ceph orch apply mon NUMBER_OF_DAEMONS", "ceph orch apply mon 3", "ceph orch ls", "ceph orch ps --daemon_type= DAEMON_NAME", "ceph orch ps --daemon_type=mon", "touch mon.yaml", "service_type: mon placement: hosts: - HOST_NAME_1 - HOST_NAME_2", "service_type: mon placement: hosts: - host01 - host02", "cephadm shell --mount mon.yaml:/var/lib/ceph/mon/mon.yaml", "cd /var/lib/ceph/mon/", "ceph orch apply -i FILE_NAME .yaml", "ceph orch apply -i mon.yaml", "ceph orch ls", "ceph orch ps --daemon_type= DAEMON_NAME", "ceph orch ps --daemon_type=mon", "cephadm shell", "ceph orch apply mon --unmanaged", "ceph orch daemon add mon HOST_NAME_1 : IP_OR_NETWORK", "ceph orch daemon add mon host03:10.1.2.123", "ceph orch ls", "ceph orch ps --daemon_type= DAEMON_NAME", "ceph orch ps --daemon_type=mon", "cephadm shell", "ceph orch apply mon \" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_3 \"", "ceph orch apply mon \"2 host01 host03\"", "ceph orch ps --daemon_type= DAEMON_NAME", "ceph orch ps --daemon_type=mon", "ssh root@ MONITOR_ID", "ssh root@host00", "cephadm unit --name DAEMON_NAME . HOSTNAME stop", "cephadm unit --name mon.host00 stop", "cephadm shell --name DAEMON_NAME . HOSTNAME", "cephadm shell --name mon.host00", "ceph-mon -i HOSTNAME --extract-monmap TEMP_PATH", "ceph-mon -i host01 --extract-monmap /tmp/monmap 2022-01-05T11:13:24.440+0000 7f7603bd1700 -1 wrote monmap to /tmp/monmap", "monmaptool TEMPORARY_PATH --rm HOSTNAME", "monmaptool /tmp/monmap --rm host01", "ceph-mon -i HOSTNAME --inject-monmap TEMP_PATH", "ceph-mon -i host00 --inject-monmap /tmp/monmap", "cephadm unit --name DAEMON_NAME . HOSTNAME start", "cephadm unit --name mon.host00 start", "ceph -s" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/operations_guide/management-of-monitors-using-the-ceph-orchestrator
Chapter 16. File and Print Servers
Chapter 16. File and Print Servers This chapter guides you through the installation and configuration of Samba , an open source implementation of the Server Message Block ( SMB ) and common Internet file system ( CIFS ) protocol, and vsftpd , the primary FTP server shipped with Red Hat Enterprise Linux. Additionally, it explains how to use the Print Settings tool to configure printers. 16.1. Samba Samba implements the Server Message Block (SMB) protocol in Red Hat Enterprise Linux. The SMB protocol is used to access resources on a server, such as file shares and shared printers. Additionally, Samba implements the Distributed Computing Environment Remote Procedure Call (DCE RPC) protocol used by Microsoft Windows. You can run Samba as: An Active Directory (AD) or NT4 domain member A standalone server An NT4 Primary Domain Controller (PDC) or Backup Domain Controller (BDC) Note Red Hat supports these modes only in existing installations with Windows versions which support NT4 domains. Red Hat recommends not setting up a new Samba NT4 domain, because Microsoft operating systems later than Windows 7 and Windows Server 2008 R2 do not support NT4 domains. Independently of the installation mode, you can optionally share directories and printers. This enables Samba to act as a file and print server. Note Red Hat does not support running Samba as an AD domain controller (DC). 16.1.1. The Samba Services Samba provides the following services: smbd This service provides file sharing and printing services using the SMB protocol. Additionally, the service is responsible for resource locking and for authenticating connecting users. The smb systemd service starts and stops the smbd daemon. To use the smbd service, install the samba package. nmbd This service provides host name and IP resolution using the NetBIOS over IPv4 protocol. Additionally to the name resolution, the nmbd service enables browsing the SMB network to locate domains, work groups, hosts, file shares, and printers. For this, the service either reports this information directly to the broadcasting client or forwards it to a local or master browser. The nmb systemd service starts and stops the nmbd daemon. Note that modern SMB networks use DNS to resolve clients and IP addresses. To use the nmbd service, install the samba package. winbindd The winbindd service provides an interface for the Name Service Switch (NSS) to use AD or NT4 domain users and groups on the local system. This enables, for example, domain users to authenticate to services hosted on a Samba server or to other local services. The winbind systemd service starts and stops the winbindd daemon. If you set up Samba as a domain member, winbindd must be started before the smbd service. Otherwise, domain users and groups are not available to the local system. To use the winbindd service, install the samba-winbind package. Important Red Hat only supports running Samba as a server with the winbindd service to provide domain users and groups to the local system. Due to certain limitations, such as missing Windows access control list (ACL) support and NT LAN Manager (NTLM) fallback, use of the System Security Services Daemon (SSSD) with Samba is currently not supported for these use cases. For further details, see the Red Hat Knowledgebase article What is the support status for Samba file server running on IdM clients or directly enrolled AD clients where SSSD is used as the client daemon . 16.1.2. 
Verifying the smb.conf File by Using the testparm Utility The testparm utility verifies that the Samba configuration in the /etc/samba/smb.conf file is correct. The utility detects invalid parameters and values, but also incorrect settings, such as for ID mapping. If testparm reports no problem, the Samba services will successfully load the /etc/samba/smb.conf file. Note that testparm cannot verify that the configured services will be available or work as expected. Important Red Hat recommends that you verify the /etc/samba/smb.conf file by using testparm after each modification of this file. To verify the /etc/samba/smb.conf file, run the testparm utility as the root user. If testparm reports incorrect parameters, values, or other errors in the configuration, fix the problem and run the utility again. Example 16.1. Using testparm The following output reports a non-existent parameter and an incorrect ID mapping configuration: 16.1.3. Understanding the Samba Security Modes The security parameter in the [global] section in the /etc/samba/smb.conf file manages how Samba authenticates users that are connecting to the service. Depending on the mode you install Samba in, the parameter must be set to different values: On an AD domain member, set security = ads . In this mode, Samba uses Kerberos to authenticate AD users. For details about setting up Samba as a domain member, see Section 16.1.5, "Setting up Samba as a Domain Member" . On a standalone server, set security = user . In this mode, Samba uses a local database to authenticate connecting users. For details about setting up Samba as a standalone server, see Section 16.1.4, "Setting up Samba as a Standalone Server" . On an NT4 PDC or BDC, set security = user . In this mode, Samba authenticates users to a local or LDAP database. On an NT4 domain member, set security = domain . In this mode, Samba authenticates connecting users to an NT4 PDC or BDC. You cannot use this mode on AD domain members. For details about setting up Samba as a domain member, see Section 16.1.5, "Setting up Samba as a Domain Member" . For further details, see the description of the security parameter in the smb.conf (5) man page. 16.1.4. Setting up Samba as a Standalone Server In certain situations, administrators want to set up a Samba server that is not a member of a domain. In this installation mode, Samba authenticates users to a local database instead of to a central DC. Additionally, you can enable guest access to allow users to connect to one or multiple services without authentication. 16.1.4.1. Setting up the Server Configuration for the Standalone Server To set up Samba as a standalone server: Setting up Samba as a Standalone Server Install the samba package: Edit the /etc/samba/smb.conf file and set the following parameters: This configuration defines a standalone server named Server within the Example-WG work group. Additionally, this configuration enables logging on a minimal level ( 1 ) and log files will be stored in the /var/log/samba/ directory. Samba will expand the %m macro in the log file parameter to the NetBIOS name of connecting clients. This enables individual log files for each client. For further details, see the parameter descriptions in the smb.conf (5) man page. Configure file or printer sharing. See: Section 16.1.6, "Configuring File Shares on a Samba Server" Section 16.1.7, "Setting up a Samba Print Server" Verify the /etc/samba/smb.conf file: For details, see Section 16.1.2, "Verifying the smb.conf File by Using the testparm Utility" . 
If you set up shares that require authentication, create the user accounts. For details, see Section 16.1.4.2, "Creating and Enabling Local User Accounts" . Open the required ports and reload the firewall configuration by using the firewall-cmd utility: Start the smb service: Optionally, enable the smb service to start automatically when the system boots: 16.1.4.2. Creating and Enabling Local User Accounts To enable users to authenticate when they connect to a share, you must create the accounts on the Samba host both in the operating system and in the Samba database. Samba requires the operating system account to validate the Access Control Lists (ACL) on file system objects and the Samba account to authenticate connecting users. If you use the passdb backend = tdbsam default setting, Samba stores user accounts in the /var/lib/samba/private/passdb.tdb database. For example, to create the example Samba user: Creating a Samba User Create the operating system account: The command adds the example account without creating a home directory. If the account is only used to authenticate to Samba, assign the /sbin/nologin command as shell to prevent the account from logging in locally. Set a password to the operating system account to enable it: Samba does not use the password set on the operating system account to authenticate. However, you need to set a password to enable the account. If an account is disabled, Samba denies access if this user connects. Add the user to the Samba database and set a password to the account: Use this password to authenticate when using this account to connect to a Samba share. Enable the Samba account: 16.1.5. Setting up Samba as a Domain Member Administrators running an AD or NT4 domain often want to use Samba to join their Red Hat Enterprise Linux server as a member to the domain. This enables you to: Access domain resources on other domain members Authenticate domain users to local services, such as sshd Share directories and printers hosted on the server to act as a file and print server 16.1.5.1. Joining a Domain To join a Red Hat Enterprise Linux system to a domain: Joining a Red Hat Enterprise Linux System to a Domain Install the following packages: To share directories or printers on the domain member, install the samba package: If you join an AD, additionally install the samba-winbind-krb5-locator package: This plug-in enables Kerberos to locate the Key Distribution Center (KDC) based on AD sites using DNS service records. Optionally, rename the existing /etc/samba/smb.conf Samba configuration file: Join the domain. For example, to join a domain named ad.example.com Using the command, the realm utility automatically: Creates a /etc/samba/smb.conf file for a membership in the ad.example.com domain Adds the winbind module for user and group lookups to the /etc/nsswitch.conf file Updates the Pluggable Authentication Module (PAM) configuration files in the /etc/pam.d/ directory Starts the winbind service and enables the service to start when the system boots For further details about the realm utility, see the realm (8) man page and the corresponding section in the Red Hat Windows Integration Guide . Optionally, set an alternative ID mapping back end or customized ID mapping settings in the /etc/samba/smb.conf file. For details, see Section 16.1.5.3, "Understanding ID Mapping" . Optionally, verify the configuration. See Section 16.1.5.2, "Verifying That Samba Was Correctly Joined As a Domain Member" . 
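The join command referred to above, for the ad.example.com example, might look like the following. The --membership-software=samba and --client-software=winbind options are assumptions about a typical Samba and winbind based domain member setup rather than a quotation of the original example; realm list afterwards shows the joined domain and the configured software:

    realm join --membership-software=samba --client-software=winbind ad.example.com
    realm list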
Verify that the winbindd is running: Important To enable Samba to query domain user and group information, the winbindd service must be running before you start smbd . If you installed the samba package to share directories and printers, start the smbd service: 16.1.5.2. Verifying That Samba Was Correctly Joined As a Domain Member After you joined a Red Hat Enterprise Linux as a domain member, you can run different tests to verify that the join succeeded. See: the section called "Verifying That the Operating System Can Retrieve Domain User Accounts and Groups" the section called "Verifying If AD Domain Users Can Obtain a Kerberos Ticket" the section called "Listing the Available Domains" Verifying That the Operating System Can Retrieve Domain User Accounts and Groups Use the getent utility to verify that the operating system can retrieve domain users and groups. For example: To query the administrator account in the AD domain: To query the members of the Domain Users group in the AD domain: If the command works correctly, verify that you can use domain users and groups when you set permissions on files and directories. For example, to set the owner of the /srv/samba/example.txt file to AD\administrator and the group to AD\Domain Users : Verifying If AD Domain Users Can Obtain a Kerberos Ticket In an AD environment, users can obtain a Kerberos ticket from the DC. For example, to verify if the administrator user can obtain a Kerberos ticket: Note To use the kinit and klist utilities, install the krb5-workstation package on the Samba domain member. Obtaining a Kerberos Ticket Obtain a ticket for the [email protected] principal: Display the cached Kerberos ticket: Listing the Available Domains To list all domains available through the winbindd service, enter: If Samba was successfully joined as a domain member, the command displays the built-in and local host name, as well as the domain Samba is a member of including trusted domains. Example 16.2. Displaying the Available Domains 16.1.5.3. Understanding ID Mapping Windows domains distinguish users and groups by unique Security Identifiers (SID). However, Linux requires unique UIDs and GIDs for each user and group. If you run Samba as a domain member, the winbindd service is responsible for providing information about domain users and groups to the operating system. To enable the winbindd service to provide unique IDs for users and groups to Linux, you must configure ID mapping in the /etc/samba/smb.conf file for: The local database (default domain) The AD or NT4 domain the Samba server is a member of Each trusted domain from which users must be able to access resources on this Samba server 16.1.5.3.1. Planning ID Ranges Regardless of whether you store the Linux UIDs and GIDs in AD or if you configure Samba to generate them, each domain configuration requires a unique ID range that must not overlap with any of the other domains. Warning If you set overlapping ID ranges, Samba fails to work correctly. Example 16.3. Unique ID Ranges The following shows non-overlapping ID mapping ranges for the default ( * ), AD-DOM , and the TRUST-DOM domains. Important You can only assign one range per domain. Therefore, leave enough space between the domains ranges. This enables you to extend the range later if your domain grows. If you later assign a different range to a domain, the ownership of files and directories previously created by these users and groups will be lost. 16.1.5.3.2. 
The * Default Domain In a domain environment, you add one ID mapping configuration for each of the following: The domain the Samba server is a member of Each trusted domain that should be able to access the Samba server However, for all other objects, Samba assigns IDs from the default domain. This includes: Local Samba users and groups Samba built-in accounts and groups, such as BUILTIN\Administrators Important You must configure the default domain as described in this section to enable Samba to operate correctly. The default domain back end must be writable to permanently store the assigned IDs. For the default domain, you can use one of the following back ends: tdb When you configure the default domain to use the tdb back end, set an ID range that is big enough to include objects that will be created in the future and that are not part of a defined domain ID mapping configuration. For example, set the following in the [global] section in the /etc/samba/smb.conf file: For further details, see Section 16.1.5.4.1, "Using the tdb ID Mapping Back End" . autorid When you configure the default domain to use the autorid back end, adding additional ID mapping configurations for domains is optional. For example, set the following in the [global] section in the /etc/samba/smb.conf file: For further details, see Configuring the autorid Back End . 16.1.5.4. The Different ID Mapping Back Ends Samba provides different ID mapping back ends for specific configurations. The most frequently used back ends are: Table 16.1. Frequently Used ID Mapping Back Ends Back End Use Case tdb The * default domain only ad AD domains only rid AD and NT4 domains autorid AD, NT4, and the * default domain The following sections describe the benefits, recommended scenarios where to use the back end, and how to configure it. 16.1.5.4.1. Using the tdb ID Mapping Back End The winbindd service uses the writable tdb ID mapping back end by default to store Security Identifier (SID), UID, and GID mapping tables. This includes local users, groups, and built-in principals. Use this back end only for the * default domain. For example: For further details about the * default domain, see Section 16.1.5.3.2, "The * Default Domain" . 16.1.5.4.2. Using the ad ID Mapping Back End The ad ID mapping back end implements a read-only API to read account and group information from AD. This provides the following benefits: All user and group settings are stored centrally in AD. User and group IDs are consistent on all Samba servers that use this back end. The IDs are not stored in a local database which can corrupt, and therefore file ownerships cannot be lost. The ad back end reads the following attributes from AD: Table 16.2. Attributes the ad Back End Reads from User and Group Objects AD Attribute Name Object Type Mapped to sAMAccountName User and group User or group name, depending on the object uidNumber User User ID (UID) gidNumber Group Group ID (GID) loginShell [a] User Path to the shell of the user unixHomeDirectory User Path to the home directory of the user primaryGroupID [b] User Primary group ID [a] Samba only reads this attribute if you set idmap config DOMAIN :unix_nss_info = yes . [b] Samba only reads this attribute if you set idmap config DOMAIN :unix_primary_group = yes . Prerequisites of the ad Back End To use the ad ID mapping back end: Both users and groups must have unique IDs set in AD, and the IDs must be within the range configured in the /etc/samba/smb.conf file. 
Objects whose IDs are outside of the range will not be available on the Samba server. Users and groups must have all required attributes set in AD. If required attributes are missing, the user or group will not be available on the Samba server. The required attributes depend on your configuration. See Table 16.2, "Attributes the ad Back End Reads from User and Group Objects" . Configuring the ad Back End To configure a Samba AD member to use the ad ID mapping back end: Configuring the ad Back End on a Domain Member Edit the [global] section in the /etc/samba/smb.conf file: Add an ID mapping configuration for the default domain ( * ) if it does not exist. For example: For further details about the default domain configuration, see Section 16.1.5.3.2, "The * Default Domain" . Enable the ad ID mapping back end for the AD domain: Set the range of IDs that is assigned to users and groups in the AD domain. For example: Important The range must not overlap with any other domain configuration on this server. Additionally, the range must be set big enough to include all IDs assigned in the future. For further details, see Section 16.1.5.3.1, "Planning ID Ranges" . Set that Samba uses the RFC 2307 schema when reading attributes from AD: To enable Samba to read the login shell and the path to the users home directory from the corresponding AD attribute, set: Alternatively, you can set a uniform domain-wide home directory path and login shell that is applied to all users. For example: For details about variable substitution, see the VARIABLE SUBSTITUTIONS section in the smb.conf (5) man page. By default, Samba uses the primaryGroupID attribute of a user object as the user's primary group on Linux. Alternatively, you can configure Samba to use the value set in the gidNumber attribute instead: Verify the /etc/samba/smb.conf file: For details, see Section 16.1.2, "Verifying the smb.conf File by Using the testparm Utility" . Reload the Samba configuration: Verify that the settings work as expected. See the section called "Verifying That the Operating System Can Retrieve Domain User Accounts and Groups" . For further details, see the smb.conf (5) and idmap_ad (8) man pages. 16.1.5.4.3. Using the rid ID Mapping Back End Samba can use the relative identifier (RID) of a Windows SID to generate an ID on Red Hat Enterprise Linux. Note The RID is the last part of a SID. For example, if the SID of a user is S-1-5-21-5421822485-1151247151-421485315-30014 , then 30014 is the corresponding RID. For details, how Samba calculates the local ID, see the idmap_rid (8) man page. The rid ID mapping back end implements a read-only API to calculate account and group information based on an algorithmic mapping scheme for AD and NT4 domains. When you configure the back end, you must set the lowest and highest RID in the idmap config DOMAIN : range parameter. Samba will not map users or groups with a lower or higher RID than set in this parameter. Important As a read-only back end, rid cannot assign new IDs, such as for BUILTIN groups. Therefore, do not use this back end for the * default domain. Benefits All domain users and groups that have an RID within the configured range are automatically available on the domain member. You do not need to manually assign IDs, home directories, and login shells. Drawbacks All domain users get the same login shell and home directory assigned. However, you can use variables. 
User and group IDs are only the same across Samba domain members if all use the rid back end with the same ID range settings. You cannot exclude individual users or groups from being available on the domain member. Only users and groups outside of the configured range are excluded. Based on the formulas the winbindd service uses to calculate the IDs, duplicate IDs can occur in multi-domain environments if objects in different domains have the same RID. Configuring the rid Back End To configure a Samba domain member to use the rid ID mapping back end: Configuring the rid Back End on a Domain Member Edit the [global] section in the /etc/samba/smb.conf file: Add an ID mapping configuration for the default domain ( * ) if it does not exist. For example: For further details about the default domain configuration, see Section 16.1.5.3.2, "The * Default Domain" . Enable the rid ID mapping back end for the domain: Set a range that is big enough to include all RIDs that will be assigned in the future. For example: Samba ignores users and groups whose RIDs in this domain are not within the range. Important The range must not overlap with any other domain configuration on this server. For further details, see Section 16.1.5.3.1, "Planning ID Ranges" . Set a shell and home directory path that will be assigned to all mapped users. For example: For details about variable substitution, see the VARIABLE SUBSTITUTIONS section in the smb.conf (5) man page. Verify the /etc/samba/smb.conf file: For details, see Section 16.1.2, "Verifying the smb.conf File by Using the testparm Utility" . Reload the Samba configuration: Verify that the settings work as expected. See the section called "Verifying That the Operating System Can Retrieve Domain User Accounts and Groups" . 16.1.5.4.4. Using the autorid ID Mapping Back End The autorid back end works similar to the rid ID mapping back end, but can automatically assign IDs for different domains. This enables you to use the autorid back end in the following situations: Only for the * default domain. For the * default domain and additional domains, without the need to create ID mapping configurations for each of the additional domains. Only for specific domains. Benefits All domain users and groups whose calculated UID and GID is within the configured range are automatically available on the domain member. You do not need to manually assign IDs, home directories, and login shells. No duplicate IDs, even if multiple objects in a multi-domain environment have the same RID. Drawbacks User and group IDs are not the same across Samba domain members. All domain users get the same login shell and home directory assigned. However, you can use variables. You cannot exclude individual users or groups from being available on the domain member. Only users and groups whose calculated UID or GID is outside of the configured range are excluded. Configuring the autorid Back End To configure a Samba domain member to use the autorid ID mapping back end for the * default domain: Note If you use autorid for the default domain, adding additional ID mapping configuration for domains is optional. Configuring the autorid Back End on a Domain Member Edit the [global] section in the /etc/samba/smb.conf file: Enable the autorid ID mapping back end for the * default domain: Set a range that is big enough to assign IDs for all existing and future objects. For example: Samba ignores users and groups whose calculated IDs in this domain are not within the range. 
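A hedged sketch of the [global] settings that the two autorid steps above describe; the range value is only an illustration and must be chosen large enough for all current and future objects:

    [global]
    idmap config * : backend = autorid
    idmap config * : range = 10000-999999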
For details about how the back end calculated IDs, see the THE MAPPING FORMULAS section in the idmap_autorid (8) man page. Warning After you set the range and Samba starts using it, you can only increase the upper limit of the range. Any other change to the range can result in new ID assignments, and thus in loosing file ownerships. Optionally, set a range size. For example: Samba assigns this number of continuous IDs for each domain's object until all IDs from the range set in the idmap config * : range parameter are taken. For further details, see the rangesize parameter description in the idmap_autorid (8) man page. Set a shell and home directory path that will be assigned to all mapped users. For example: For details about variable substitution, see the VARIABLE SUBSTITUTIONS section in the smb.conf (5) man page. Optionally, add additional ID mapping configuration for domains. If no configuration for an individual domain is available, Samba calculates the ID using the autorid back end settings in the previously configured * default domain. Important If you configure additional back ends for individual domains, the ranges for all ID mapping configuration must not overlap. For further details, see Section 16.1.5.3.1, "Planning ID Ranges" . Verify the /etc/samba/smb.conf file: For details, see Section 16.1.2, "Verifying the smb.conf File by Using the testparm Utility" . Reload the Samba configuration: Verify that the settings work as expected. See the section called "Verifying That the Operating System Can Retrieve Domain User Accounts and Groups" . 16.1.6. Configuring File Shares on a Samba Server To use Samba as a file server, add shares to the /etc/samba/smb.conf file of your standalone or domain member configuration. You can add shares that uses either: POSIX ACLs. See Section 16.1.6.1, "Setting up a Share That Uses POSIX ACLs" . Fine-granular Windows ACLs. See Section 16.1.6.2, "Setting up a Share That Uses Windows ACLs" . 16.1.6.1. Setting up a Share That Uses POSIX ACLs As a Linux service, Samba supports shares with POSIX ACLs. They enable you to manage permissions locally on the Samba server using utilities, such as chmod . If the share is stored on a file system that supports extended attributes, you can define ACLs with multiple users and groups. Note If you need to use fine-granular Windows ACLs instead, see Section 16.1.6.2, "Setting up a Share That Uses Windows ACLs" . Before you can add a share, set up Samba. See: Section 16.1.4, "Setting up Samba as a Standalone Server" Section 16.1.5, "Setting up Samba as a Domain Member" 16.1.6.1.1. Adding a Share That Uses POSIX ACLs To create a share named example , that provides the content of the /srv/samba/example/ directory, and uses POSIX ACLs: Adding a Share That Uses POSIX ACLs Optionally, create the folder if it does not exist. For example: If you run SELinux in enforcing mode, set the samba_share_t context on the directory: Set file system ACLs on the directory. For details, see Section 16.1.6.1.2, "Setting ACLs" . Add the example share to the /etc/samba/smb.conf file. For example, to add the share write-enabled: Note Regardless of the file system ACLs; if you do not set read only = no , Samba shares the directory in read-only mode. Verify the /etc/samba/smb.conf file: For details, see Section 16.1.2, "Verifying the smb.conf File by Using the testparm Utility" . 
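The share section that the steps above describe might look like the following minimal sketch; the share name and path follow the example, and read only = no makes the share writable. Verify the result with testparm:

    [example]
    path = /srv/samba/example/
    read only = no

    testparm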
Open the required ports and reload the firewall configuration using the firewall-cmd utility: Restart the smb service: Optionally, enable the smb service to start automatically at boot time: 16.1.6.1.2. Setting ACLs Shares that use POSIX ACLs support: Standard Linux ACLs. For details, see Setting Standard Linux ACLs . Extended ACLs. For details, see Setting Extended ACLs . Setting Standard Linux ACLs The standard ACLs on Linux support setting permissions for one owner, one group, and for all other undefined users. You can use the chown , chgrp , and chmod utility to update the ACLs. If you require precise control, then you use the more complex POSIX ACLs, see Setting Extended ACLs . For example, to set the owner of the /srv/samba/example/ directory to the root user, grant read and write permissions to the Domain Users group, and deny access to all other users: Note Enabling the set-group-ID (SGID) bit on a directory automatically sets the default group for all new files and subdirectories to that of the directory group, instead of the usual behavior of setting it to the primary group of the user who created the new directory entry. For further details about permissions, see the chown (1) and chmod (1) man pages. Setting Extended ACLs If the file system the shared directory is stored on supports extended ACLs, you can use them to set complex permissions. Extended ACLs can contain permissions for multiple users and groups. Extended POSIX ACLs enable you to configure complex ACLs with multiple users and groups. However, you can only set the following permissions: No access Read access Write access Full control If you require the fine-granular Windows permissions, such as Create folder / append data , configure the share to use Windows ACLs. See Section 16.1.6.2, "Setting up a Share That Uses Windows ACLs" . To use extended POSIX ACLs on a share: Enabling Extended POSIX ACLs on a Share Enable the following parameter in the share's section in the /etc/samba/smb.conf file to enable ACL inheritance of extended ACLs: For details, see the parameter description in the smb.conf (5) man page. Restart the smb service: Optionally, enable the smb service to start automatically at boot time: Set the ACLs on the directory. For details about using extended ACLs, see Chapter 5, Access Control Lists . Example 16.4. Setting Extended ACLs The following procedure sets read, write, and execute permissions for the Domain Admins group, read, and execute permissions for the Domain Users group, and deny access to everyone else on the /srv/samba/example/ directory: Setting Extended ACLs Disable auto-granting permissions to the primary group of user accounts: The primary group of the directory is additionally mapped to the dynamic CREATOR GROUP principal. When you use extended POSIX ACLs on a Samba share, this principal is automatically added and you cannot remove it. Set the permissions on the directory: Grant read, write, and execute permissions to the Domain Admins group: Grant read and execute permissions to the Domain Users group: Set permissions for the other ACL entry to deny access to users that do not match the other ACL entries: These settings apply only to this directory. In Windows, these ACLs are mapped to the This folder only mode. To enable the permissions set in the step to be inherited by new file system objects created in this directory: With these settings, the This folder only mode for the principals is now set to This folder, subfolders, and files . 
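A hedged sketch of the setfacl commands that the steps above describe; the group names follow the example, and the exact invocations may differ slightly from the original:

    # Clear the permissions of the owning group (one way to disable
    # auto-granting permissions to the primary group)
    setfacl -m "group::---" /srv/samba/example/

    # Permissions on the directory itself ("This folder only")
    setfacl -m "group:DOMAIN\Domain Admins:rwx" /srv/samba/example/
    setfacl -m "group:DOMAIN\Domain Users:r-x" /srv/samba/example/
    setfacl -m "other::---" /srv/samba/example/

    # Default entries so that new files and subdirectories inherit the permissions
    setfacl -d -m "group:DOMAIN\Domain Admins:rwx" /srv/samba/example/
    setfacl -d -m "group:DOMAIN\Domain Users:r-x" /srv/samba/example/
    setfacl -d -m "other::---" /srv/samba/example/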
Samba maps the previously set permissions to the following Windows ACLs: Principal Access Applies to DOMAIN\Domain Admins Full control This folder, subfolders, and files DOMAIN\Domain Users Read & execute This folder, subfolders, and files Everyone [a] None This folder, subfolders, and files owner (Unix User\owner) [b] Full control This folder only primary_group (Unix User\primary_group) [c] None This folder only CREATOR OWNER [d] [e] Full control Subfolders and files only CREATOR GROUP [f] None Subfolders and files only [a] Samba maps the permissions for this principal from the other ACL entry. [b] Samba maps the owner of the directory to this entry. [c] Samba maps the primary group of the directory to this entry. [d] On new file system objects, the creator automatically inherits the permissions of this principal. [e] Configuring or removing these principals from the ACLs is not supported on shares that use POSIX ACLs. [f] On new file system objects, the creator's primary group automatically inherits the permissions of this principal. 16.1.6.1.3. Setting Permissions on a Share Optionally, to limit or grant access to a Samba share, you can set certain parameters in the share's section in the /etc/samba/smb.conf file. Note Share-based permissions manage whether a user, group, or host is able to access a share. These settings do not affect file system ACLs. Use share-based settings to restrict access to shares, for example, to deny access from specific hosts. Configuring User and Group-based Share Access User and group-based access control enables you to grant or deny access to a share for certain users and groups. For example, to enable all members of the Domain Users group to access a share while access is denied for a particular user account, add the following parameters to the share's configuration: The invalid users parameter has a higher priority than the valid users parameter. For example, if the user account is a member of the Domain Users group, access is denied to this account when you use this example. For further details, see the parameter descriptions in the smb.conf (5) man page. Configuring Host-based Share Access Host-based access control enables you to grant or deny access to a share based on the client's host name, IP address, or IP range. For example, to enable the 127.0.0.1 IP address, the 192.0.2.0/24 IP range, and the client1.example.com host to access a share, and additionally deny access for the client2.example.com host: Configuring Host-based Share Access Add the following parameters to the configuration of the share in the /etc/samba/smb.conf file: Reload the Samba configuration. The hosts deny parameter has a higher priority than hosts allow . For example, if client1.example.com resolves to an IP address that is listed in the hosts allow parameter, access for this host is denied. For further details, see the parameter description in the smb.conf (5) man page. 16.1.6.2. Setting up a Share That Uses Windows ACLs Samba supports setting Windows ACLs on shares and file system objects. This enables you to: Use the fine-granular Windows ACLs Manage share permissions and file system ACLs using Windows Alternatively, you can configure a share to use POSIX ACLs. For details, see Section 16.1.6.1, "Setting up a Share That Uses POSIX ACLs" . 16.1.6.2.1. Granting the SeDiskOperatorPrivilege Privilege Only users and groups that have the SeDiskOperatorPrivilege privilege granted can configure permissions on shares that use Windows ACLs.
For example, to grant the privilege to the DOMAIN \Domain Admins group: Note In a domain environment, grant SeDiskOperatorPrivilege to a domain group. This enables you to centrally manage the privilege by updating a user's group membership. To list all users and groups having SeDiskOperatorPrivilege granted: 16.1.6.2.2. Enabling Windows ACL Support To configure shares that support Windows ACLs, you must enable this feature in Samba. To enable it globally for all shares, add the following settings to the [global] section of the /etc/samba/smb.conf file: Alternatively, you can enable Windows ACL support for individual shares, by adding the same parameters to a share's section instead. 16.1.6.2.3. Adding a Share That Uses Windows ACLs To create a share named example , that shares the content of the /srv/samba/example/ directory, and uses Windows ACLs: Adding a Share That Uses Windows ACLs Optionally, create the folder if it does not exists. For example: If you run SELinux in enforcing mode, set the samba_share_t context on the directory: Add the example share to the /etc/samba/smb.conf file. For example, to add the share write-enabled: Note Regardless of the file system ACLs; if you do not set read only = no , Samba shares the directory in read-only mode. If you have not enabled Windows ACL support in the [global] section for all shares, add the following parameters to the [example] section to enable this feature for this share: Verify the /etc/samba/smb.conf file: For details, see Section 16.1.2, "Verifying the smb.conf File by Using the testparm Utility" . Open the required ports and reload the firewall configuration using the firewall-cmd utility: Restart the smb service: Optionally, enable the smb service to start automatically at boot time: 16.1.6.2.4. Managing Share Permissions and File System ACLs of a Share That Uses Windows ACLs To manage share and file system ACLs on a Samba share that uses Windows ACLs, use a Windows applications, such as Computer Management . For details, see your Windows documentation. Alternatively, use the smbcacls utility to manage ACLs. For details, see Section 16.1.6.3, "Managing ACLs on an SMB Share Using smbcacls " . Note To modify the file system permissions from Windows, you must use an account that has the SeDiskOperatorPrivilege privilege granted. See Section 16.1.6.2.1, "Granting the SeDiskOperatorPrivilege Privilege" . 16.1.6.3. Managing ACLs on an SMB Share Using smbcacls The smbcacls utility can list, set, and delete ACLs of files and directories stored on an SMB share. You can use smbcacls to manage file system ACLs: On a local or remote Samba server that uses advanced Windows ACLs or POSIX ACLs. On Red Hat Enterprise Linux to remotely manage ACLs on a share hosted on Windows. 16.1.6.3.1. Understanding Access Control Entries Each ACL entry of a file system object contains Access Control Entries (ACE) in the following format: Example 16.5. Access Control Entries If the AD\Domain Users group has Modify permissions that apply to This folder, subfolders, and files on Windows, the ACL contains the following ACEs: The following describes the individual ACEs: Security principal The security principal is the user, group, or SID the permissions in the ACL are applied to. Access right Defines if access to an object is granted or denied. The value can be ALLOWED or DENIED . Inheritance information The following values exist: Table 16.3. 
Inheritance Settings Value Description Maps to OI Object Inherit This folder and files CI Container Inherit This folder and subfolders IO Inherit Only The ACE does not apply to the current file or directory. ID Inherited The ACE was inherited from the parent directory. Additionally, the values can be combined as follows: Table 16.4. Inheritance Settings Combinations Value Combinations Maps to the Windows Applies to Setting OI/CI This folder, subfolders, and files OI/CI/IO Subfolders and files only CI/IO Subfolders only OI/IO Files only Permissions This value can be either a hex value that represents one or more Windows permissions or an smbcacls alias: A hex value that represents one or more Windows permissions. The following table displays the advanced Windows permissions and their corresponding value in hex format: Table 16.5. Windows Permissions and Their Corresponding smbcacls Value in Hex Format Windows Permissions Hex Values Full control 0x001F01FF Traverse folder / execute file 0x00100020 List folder / read data 0x00100001 Read attributes 0x00100080 Read extended attributes 0x00100008 Create files / write data 0x00100002 Create folders / append data 0x00100004 Write attributes 0x00100100 Write extended attributes 0x00100010 Delete subfolders and files 0x00100040 Delete 0x00110000 Read permissions 0x00120000 Change permissions 0x00140000 Take ownership 0x00180000 Multiple permissions can be combined as a single hex value using the bit-wise OR operation. For details, see Section 16.1.6.3.3, "Calculating an ACE Mask" . An smbcacls alias. The following table displays the available aliases: Table 16.6. Existing smbcacls Aliases and Their Corresponding Windows Permission smbcacls Alias Maps to Windows Permission R Read READ Read & execute W Special Create files / write data Create folders / append data Write attributes Write extended attributes Read permissions D Delete P Change permissions O Take ownership X Traverse / execute CHANGE Modify FULL Full control Note You can combine single-letter aliases when you set permissions. For example, you can set RD to apply the Windows permission Read and Delete . However, you can neither combine multiple non-single-letter aliases nor combine aliases and hex values. 16.1.6.3.2. Displaying ACLs Using smbcacls If you run smbcacls without any operation parameter, such as --add , the utility displays the ACLs of a file system object. For example, to list the ACLs of the root directory of the //server/example share: The output of the command displays: REVISION : The internal Windows NT ACL revision of the security descriptor CONTROL : Security descriptor control OWNER : Name or SID of the security descriptor's owner GROUP : Name or SID of the security descriptor's group ACL entries. For details, see Section 16.1.6.3.1, "Understanding Access Control Entries" . 16.1.6.3.3. Calculating an ACE Mask In most situations, when you add or update an ACE, you use the smbcacls aliases listed in Table 16.6, "Existing smbcacls Aliases and Their Corresponding Windows Permission" . However, if you want to set advanced Windows permissions as listed in Table 16.5, "Windows Permissions and Their Corresponding smbcacls Value in Hex Format" , you must use the bit-wise OR operation to calculate the correct value. You can use the following shell command to calculate the value: Example 16.6. 
Calculating an ACE Mask You want to set the following permissions: Traverse folder / execute file ( 0x00100020 ) List folder / read data ( 0x00100001 ) Read attributes ( 0x00100080 ) To calculate the hex value for the permissions, enter: Use the returned value when you set or update an ACE. 16.1.6.3.4. Adding, Updating, And Removing an ACL Using smbcacls Depending on the parameter you pass to the smbcacls utility, you can add, update, and remove ACLs from a file or directory. Adding an ACL To add an ACL to the root of the //server/example share that grants CHANGE permissions for This folder, subfolders, and files to the AD\Domain Users group: Updating an ACL Updating an ACL is similar to adding a new ACL. You update an ACL by overriding it using the --modify parameter with an existing security principal. If smbcacls finds the security principal in the ACL list, the utility updates the permissions. Otherwise the command fails with an error: For example, to update the permissions of the AD\Domain Users group and set them to READ for This folder, subfolders, and files : Deleting an ACL To delete an ACL, pass the --delete parameter with the exact ACL to the smbcacls utility. For example: 16.1.6.4. Enabling Users to Share Directories on a Samba Server On a Samba server, you can allow users to share directories without root permissions. 16.1.6.4.1. Enabling the User Shares Feature Before users can share directories, the administrator must enable user shares in Samba. For example, to enable only members of the local example group to create user shares: Enabling User Shares Create the local example group, if it does not exist: Prepare the directory for Samba to store the user share definitions and set its permissions properly. For example: Create the directory: Set write permissions for the example group: Set the sticky bit to prevent users from renaming or deleting files stored by other users in this directory. Edit the /etc/samba/smb.conf file and add the following to the [global] section: Set the path to the directory you configured to store the user share definitions. For example: Set how many user shares Samba allows to be created on this server. For example: If you use the default of 0 for the usershare max shares parameter, user shares are disabled. Optionally, set a list of absolute directory paths. For example, to configure that Samba only allows subdirectories of the /data and /srv directories to be shared, set: For a list of further user share-related parameters you can set, see the USERSHARES section in the smb.conf (5) man page. Verify the /etc/samba/smb.conf file: For details, see Section 16.1.2, "Verifying the smb.conf File by Using the testparm Utility" . Reload the Samba configuration: Users are now able to create user shares. For details, see Section 16.1.6.4.2, "Adding a User Share" . 16.1.6.4.2. Adding a User Share After you configured Samba according to Section 16.1.6.4.1, "Enabling the User Shares Feature" , users can share directories on the Samba server without root permissions by running the net usershare add command. Synopsis of the net usershare add command: net usershare add share_name path comment ACLs guest_ok=y|n Important If you set ACLs when you create a user share, you must specify the comment parameter prior to the ACLs. To set an empty comment, use an empty string in double quotes. Note that users can only enable guest access on a user share if the administrator set usershare allow guests = yes in the [global] section in the /etc/samba/smb.conf file. Example 16.7.
Adding a User Share A user wants to share the /srv/samba/ directory on a Samba server. The share should be named example , have no comment set, and should be accessible by guest users. Additionally, the share permissions should be set to full access for the AD\Domain Users group and read permissions for other users. To add this share, run as the user: 16.1.6.4.3. Updating Settings of a User Share If you want to update settings of a user share, override the share by using the net usershare add command with the same share name and the new settings. See Section 16.1.6.4.2, "Adding a User Share" . 16.1.6.4.4. Displaying Information About Existing User Shares Users can enter the net usershare info command on a Samba server to display user shares and their settings. To display all user shares created by any user: To list only shares created by the user who runs the command, omit the -l parameter. To display only the information about specific shares, pass the share name or wild cards to the command. For example, to display the information about shares whose name starts with share_ : 16.1.6.4.5. Listing User Shares If you want to list only the available user shares without their settings on a Samba server, use the net usershare list command. To list the shares created by any user: To list only shares created by the user who runs the command, omit the -l parameter. To list only specific shares, pass the share name or wild cards to the command. For example, to list only shares whose name starts with share_ : 16.1.6.4.6. Deleting a User Share To delete a user share, enter as the user who created the share or as the root user: 16.1.6.5. Enabling Guest Access to a Share In certain situations, you want to share a directory to which users can connect without authentication. To configure this, enable guest access on a share. Warning Shares that do not require authentication can be a security risk. If guest access is enabled on a share, Samba maps guest connections to the operating system account set in the guest account parameter. Guest users can access these files if at least one of the following conditions is satisfied: The account is listed in file system ACLs The POSIX permissions for other users allow it Example 16.8. Guest Share Permissions If you configured Samba to map the guest account to nobody , which is the default, the ACLs in the following example: Allow guest users to read file1.txt Allow guest users to read and modify file2.txt . Prevent guest users to read or modify file3.txt For example, to enable guest access for the existing [example] share: Setting up a Guest Share Edit the /etc/samba/smb.conf file: If this is the first guest share you set up on this server: Set map to guest = Bad User in the [global] section: With this setting, Samba rejects login attempts that use an incorrect password unless the user name does not exist. If the specified user name does not exist and guest access is enabled on a share, Samba treats the connection as a guest log in. By default, Samba maps the guest account to the nobody account on Red Hat Enterprise Linux. Optionally, you can set a different account. For example: The account set in this parameter must exist locally on the Samba server. For security reasons, Red Hat recommends using an account that does not have a valid shell assigned. Add the guest ok = yes setting to the [example] section: Verify the /etc/samba/smb.conf file: For details, see Section 16.1.2, "Verifying the smb.conf File by Using the testparm Utility" . 
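Putting the guest-access steps above together, the relevant parts of /etc/samba/smb.conf might look like this hedged sketch; the share path is illustrative, and the nobody guest account is the default mapping:

    [global]
    map to guest = Bad User
    guest account = nobody

    [example]
    path = /srv/samba/example/
    read only = no
    guest ok = yes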
Reload the Samba configuration: 16.1.7. Setting up a Samba Print Server If you set up Samba as a print server, clients in your network can use Samba to print. Additionally, Windows clients can, if configured, download the driver from the Samba server. Before you can share a printer, set up Samba: Section 16.1.4, "Setting up Samba as a Standalone Server" Section 16.1.5, "Setting up Samba as a Domain Member" 16.1.7.1. The Samba spoolssd Service The Samba spoolssd is a service that is integrated into the smbd service. Enable spoolssd in the Samba configuration to significantly increase the performance on print servers with a high number of jobs or printers. Without spoolssd , Samba forks the smbd process and initializes the printcap cache for each print job. In case of a large number of printers, the smbd service can become unresponsive for multiple seconds while the cache is initialized. The spoolssd service enables you to start pre-forked smbd processes that are processing print jobs without any delays. The main spoolssd smbd process uses a low amount of memory, and forks and terminates child processes. To enable the spoolssd service: Enabling the spoolssd Service Edit the [global] section in the /etc/samba/smb.conf file: Add the following parameters: Optionally, you can set the following parameters: Parameter Default Description spoolssd:prefork_min_children 5 Minimum number of child processes spoolssd:prefork_max_children 25 Maximum number of child processes spoolssd:prefork_spawn_rate 5 Samba forks the number of new child processes set in this parameter, up to the value set in spoolssd:prefork_max_children , if a new connection is established spoolssd:prefork_max_allowed_clients 100 Number of clients, a child process serves spoolssd:prefork_child_min_life 60 Minimum lifetime of a child process in seconds. 60 seconds is the minimum. Verify the /etc/samba/smb.conf file: For details, see Section 16.1.2, "Verifying the smb.conf File by Using the testparm Utility" . Restart the smb service: After you restarted the service, Samba automatically starts smbd child processes: 16.1.7.2. Enabling Print Server Support in Samba To enable the print server support: Enabling Print Server Support in Samba On the Samba server, set up CUPS and add the printer to the CUPS back end. For details, see Section 16.3, "Print Settings" . Note Samba can only forward the print jobs to CUPS if CUPS is installed locally on the Samba print server. Edit the /etc/samba/smb.conf file: If you want to enable the spoolssd service, add the following parameters to the [global] section: For further details, see Section 16.1.7.1, "The Samba spoolssd Service" . To configure the printing back end, add the [printers] section: Important The printers share name is hard-coded and cannot be changed. Verify the /etc/samba/smb.conf file: For details, see Section 16.1.2, "Verifying the smb.conf File by Using the testparm Utility" . Open the required ports and reload the firewall configuration using the firewall-cmd utility: Restart the smb service: After restarting the service, Samba automatically shares all printers that are configured in the CUPS back end. If you want to manually share only specific printers, see Section 16.1.7.3, "Manually Sharing Specific Printers" . 16.1.7.3. Manually Sharing Specific Printers If you configured Samba as a print server, by default, Samba shares all printers that are configured in the CUPS back end. 
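For reference, a hedged sketch of the print-server settings the previous procedures describe: the spoolssd parameters in the [global] section and the hard-coded [printers] share. The spool path shown is a conventional location and may differ from the original example:

    [global]
    rpc_server:spoolss = external
    rpc_daemon:spoolssd = fork

    [printers]
    comment = All Printers
    path = /var/tmp/
    printable = yes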
To share only specific printers: Manually Sharing a Specific Printer Edit the /etc/samba/smb.conf file: In the [global] section, disable automatic printer sharing by setting: Add a section for each printer you want to share. For example, to share the printer named example in the CUPS back end as Example-Printer in Samba, add the following section: You do not need individual spool directories for each printer. You can set the same spool directory in the path parameter for the printer as you set in the [printers] section. Verify the /etc/samba/smb.conf file: For details, see Section 16.1.2, "Verifying the smb.conf File by Using the testparm Utility" . Reload the Samba configuration: 16.1.7.4. Setting up Automatic Printer Driver Downloads for Windows Clients If you are running a Samba print server for Windows clients, you can upload drivers and preconfigure printers. If a user connects to a printer, Windows automatically downloads and installs the driver locally on the client. The user does not require local administrator permissions for the installation. Additionally, Windows applies preconfigured driver settings, such as the number of trays. Note Before setting up automatic printer driver download, must configure Samba as a print server and share a printer. For details, see Section 16.1.7, "Setting up a Samba Print Server" . 16.1.7.4.1. Basic Information about Printer Drivers This section provides general information about printer drivers. Supported Driver Model Version Samba only supports the printer driver model version 3 which is supported in Windows 2000 and later, and Windows Server 2000 and later. Samba does not support the driver model version 4, introduced in Windows 8 and Windows Server 2012. However, these and later Windows versions also support version 3 drivers. Package-aware Drivers Samba does not support package-aware drivers. Preparing a Printer Driver for Being Uploaded Before you can upload a driver to a Samba print server: Unpack the driver if it is provided in a compressed format. Some drivers require to start a setup application that installs the driver locally on a Windows host. In certain situations, the installer extracts the individual files into the operating system's temporary folder during the setup runs. To use the driver files for uploading: Start the installer. Copy the files from the temporary folder to a new location. Cancel the installation. Ask your printer manufacturer for drivers that support uploading to a print server. Providing 32-bit and 64-bit Drivers for a Printer to a Client To provide the driver for a printer for both 32-bit and 64-bit Windows clients, you must upload a driver with exactly the same name for both architectures. For example, if you are uploading the 32-bit driver named Example PostScript and the 64-bit driver named Example PostScript (v1.0) , the names do not match. Consequently, you can only assign one of the drivers to a printer and the driver will not be available for both architectures. 16.1.7.4.2. Enabling Users to Upload and Preconfigure Drivers To be able to upload and preconfigure printer drivers, a user or a group needs to have the SePrintOperatorPrivilege privilege granted. A user must be added into the printadmin group. Red Hat Enterprise Linux creates this group automatically when you install the samba package. The printadmin group gets assigned the lowest available dynamic system GID that is lower than 1000. 
To grant the SePrintOperatorPrivilege privilege to the printadmin group: Note In a domain environment, grant SePrintOperatorPrivilege to a domain group. This enables you to centrally manage the privilege by updating a user's group membership. To list all users and groups having SePrintOperatorPrivilege granted: 16.1.7.4.3. Setting up the print$ Share Windows operating systems download printer drivers from a share named print$ on a print server. This share name is hard-coded in Windows and cannot be changed. To share the /var/lib/samba/drivers/ directory as print$ , and enable members of the local printadmin group to upload printer drivers: Setting up the print$ Share Add the [print$] section to the /etc/samba/smb.conf file: Using these settings: Only members of the printadmin group can upload printer drivers to the share. The group of newly created files and directories will be set to printadmin . The permissions of new files will be set to 664 . The permissions of new directories will be set to 2775 . To upload only 64-bit drivers for a printer, include this setting in the [global] section in the /etc/samba/smb.conf file: Without this setting, Windows only displays drivers for which you have uploaded at least the 32-bit version. Verify the /etc/samba/smb.conf file: For details, see Section 16.1.2, "Verifying the smb.conf File by Using the testparm Utility" . Reload the Samba configuration. Create the printadmin group if it does not exist: Grant the SePrintOperatorPrivilege privilege to the printadmin group. For further details, see Section 16.1.7.4.2, "Enabling Users to Upload and Preconfigure Drivers" . If you run SELinux in enforcing mode, set the samba_share_t context on the directory: Set the permissions on the /var/lib/samba/drivers/ directory: If you use POSIX ACLs, set: If you use Windows ACLs, set: Principal Access Applies to CREATOR OWNER Full control Subfolders and files only Authenticated Users Read & execute, List folder contents, Read This folder, subfolders and files printadmin Full control This folder, subfolders and files For details about setting ACLs on Windows, see your Windows documentation. 16.1.7.4.4. Creating a GPO to Enable Clients to Trust the Samba Print Server For security reasons, recent Windows operating systems prevent clients from downloading non-package-aware printer drivers from an untrusted server. If your print server is a member of an AD domain, you can create a Group Policy Object (GPO) in your domain to trust the Samba server. To create GPOs, the Windows computer you are using must have the Windows Remote Server Administration Tools (RSAT) installed. For details, see your Windows documentation. Creating a GPO to Enable Clients to Trust the Samba Print Server Log into a Windows computer using an account that is allowed to edit group policies, such as the AD domain Administrator user. Open the Group Policy Management Console. Right-click your AD domain and select Create a GPO in this domain, and Link it here . Enter a name for the GPO, such as Legacy Printer Driver Policy , and click OK . The new GPO will be displayed under the domain entry. Right-click the newly created GPO and select Edit to open the Group Policy Management Editor . Navigate to Computer Configuration Policies Administrative Templates Printers .
On the right side of the window, double-click Point and Print Restriction to edit the policy: Enable the policy and set the following options: Select Users can only point and print to these servers and enter the fully-qualified domain name (FQDN) of the Samba print server to the field to this option. In both check boxes under Security Prompts , select Do not show warning or elevation prompt . Click OK . Double-click Package Point and Print - Approved servers to edit the policy: Enable the policy and click the Show button. Enter the FQDN of the Samba print server. Close both the Show Contents and policy properties window by clicking OK . Close the Group Policy Management Editor . Close the Group Policy Management Console. After the Windows domain members applied the group policy, printer drivers are automatically downloaded from the Samba server when a user connects to a printer. For further details about using group policies, see your Windows documentation. 16.1.7.4.5. Uploading Drivers and Preconfiguring Printers Use the Print Management application on a Windows client to upload drivers and preconfigure printers hosted on the Samba print server. For further details, see your Windows documentation. 16.1.8. Tuning the Performance of a Samba Server This section describes what settings can improve the performance of Samba in certain situations, and which settings can have a negative performance impact. 16.1.8.1. Setting the SMB Protocol Version Each new SMB version adds features and improves the performance of the protocol. The recent Windows and Windows Server operating systems always supports the latest protocol version. If Samba also uses the latest protocol version, Windows clients connecting to Samba benefit from the performance improvements. In Samba, the default value of the server max protocol is set to the latest supported stable SMB protocol version. To always have the latest stable SMB protocol version enabled, do not set the server max protocol parameter. If you set the parameter manually, you will need to modify the setting with each new version of the SMB protocol, to have the latest protocol version enabled. To unset, remove the server max protocol parameter from the [global] section in the /etc/samba/smb.conf file. 16.1.8.2. Tuning Shares with Directories That Contain a Large Number of Files To improve the performance of shares that contain directories with more than 100.000 files: Tuning Shares with Directories That Contain a Large Number of Files Rename all files on the share to lowercase. Note Using the settings in this procedure, files with names other than in lowercase will no longer be displayed. Set the following parameters in the share's section: For details about the parameters, see their descriptions in the smb.conf (5) man page. Reload the Samba configuration: After you applied these settings, the names of all newly created files on this share use lowercase. Because of these settings, Samba no longer needs to scan the directory for uppercase and lowercase, which improves the performance. 16.1.8.3. Settings That Can Have a Negative Performance Impact By default, the kernel in Red Hat Enterprise Linux is tuned for high network performance. For example, the kernel uses an auto-tuning mechanism for buffer sizes. Setting the socket options parameter in the /etc/samba/smb.conf file overrides these kernel settings. As a result, setting this parameter decreases the Samba network performance in most cases. 
To use the optimized settings from the Kernel, remove the socket options parameter from the [global] section in the /etc/samba/smb.conf . 16.1.9. Frequently Used Samba Command-line Utilities This section describes frequently used commands when working with a Samba server. 16.1.9.1. Using the net Utility The net utility enables you to perform several administration tasks on a Samba server. This section describes the most frequently used subcommands of the net utility. For further details, see the net (8) man page. 16.1.9.1.1. Using the net ads join and net rpc join Commands Using the join subcommand of the net utility, you can join Samba to an AD or NT4 domain. To join the domain, you must create the /etc/samba/smb.conf file manually, and optionally update additional configurations, such as PAM. Important Red Hat recommends using the realm utility to join a domain. The realm utility automatically updates all involved configuration files. For details, see Section 16.1.5.1, "Joining a Domain" . To join a domain using the net command: Joining a Domain Using the net Command Manually create the /etc/samba/smb.conf file with the following settings: For an AD domain member: For an NT4 domain member: Add an ID mapping configuration for the * default domain and for the domain you want to join to the [global] section in the /etc/samba/smb.conf . For details, see Section 16.1.5.3, "Understanding ID Mapping" . Verify the /etc/samba/smb.conf file: For details, see Section 16.1.2, "Verifying the smb.conf File by Using the testparm Utility" . Join the domain as the domain administrator: To join an AD domain: To join an NT4 domain: Append the winbind source to the passwd and group database entry in the /etc/nsswitch.conf file: Enable and start the winbind service: Optionally, configure PAM using the authconf utility. For details, see the Using Pluggable Authentication Modules (PAM) section in the Red Hat System-Level Authentication Guide . Optionally for AD environments, configure the Kerberos client. For details, see the Configuring a Kerberos Client section in the Red Hat System-Level Authentication Guide . 16.1.9.1.2. Using the net rpc rights Command In Windows, you can assign privileges to accounts and groups to perform special operations, such as setting ACLs on a share or upload printer drivers. On a Samba server, you can use the net rpc rights command to manage privileges. Listing Privileges To list all available privileges and their owners, use the net rpc rights list command. For example: Granting Privileges To grant a privilege to an account or group, use the net rpc rights grant command. For example, grant the SePrintOperatorPrivilege privilege to the DOMAIN \printadmin group: Revoking Privileges To revoke a privilege from an account or group, use the net rpc rights revoke . For example, to revoke the SePrintOperatorPrivilege privilege from the DOMAIN \printadmin group: 16.1.9.1.3. Using the net rpc share Command The net rpc share command provides the capability to list, add, and remove shares on a local or remote Samba or Windows server. Listing Shares To list the shares on an SMB server, use the net rpc share list command . Optionally, pass the -S server_name parameter to the command to list the shares of a remote server. For example: Note Shares hosted on a Samba server that have browseable = no set in their section in the /etc/samba/smb.conf file are not displayed in the output. Adding a Share The net rpc share add command enables you to add a share to an SMB server. 
For example, to add a share named example on a remote Windows server that shares the C:\example\ directory: Note You must omit the trailing backslash in the path when specifying a Windows directory name. To use the command to add a share to a Samba server: The user specified in the -U parameter must have the SeDiskOperatorPrivilege privilege granted. You must write a script that adds a share section to the /etc/samba/smb.conf file and reloads Samba. The script must be set in the add share command parameter in the [global] section in /etc/samba/smb.conf . For further details, see the add share command description in the smb.conf (5) man page. Removing a Share The net rpc share delete command enables you to remove a share from an SMB server. For example, to remove the share named example from a remote Windows server: To use the command to remove a share from a Samba server: The user specified in the -U parameter must have the SeDiskOperatorPrivilege privilege granted. You must write a script that removes the share's section from the /etc/samba/smb.conf file and reloads Samba. The script must be set in the delete share command parameter in the [global] section in /etc/samba/smb.conf . For further details, see the delete share command description in the smb.conf (5) man page. 16.1.9.1.4. Using the net user Command The net user command enables you to perform the following actions on an AD DC or NT4 PDC: List all user accounts Add users Remove users Note Specifying a connection method, such as ads for AD domains or rpc for NT4 domains, is only required when you list domain user accounts. Other user-related subcommands can auto-detect the connection method. Pass the -U user_name parameter to the command to specify a user that is allowed to perform the requested action. Listing Domain User Accounts To list all users in an AD domain: To list all users in an NT4 domain: Adding a User Account to the Domain On a Samba domain member, you can use the net user add command to add a user account to the domain. For example, to add the user account to the domain: Adding a User Account to the Domain Add the account: Optionally, use the remote procedure call (RPC) shell to enable the account on the AD DC or NT4 PDC. For example: Deleting a User Account from the Domain On a Samba domain member, you can use the net user delete command to remove a user account from the domain. For example, to remove the user account from the domain: 16.1.9.1.5. Using the net usershare Command See Section 16.1.6.4, "Enabling Users to Share Directories on a Samba Server" . 16.1.9.2. Using the rpcclient Utility The rpcclient utility enables you to manually execute client-side Microsoft Remote Procedure Call (MS-RPC) functions on a local or remote SMB server. However, most of the features are integrated into separate utilities provided by Samba. Use rpcclient only for testing MS-RPC functions. For example, you can use the utility to: Manage the printer Spool Subsystem (SPOOLSS). Example 16.9. Assigning a Driver to a Printer Retrieve information about an SMB server. Example 16.10. Listing all File Shares and Shared Printers Perform actions using the Security Account Manager Remote (SAMR) protocol. Example 16.11. Listing Users on an SMB Server If you run the command against a standalone server or a domain member, it lists the users in the local database. Running the command against an AD DC or NT4 PDC lists the domain users. For a complete list of supported subcommands, see the COMMANDS section in the rpcclient (1) man page. 16.1.9.3.
Using the samba-regedit Application Certain settings, such as printer configurations, are stored in the registry on the Samba server. You can use the ncurses-based samba-regedit application to edit the registry of a Samba server. To start the application, enter: Use the following keys: Cursor up and cursor down: Navigate through the registry tree and the values. Enter : Opens a key or edits a value. Tab : Switches between the Key and Value pane. Ctrl + C : Closes the application. 16.1.9.4. Using the smbcacls Utility See Section 16.1.6.3, "Managing ACLs on an SMB Share Using smbcacls " . 16.1.9.5. Using the smbclient Utility The smbclient utility enables you to access file shares on an SMB server, similarly to a command-line FTP client. You can use it, for example, to upload and download files to and from a share. For example, to authenticate to the example share hosted on server using the DOMAIN\user account: After smbclient connected successfully to the share, the utility enters the interactive mode and shows the following prompt: To display all available commands in the interactive shell, enter: To display the help for a specific command, enter: For further details and descriptions of the commands available in the interactive shell, see the smbclient (1) man page. 16.1.9.5.1. Using smbclient in Interactive Mode If you use smbclient without the -c parameter, the utility enters the interactive mode. The following procedure shows how to connect to an SMB share and download a file from a subdirectory: Downloading a File from an SMB Share Using smbclient Connect to the share: Change into the /example/ directory: List the files in the directory: Download the example.txt file: Disconnect from the share: 16.1.9.5.2. Using smbclient in Scripting Mode If you pass the -c commands parameter to smbclient , you can automatically execute the commands on the remote SMB share. This enables you to use smbclient in scripts. The following command shows how to connect to an SMB share and download a file from a subdirectory: 16.1.9.6. Using the smbcontrol Utility The smbcontrol utility enables you to send command messages to the smbd , nmbd , winbindd , or all of these services. These control messages instruct the service, for example, to reload its configuration. Example 16.12. Reloading the Configuration of the smbd , nmbd , and winbindd Service For example, to reload the configuration of the smbd , nmbd , winbindd , send the reload-config message-type to the all destination: For further details and a list of available command message types, see the smbcontrol (1) man page. 16.1.9.7. Using the smbpasswd Utility The smbpasswd utility manages user accounts and passwords in the local Samba database. If you run the command as a user, smbpasswd changes the Samba password of the user. For example: If you run smbpasswd as the root user, you can use the utility, for example, to: Create a new user: Note Before you can add a user to the Samba database, you must create the account in the local operating system. See Section 4.3.1, "Adding a New User" Enable a Samba user: Disable a Samba user: Delete a user: For further details, see the smbpasswd (8) man page. 16.1.9.8. Using the smbstatus Utility The smbstatus utility reports on: Connections per PID of each smbd daemon to the Samba server. This report includes the user name, primary group, SMB protocol version, encryption, and signing information. Connections per Samba share. 
This report includes the PID of the smbd daemon, the IP address of the connecting machine, the time stamp when the connection was established, encryption, and signing information. A list of locked files. The report entries include further details, such as opportunistic lock (oplock) types. Example 16.13. Output of the smbstatus Utility For further details, see the smbstatus (1) man page. 16.1.9.9. Using the smbtar Utility The smbtar utility backs up the content of an SMB share or a subdirectory of it and stores the content in a tar archive. Alternatively, you can write the content to a tape device. For example, to back up the content of the demo directory on the //server/example/ share and store the content in the /root/example.tar archive: For further details, see the smbtar (1) man page. 16.1.9.10. Using the testparm Utility See Section 16.1.2, "Verifying the smb.conf File by Using the testparm Utility" . 16.1.9.11. Using the wbinfo Utility The wbinfo utility queries and returns information created and used by the winbindd service. Note The winbindd service must be configured and running to use wbinfo . You can use wbinfo , for example, to: List domain users: List domain groups: Display the SID of a user: Display information about domains and trusts: For further details, see the wbinfo (1) man page. 16.1.10. Additional Resources The Red Hat Samba packages include manual pages for all Samba commands and configuration files the package installs. For example, to display the man page of the /etc/samba/smb.conf file that explains all configuration parameters you can set in this file: /usr/share/docs/samba- version / : Contains general documentation, example scripts, and LDAP schema files, provided by the Samba project. Red Hat Cluster Storage Administration Guide : Provides information about setting up Samba and the Clustered Trivial Database (CTDB) to share directories stored on a GlusterFS volume. The An active/active Samba Server in a Red Hat High Availability Cluster chapter in the Red Hat Enterprise Linux High Availability Add-on Administration guide describes how to set up a Samba high-availability installation. For details about mounting an SMB share on Red Hat Enterprise Linux, see the corresponding section in the Red Hat Storage Administration Guide . 16.2. FTP The File Transfer Protocol ( FTP ) is one of the oldest and most commonly used protocols found on the Internet today. Its purpose is to reliably transfer files between computer hosts on a network without requiring the user to log directly in to the remote host or to have knowledge of how to use the remote system. It allows users to access files on remote systems using a standard set of simple commands. This section outlines the basics of the FTP protocol and introduces vsftpd , which is the preferred FTP server in Red Hat Enterprise Linux. 16.2.1. The File Transfer Protocol FTP uses a client-server architecture to transfer files using the TCP network protocol. Because FTP is a rather old protocol, it uses unencrypted user name and password authentication. For this reason, it is considered an insecure protocol and should not be used unless absolutely necessary. However, because FTP is so prevalent on the Internet, it is often required for sharing files with the public. System administrators, therefore, should be aware of FTP 's unique characteristics. This section describes how to configure vsftpd to establish connections secured by TLS and how to secure an FTP server with the help of SELinux .
A good substitute for FTP is sftp from the OpenSSH suite of tools. For information about configuring OpenSSH and about the SSH protocol in general, refer to Chapter 12, OpenSSH . Unlike most protocols used on the Internet, FTP requires multiple network ports to work properly. When an FTP client application initiates a connection to an FTP server, it opens port 21 on the server - known as the command port . This port is used to issue all commands to the server. Any data requested from the server is returned to the client via a data port . The port number for data connections, and the way in which data connections are initialized, vary depending upon whether the client requests the data in active or passive mode. The following defines these modes: active mode Active mode is the original method used by the FTP protocol for transferring data to the client application. When an active-mode data transfer is initiated by the FTP client, the server opens a connection from port 20 on the server to the IP address and a random, unprivileged port (greater than 1024) specified by the client. This arrangement means that the client machine must be allowed to accept connections over any port above 1024. With the growth of insecure networks, such as the Internet, the use of firewalls for protecting client machines is now prevalent. Because these client-side firewalls often deny incoming connections from active-mode FTP servers, passive mode was devised. passive mode Passive mode, like active mode, is initiated by the FTP client application. When requesting data from the server, the FTP client indicates it wants to access the data in passive mode and the server provides the IP address and a random, unprivileged port (greater than 1024) on the server. The client then connects to that port on the server to download the requested information. While passive mode does resolve issues for client-side firewall interference with data connections, it can complicate administration of the server-side firewall. You can reduce the number of open ports on a server by limiting the range of unprivileged ports on the FTP server. This also simplifies the process of configuring firewall rules for the server. 16.2.2. The vsftpd Server The Very Secure FTP Daemon ( vsftpd ) is designed from the ground up to be fast, stable, and, most importantly, secure. vsftpd is the only stand-alone FTP server distributed with Red Hat Enterprise Linux, due to its ability to handle large numbers of connections efficiently and securely. The security model used by vsftpd has three primary aspects: Strong separation of privileged and non-privileged processes - Separate processes handle different tasks, and each of these processes runs with the minimal privileges required for the task. Tasks requiring elevated privileges are handled by processes with the minimal privilege necessary - By taking advantage of capabilities found in the libcap library, tasks that usually require full root privileges can be executed more safely from a less privileged process. Most processes run in a chroot jail - Whenever possible, processes are change-rooted to the directory being shared; this directory is then considered a chroot jail. For example, if the /var/ftp/ directory is the primary shared directory, vsftpd reassigns /var/ftp/ as the new root directory, known as / . This prevents any potential malicious activity in directories that are not contained in the new root directory.
Use of these security practices has the following effect on how vsftpd deals with requests: The parent process runs with the least privileges required - The parent process dynamically calculates the level of privileges it requires to minimize the level of risk. Child processes handle direct interaction with the FTP clients and run with as close to no privileges as possible. All operations requiring elevated privileges are handled by a small parent process - Much like the Apache HTTP Server , vsftpd launches unprivileged child processes to handle incoming connections. This allows the privileged parent process to be as small as possible and handle relatively few tasks. All requests from unprivileged child processes are distrusted by the parent process - Communication with child processes is received over a socket, and the validity of any information from child processes is checked before being acted on. Most interactions with FTP clients are handled by unprivileged child processes in a chroot jail - Because these child processes are unprivileged and only have access to the directory being shared, a crashed process only allows the attacker access to the shared files. 16.2.2.1. Starting and Stopping vsftpd To start the vsftpd service in the current session, type the following at a shell prompt as root : To stop the service in the current session, type as root : To restart the vsftpd service, run the following command as root : This command stops and immediately starts the vsftpd service, which is the most efficient way to make configuration changes take effect after editing the configuration file for this FTP server. Alternatively, you can use the following command to restart the vsftpd service only if it is already running: By default, the vsftpd service does not start automatically at boot time. To configure the vsftpd service to start at boot time, type the following at a shell prompt as root : For more information on how to manage system services in Red Hat Enterprise Linux 7, see Chapter 10, Managing Services with systemd . 16.2.2.2. Starting Multiple Copies of vsftpd Sometimes, one computer is used to serve multiple FTP domains. This is a technique called multihoming . One way to multihome using vsftpd is by running multiple copies of the daemon, each with its own configuration file. To do this, first assign all relevant IP addresses to network devices or alias network devices on the system. For more information about configuring network devices and device aliases, and for additional information about network configuration scripts, see the Red Hat Enterprise Linux 7 Networking Guide . Next, the DNS server for the FTP domains must be configured to reference the correct machine. For information about BIND , the DNS protocol implementation used in Red Hat Enterprise Linux, and its configuration files, see the Red Hat Enterprise Linux 7 Networking Guide . For vsftpd to answer requests on different IP addresses, multiple copies of the daemon must be running. To facilitate launching multiple instances of the vsftpd daemon, a special systemd service unit ( vsftpd@.service ) for launching vsftpd as an instantiated service is supplied in the vsftpd package. In order to make use of this service unit, a separate vsftpd configuration file for each required instance of the FTP server must be created and placed in the /etc/vsftpd/ directory.
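For example, you might create such a configuration file by copying the default one; the file name used here is only an example:

~]# cp /etc/vsftpd/vsftpd.conf /etc/vsftpd/vsftpd-site-2.conf
~]# chmod 600 /etc/vsftpd/vsftpd-site-2.conf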
Note that each of these configuration files must have a unique name (such as /etc/vsftpd/ vsftpd-site-2 .conf ) and must be readable and writable only by the root user. Within each configuration file for each FTP server listening on an IPv4 network, the following directive must be unique: Replace N.N.N.N with a unique IP address for the FTP site being served. If the site is using IPv6 , use the listen_address6 directive instead. Once there are multiple configuration files present in the /etc/vsftpd/ directory, individual instances of the vsftpd daemon can be started by executing the following command as root : In the above command, replace configuration-file-name with the unique name of the requested server's configuration file, such as vsftpd-site-2 . Note that the configuration file's .conf extension should not be included in the command. If you want to start several instances of the vsftpd daemon at once, you can make use of a systemd target unit file ( vsftpd.target ), which is supplied in the vsftpd package. This systemd target causes an independent vsftpd daemon to be launched for each available vsftpd configuration file in the /etc/vsftpd/ directory. Execute the following command as root to enable the target: The above command configures the systemd service manager to launch the vsftpd service (along with the configured vsftpd server instances) at boot time. To start the service immediately, without rebooting the system, execute the following command as root : See Section 10.3, "Working with systemd Targets" for more information on how to use systemd targets to manage services. Other directives to consider altering on a per-server basis are: anon_root local_root vsftpd_log_file xferlog_file 16.2.2.3. Encrypting vsftpd Connections Using TLS In order to counter the inherently insecure nature of FTP , which transmits user names, passwords, and data without encryption by default, the vsftpd daemon can be configured to utilize the TLS protocol to authenticate connections and encrypt all transfers. Note that an FTP client that supports TLS is needed to communicate with vsftpd with TLS enabled. Note SSL (Secure Sockets Layer) is the name of an older implementation of the security protocol. The new versions are called TLS (Transport Layer Security). Only the newer versions ( TLS ) should be used as SSL suffers from serious security vulnerabilities. The documentation included with the vsftpd server, as well as the configuration directives used in the vsftpd.conf file, use the SSL name when referring to security-related matters, but TLS is supported and used by default when the ssl_enable directive is set to YES . Set the ssl_enable configuration directive in the vsftpd.conf file to YES to turn on TLS support. The default settings of other TLS -related directives that become automatically active when the ssl_enable option is enabled provide for a reasonably well-configured TLS set up. This includes, among other things, the requirement to only use the TLS v1 protocol for all connections (the use of the insecure SSL protocol versions is disabled by default) or forcing all non-anonymous logins to use TLS for sending passwords and data transfers. Example 16.14. 
Configuring vsftpd to Use TLS In this example, the configuration directives explicitly disable the older SSL versions of the security protocol in the vsftpd.conf file: Restart the vsftpd service after you modify its configuration: See the vsftpd.conf (5) manual page for other TLS -related configuration directives for fine-tuning the use of TLS by vsftpd . 16.2.2.4. SELinux Policy for vsftpd The SELinux policy governing the vsftpd daemon (as well as other ftpd processes), defines a mandatory access control, which, by default, is based on least access required. In order to allow the FTP daemon to access specific files or directories, appropriate labels need to be assigned to them. For example, in order to be able to share files anonymously, the public_content_t label must be assigned to the files and directories to be shared. You can do this using the chcon command as root : In the above command, replace /path/to/directory with the path to the directory to which you want to assign the label. Similarly, if you want to set up a directory for uploading files, you need to assign that particular directory the public_content_rw_t label. In addition to that, the allow_ftpd_anon_write SELinux Boolean option must be set to 1 . Use the setsebool command as root to do that: If you want local users to be able to access their home directories through FTP , which is the default setting on Red Hat Enterprise Linux 7, the ftp_home_dir Boolean option needs to be set to 1 . If vsftpd is to be allowed to run in standalone mode, which is also enabled by default on Red Hat Enterprise Linux 7, the ftpd_is_daemon option needs to be set to 1 as well. See the ftpd_selinux (8) manual page for more information, including examples of other useful labels and Boolean options, on how to configure the SELinux policy pertaining to FTP . Also, see the Red Hat Enterprise Linux 7 SELinux User's and Administrator's Guide for more detailed information about SELinux in general. 16.2.3. Additional Resources For more information about vsftpd , see the following resources. 16.2.3.1. Installed Documentation The /usr/share/doc/vsftpd- version-number / directory - Replace version-number with the installed version of the vsftpd package. This directory contains a README file with basic information about the software. The TUNING file contains basic performance-tuning tips and the SECURITY/ directory contains information about the security model employed by vsftpd . vsftpd -related manual pages - There are a number of manual pages for the daemon and the configuration files. The following lists some of the more important manual pages. Server Applications vsftpd (8) - Describes available command-line options for vsftpd . Configuration Files vsftpd.conf (5) - Contains a detailed list of options available within the configuration file for vsftpd . hosts_access (5) - Describes the format and options available within the TCP wrappers configuration files: hosts.allow and hosts.deny . Interaction with SELinux ftpd_selinux (8) - Contains a description of the SELinux policy governing ftpd processes as well as an explanation of the way SELinux labels need to be assigned and Booleans set. 16.2.3.2. Online Documentation About vsftpd and FTP in General http://vsftpd.beasts.org/ - The vsftpd project page is a great place to locate the latest documentation and to contact the author of the software. http://slacksite.com/other/ftp.html - This website provides a concise explanation of the differences between active and passive-mode FTP . 
Red Hat Enterprise Linux Documentation Red Hat Enterprise Linux 7 Networking Guide - The Networking Guide for Red Hat Enterprise Linux 7 documents relevant information regarding the configuration and administration of network interfaces, networks, and network services in this system. It provides an introduction to the hostnamectl utility and explains how to use it to view and set host names on the command line, both locally and remotely. Red Hat Enterprise Linux 7 SELinux User's and Administrator's Guide - The SELinux User's and Administrator's Guide for Red Hat Enterprise Linux 7 describes the basic principles of SELinux and documents in detail how to configure and use SELinux with various services such as the Apache HTTP Server , Postfix , PostgreSQL , or OpenShift . It explains how to configure SELinux access permissions for system services managed by systemd . Red Hat Enterprise Linux 7 Security Guide - The Security Guide for Red Hat Enterprise Linux 7 assists users and administrators in learning the processes and practices of securing their workstations and servers against local and remote intrusion, exploitation, and malicious activity. It also explains how to secure critical system services. Relevant RFC Documents RFC 0959 - The original Request for Comments ( RFC ) of the FTP protocol from the IETF . RFC 1123 - The small FTP -related section extends and clarifies RFC 0959. RFC 2228 - FTP security extensions. vsftpd implements the small subset needed to support TLS and SSL connections. RFC 2389 - Proposes FEAT and OPTS commands. RFC 2428 - IPv6 support. 16.3. Print Settings The Print Settings tool serves for printer configuring, maintenance of printer configuration files, print spool directories and print filters, and printer classes management. The tool is based on the Common Unix Printing System ( CUPS ). If you upgraded the system from a Red Hat Enterprise Linux version that used CUPS, the upgrade process preserved the configured printers. Important The cupsd.conf man page documents configuration of a CUPS server. It includes directives for enabling SSL support. However, CUPS does not allow control of the protocol versions used. Due to the vulnerability described in Resolution for POODLE SSLv3.0 vulnerability (CVE-2014-3566) for components that do not allow SSLv3 to be disabled via configuration settings , Red Hat recommends that you do not rely on this for security. It is recommend that you use stunnel to provide a secure tunnel and disable SSLv3 . For more information on using stunnel , see the Red Hat Enterprise Linux 7 Security Guide . For ad-hoc secure connections to a remote system's Print Settings tool, use X11 forwarding over SSH as described in Section 12.4.1, "X11 Forwarding" . Note You can perform the same and additional operations on printers directly from the CUPS web application or command line. To access the application, in a web browser, go to http://localhost:631/ . For CUPS manuals refer to the links on the Home tab of the web site. 16.3.1. Starting the Print Settings Configuration Tool With the Print Settings configuration tool you can perform various operations on existing printers and set up new printers. You can also use CUPS directly (go to http://localhost:631/ to access the CUPS web application). To start the Print Settings tool from the command line, type system-config-printer at a shell prompt. The Print Settings tool appears. 
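For example, as a normal user:

~]$ system-config-printer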
Alternatively, if using the GNOME desktop, press the Super key to enter the Activities Overview, type Print Settings and then press Enter . The Print Settings tool appears. The Super key appears in a variety of guises, depending on the keyboard and other hardware, but often as either the Windows or Command key, and typically to the left of the Spacebar . The Print Settings window depicted in Figure 16.1, "Print Settings window" appears. Figure 16.1. Print Settings window 16.3.2. Starting Printer Setup Printer setup process varies depending on the printer queue type. If you are setting up a local printer connected with USB, the printer is discovered and added automatically. You will be prompted to confirm the packages to be installed and provide an administrator or the root user password. Local printers connected with other port types and network printers need to be set up manually. Follow this procedure to start a manual printer setup: Start the Print Settings tool (refer to Section 16.3.1, "Starting the Print Settings Configuration Tool" ). Go to Server New Printer . In the Authenticate dialog box, enter an administrator or root user password. If this is the first time you have configured a remote printer you will be prompted to authorize an adjustment to the firewall. Select the printer connection type and provide its details in the area on the right. 16.3.3. Adding a Local Printer Follow this procedure to add a local printer connected with other than a serial port: Open the Add printer dialog (refer to Section 16.3.2, "Starting Printer Setup" ). If the device does not appear automatically, select the port to which the printer is connected in the list on the left (such as Serial Port #1 or LPT #1 ). On the right, enter the connection properties: for Other URI (for example file:/dev/lp0) for Serial Port Baud Rate Parity Data Bits Flow Control Figure 16.2. Adding a local printer Click Forward . Select the printer model. See Section 16.3.8, "Selecting the Printer Model and Finishing" for details. 16.3.4. Adding an AppSocket/HP JetDirect printer Follow this procedure to add an AppSocket/HP JetDirect printer: Open the New Printer dialog (refer to Section 16.3.1, "Starting the Print Settings Configuration Tool" ). In the list on the left, select Network Printer AppSocket/HP JetDirect . On the right, enter the connection settings: Hostname Printer host name or IP address. Port Number Printer port listening for print jobs ( 9100 by default). Figure 16.3. Adding a JetDirect printer Click Forward . Select the printer model. See Section 16.3.8, "Selecting the Printer Model and Finishing" for details. 16.3.5. Adding an IPP Printer An IPP printer is a printer attached to a different system on the same TCP/IP network. The system this printer is attached to may either be running CUPS or simply configured to use IPP . If a firewall is enabled on the printer server, then the firewall must be configured to allow incoming TCP connections on port 631 . Note that the CUPS browsing protocol allows client machines to discover shared CUPS queues automatically. To enable this, the firewall on the client machine must be configured to allow incoming UDP packets on port 631 . Follow this procedure to add an IPP printer: Open the New Printer dialog (refer to Section 16.3.2, "Starting Printer Setup" ). In the list of devices on the left, select Network Printer and Internet Printing Protocol (ipp) or Internet Printing Protocol (https) . 
On the right, enter the connection settings: Host The host name of the IPP printer. Queue The queue name to be given to the new queue (if the box is left empty, a name based on the device node will be used). Figure 16.4. Adding an IPP printer Click Forward to continue. Select the printer model. See Section 16.3.8, "Selecting the Printer Model and Finishing" for details. 16.3.6. Adding an LPD/LPR Host or Printer Follow this procedure to add an LPD/LPR host or printer: Open the New Printer dialog (refer to Section 16.3.2, "Starting Printer Setup" ). In the list of devices on the left, select Network Printer LPD/LPR Host or Printer . On the right, enter the connection settings: Host The host name of the LPD/LPR printer or host. Optionally, click Probe to find queues on the LPD host. Queue The queue name to be given to the new queue (if the box is left empty, a name based on the device node will be used). Figure 16.5. Adding an LPD/LPR printer Click Forward to continue. Select the printer model. See Section 16.3.8, "Selecting the Printer Model and Finishing" for details. 16.3.7. Adding a Samba (SMB) printer Follow this procedure to add a Samba printer: Note Note that in order to add a Samba printer, you need to have the samba-client package installed. You can do so by running, as root : For more information on installing packages with Yum, refer to Section 9.2.4, "Installing Packages" . Open the New Printer dialog (refer to Section 16.3.2, "Starting Printer Setup" ). In the list on the left, select Network Printer Windows Printer via SAMBA . Enter the SMB address in the smb:// field. Use the format computer name/printer share . In Figure 16.6, "Adding a SMB printer" , the computer name is dellbox and the printer share is r2 . Figure 16.6. Adding a SMB printer Click Browse to see the available workgroups/domains. To display only queues of a particular host, type in the host name (NetBios name) and click Browse . Select either of the options: Prompt user if authentication is required : user name and password are collected from the user when printing a document. Set authentication details now : provide authentication information now so it is not required later. In the Username field, enter the user name to access the printer. This user must exist on the SMB system, and the user must have permission to access the printer. The default user name is typically guest for Windows servers, or nobody for Samba servers. Enter the Password (if required) for the user specified in the Username field. Warning Samba printer user names and passwords are stored in the printer server as unencrypted files readable by root and the Linux Printing Daemon, lpd . Thus, other users that have root access to the printer server can view the user name and password you use to access the Samba printer. Therefore, when you choose a user name and password to access a Samba printer, it is advisable that you choose a password that is different from what you use to access your local Red Hat Enterprise Linux system. If there are files shared on the Samba print server, it is recommended that they also use a password different from what is used by the print queue. Click Verify to test the connection. Upon successful verification, a dialog box appears confirming printer share accessibility. Click Forward . Select the printer model. See Section 16.3.8, "Selecting the Printer Model and Finishing" for details. 16.3.8. 
Selecting the Printer Model and Finishing Once you have properly selected a printer connection type, the system attempts to acquire a driver. If the process fails, you can locate or search for the driver resources manually. Follow this procedure to provide the printer driver and finish the installation: In the window displayed after the automatic driver detection has failed, select one of the following options: Select a Printer from database - the system chooses a driver based on the selected make of your printer from the list of Makes . If your printer model is not listed, choose Generic . Provide PPD file - the system uses the provided PostScript Printer Description ( PPD ) file for installation. A PPD file is normally provided by the manufacturer and may be delivered with your printer. If the PPD file is available, you can choose this option and use the browser bar below the option description to select the PPD file. Search for a printer driver to download - enter the make and model of your printer into the Make and model field to search on OpenPrinting.org for the appropriate packages. Figure 16.7. Selecting a printer brand Depending on your choice, provide details in the area displayed below: Printer brand for the Select printer from database option. PPD file location for the Provide PPD file option. Printer make and model for the Search for a printer driver to download option. Click Forward to continue. If applicable for your option, the window shown in Figure 16.8, "Selecting a printer model" appears. Choose the corresponding model in the Models column on the left. Note On the right, the recommended printer driver is automatically selected; however, you can select another available driver. The print driver processes the data that you want to print into a format the printer can understand. Since a local printer is attached directly to your computer, you need a printer driver to process the data that is sent to the printer. Figure 16.8. Selecting a printer model Click Forward . Under Describe Printer , enter a unique name for the printer in the Printer Name field. The printer name can contain letters, numbers, dashes (-), and underscores (_); it must not contain any spaces. You can also use the Description and Location fields to add further printer information. Both fields are optional, and may contain spaces. Figure 16.9. Printer setup Click Apply to confirm your printer configuration and add the print queue if the settings are correct. Click Back to modify the printer configuration. After the changes are applied, a dialog box appears allowing you to print a test page. Click Yes to print a test page now. Alternatively, you can print a test page later as described in Section 16.3.9, "Printing a Test Page" . 16.3.9. Printing a Test Page After you have set up a printer or changed a printer configuration, print a test page to make sure the printer is functioning properly: Right-click the printer in the Printing window and click Properties . In the Properties window, click Settings on the left. On the displayed Settings tab, click the Print Test Page button. 16.3.10. Modifying Existing Printers To delete an existing printer, in the Print Settings window, select the printer and go to Printer Delete . Confirm the printer deletion. Alternatively, press the Delete key. To set the default printer, right-click the printer in the printer list and click the Set as Default button in the context menu. 16.3.10.1.
The Settings Page To change printer driver configuration, double-click the corresponding name in the Printer list and click the Settings label on the left to display the Settings page. You can modify printer settings such as make and model, print a test page, change the device location (URI), and more. Figure 16.10. Settings page 16.3.10.2. The Policies Page Click the Policies button on the left to change settings in printer state and print output. You can select the printer states, configure the Error Policy of the printer (you can decide to abort the print job, retry, or stop it if an error occurs). You can also create a banner page (a page that describes aspects of the print job such as the originating printer, the user name from the which the job originated, and the security status of the document being printed): click the Starting Banner or Ending Banner drop-down menu and choose the option that best describes the nature of the print jobs (for example, confidential ). 16.3.10.2.1. Sharing Printers On the Policies page, you can mark a printer as shared: if a printer is shared, users published on the network can use it. To allow the sharing function for printers, go to Server Settings and select Publish shared printers connected to this system . Figure 16.11. Policies page Make sure that the firewall allows incoming TCP connections to port 631 , the port for the Network Printing Server ( IPP ) protocol. To allow IPP traffic through the firewall on Red Hat Enterprise Linux 7, make use of firewalld 's IPP service. To do so, proceed as follows: Enabling IPP Service in firewalld To start the graphical firewall-config tool, press the Super key to enter the Activities Overview, type firewall and then press Enter . The Firewall Configuration window opens. You will be prompted for an administrator or root password. Alternatively, to start the graphical firewall configuration tool using the command line, enter the following command as root user: The Firewall Configuration window opens. Look for the word "Connected" in the lower left corner. This indicates that the firewall-config tool is connected to the user space daemon, firewalld . To immediately change the current firewall settings, ensure the drop-down selection menu labeled Configuration is set to Runtime . Alternatively, to edit the settings to be applied at the system start, or firewall reload, select Permanent from the drop-down list. Select the Zones tab and then select the firewall zone to correspond with the network interface to be used. The default is the public zone. The Interfaces tab shows what interfaces have been assigned to a zone. Select the Services tab and then select the ipp service to enable sharing. The ipp-client service is required for accessing network printers. Close the firewall-config tool. For more information on opening and closing ports in firewalld , see the Red Hat Enterprise Linux 7 Security Guide . 16.3.10.2.2. The Access Control Page You can change user-level access to the configured printer on the Access Control page. Click the Access Control label on the left to display the page. Select either Allow printing for everyone except these users or Deny printing for everyone except these users and define the user set below: enter the user name in the text box and click the Add button to add the user to the user set. Figure 16.12. Access Control page 16.3.10.2.3. 
The Printer Options Page The Printer Options page contains various configuration options for the printer media and output, and its content may vary from printer to printer. It contains general printing, paper, quality, and printing size settings. Figure 16.13. Printer Options page 16.3.10.2.4. Job Options Page On the Job Options page, you can detail the printer job options. Click the Job Options label on the left to display the page. Edit the default settings to apply custom job options, such as number of copies, orientation, pages per side, scaling (increase or decrease the size of the printable area, which can be used to fit an oversize print area onto a smaller physical sheet of print medium), detailed text options, and custom job options. Figure 16.14. Job Options page 16.3.10.2.5. Ink/Toner Levels Page The Ink/Toner Levels page contains details on toner status if available and printer status messages. Click the Ink/Toner Levels label on the left to display the page. Figure 16.15. Ink/Toner Levels page 16.3.10.3. Managing Print Jobs When you send a print job to the printer daemon, such as printing a text file from Emacs or printing an image from GIMP , the print job is added to the print spool queue. The print spool queue is a list of print jobs that have been sent to the printer and information about each print request, such as the status of the request, the job number, and more. During the printing process, the Printer Status icon appears in the Notification Area on the panel. To check the status of a print job, click the Printer Status , which displays a window similar to Figure 16.16, "GNOME Print Status" . Figure 16.16. GNOME Print Status To cancel, hold, release, reprint or authenticate a print job, select the job in the GNOME Print Status and on the Job menu, click the respective command. To view the list of print jobs in the print spool from a shell prompt, type the command lpstat -o . The last few lines look similar to the following: Example 16.15. Example of lpstat -o output If you want to cancel a print job, find the job number of the request with the command lpstat -o and then use the command cancel job number . For example, cancel 60 would cancel the print job in Example 16.15, "Example of lpstat -o output" . You cannot cancel print jobs that were started by other users with the cancel command. However, you can enforce deletion of such job by issuing the cancel -U root job_number command. To prevent such canceling, change the printer operation policy to Authenticated to force root authentication. You can also print a file directly from a shell prompt. For example, the command lp sample.txt prints the text file sample.txt . The print filter determines what type of file it is and converts it into a format the printer can understand. 16.3.11. Additional Resources To learn more about printing on Red Hat Enterprise Linux, see the following resources. Installed Documentation lp(1) - The manual page for the lp command that allows you to print files from the command line. lpr(1) - The manual page for the lpr command that allows you to print files from the command line. cancel(1) - The manual page for the command-line utility to remove print jobs from the print queue. mpage(1) - The manual page for the command-line utility to print multiple pages on one sheet of paper. cupsd(8) - The manual page for the CUPS printer daemon. cupsd.conf(5) - The manual page for the CUPS printer daemon configuration file. classes.conf(5) - The manual page for the class configuration file for CUPS. 
lpstat(1) - The manual page for the lpstat command, which displays status information about classes, jobs, and printers. Online Documentation http://www.linuxprinting.org/ - The OpenPrinting group on the Linux Foundation website contains a large amount of information about printing in Linux. http://www.cups.org/ - The CUPS website provides documentation, FAQs, and newsgroups about CUPS.
[ "~]# testparm Load smb config files from /etc/samba/smb.conf rlimit_max: increasing rlimit_max (1024) to minimum Windows limit (16384) Unknown parameter encountered: \"log levell\" Processing section \"[example_share]\" Loaded services file OK. ERROR: The idmap range for the domain * (tdb) overlaps with the range of DOMAIN (ad)! Server role: ROLE_DOMAIN_MEMBER Press enter to see a dump of your service definitions Global parameters [global] [example_share]", "~]# yum install samba", "[global] workgroup = Example-WG netbios name = Server security = user log file = /var/log/samba/%m.log log level = 1", "~]# testparm", "~]# firewall-cmd --permanent --add-port={139/tcp,445/tcp} ~]# firewall-cmd --reload", "~]# systemctl start smb", "~]# systemctl enable smb", "~]# useradd -M -s /sbin/nologin example", "~]# passwd example Enter new UNIX password: password Retype new UNIX password: password passwd: password updated successfully", "~]# smbpasswd -a example New SMB password: password Retype new SMB password: password Added user example .", "~]# smbpasswd -e example Enabled user example .", "~]# yum install realmd oddjob-mkhomedir oddjob samba-winbind-clients samba-winbind samba-common-tools", "~]# yum install samba", "~]# yum install samba-winbind-krb5-locator", "~]# mv /etc/samba/smb.conf /etc/samba/smb.conf.old", "~]# realm join --membership-software=samba --client-software=winbind ad.example.com", "~]# systemctl status winbind", "~]# systemctl start smb", "~]# getent passwd AD\\\\administrator AD\\administrator:*:10000:10000::/home/administrator@AD:/bin/bash", "~]# getent group \"AD\\\\Domain Users\" AD\\domain users:x:10000:user", "~]# chown \"AD\\administrator\":\"AD\\Domain Users\" /srv/samba/example.txt", "~]# kinit [email protected]", "~]# klist Ticket cache: KEYRING:persistent:0:0 Default principal: [email protected] Valid starting Expires Service principal 11.09.2017 14:46:21 12.09.2017 00:46:21 krbtgt/[email protected] renew until 18.09.2017 14:46:19", "~]# wbinfo --all-domains", "~]# wbinfo --all-domains BUILTIN SAMBA-SERVER AD", "[global] idmap config * : backend = tdb idmap config * : range = 10000-999999 idmap config AD-DOM :backend = rid idmap config AD-DOM :range = 2000000-2999999 idmap config TRUST-DOM :backend = rid idmap config TRUST-DOM :range = 4000000-4999999", "idmap config * : backend = tdb idmap config * : range = 10000-999999", "idmap config * : backend = autorid idmap config * : range = 10000-999999", "idmap config * : backend = tdb idmap config * : range = 10000-999999", "idmap config * : backend = tdb idmap config * : range = 10000-999999", "idmap config DOMAIN : backend = ad", "idmap config DOMAIN : range = 2000000-2999999", "idmap config DOMAIN : schema_mode = rfc2307", "idmap config DOMAIN : unix_nss_info = yes", "template shell = /bin/bash template homedir = /home/%U", "idmap config DOMAIN : unix_primary_group = yes", "~]# testparm", "~]# smbcontrol all reload-config", "idmap config * : backend = tdb idmap config * : range = 10000-999999", "idmap config DOMAIN : backend = rid", "idmap config DOMAIN : range = 2000000-2999999", "template shell = /bin/bash template homedir = /home/%U", "~]# testparm", "~]# smbcontrol all reload-config", "idmap config * : backend = autorid", "idmap config * : range = 10000-999999", "idmap config * : rangesize = 200000", "template shell = /bin/bash template homedir = /home/%U", "~]# testparm", "~]# smbcontrol all reload-config", "~]# mkdir -p /srv/samba/example/", "~]# semanage fcontext -a -t samba_share_t \"/srv/samba/example(/.*)?\" 
~]# restorecon -Rv /srv/samba/example/", "[example] path = /srv/samba/example/ read only = no", "~]# testparm", "~]# firewall-cmd --permanent --add-service=samba ~]# firewall-cmd --reload", "~]# systemctl restart smb", "~]# systemctl enable smb", "~]# chown root:\"Domain Users\" /srv/samba/example/ ~]# chmod 2770 /srv/samba/example/", "inherit acls = yes", "~]# systemctl restart smb", "~]# systemctl enable smb", "~]# setfacl -m group::--- /srv/samba/example/ ~]# setfacl -m default:group::--- /srv/samba/example/", "~]# setfacl -m group:\" DOMAIN \\Domain Admins\":rwx /srv/samba/example/", "~]# setfacl -m group:\" DOMAIN \\Domain Users\":r-x /srv/samba/example/", "~]# setfacl -R -m other::--- /srv/samba/example/", "~]# setfacl -m default:group:\" DOMAIN \\Domain Admins\":rwx /srv/samba/example/ ~]# setfacl -m default:group:\" DOMAIN \\Domain Users\":r-x /srv/samba/example/ ~]# setfacl -m default:other::--- /srv/samba/example/", "valid users = + DOMAIN \\\"Domain Users\" invalid users = DOMAIN \\user", "hosts allow = 127.0.0.1 192.0.2.0/24 client1.example.com hosts deny = client2.example.com", "~]# smbcontrol all reload-config", "~]# net rpc rights grant \" DOMAIN \\Domain Admins\" SeDiskOperatorPrivilege -U \" DOMAIN \\administrator\" Enter DOMAIN \\administrator's password: Successfully granted rights.", "~]# net rpc rights list privileges SeDiskOperatorPrivilege -U \" DOMAIN \\administrator\" Enter administrator's password: SeDiskOperatorPrivilege: BUILTIN\\Administrators DOMAIN \\Domain Admins", "vfs objects = acl_xattr map acl inherit = yes store dos attributes = yes", "~]# mkdir -p /srv/samba/example/", "~]# semanage fcontext -a -t samba_share_t \"/srv/samba/example(/.*)?\" ~]# restorecon -Rv /srv/samba/example/", "[example] path = /srv/samba/example/ read only = no", "vfs objects = acl_xattr map acl inherit = yes store dos attributes = yes", "~]# testparm", "~]# firewall-cmd --permanent --add-service=samba ~]# firewall-cmd --reload", "~]# systemctl restart smb", "~]# systemctl enable smb", "security_principal : access_right / inheritance_information / permissions", "AD\\Domain Users:ALLOWED/OI|CI/CHANGE", "~]# smbcacls //server/example / -U \" DOMAIN pass:quotes[ administrator ]\" Enter DOMAIN pass:quotes[ administrator ]'s password: REVISION:1 CONTROL:SR|PD|DI|DP OWNER:AD\\Administrators GROUP:AD\\Domain Users ACL:AD\\Administrator:ALLOWED/OI|CI/FULL ACL:AD\\Domain Users:ALLOWED/OI|CI/CHANGE ACL:AD\\Domain Guests:ALLOWED/OI|CI/0x00100021", "~]# echo USD(printf '0x%X' USD hex_value_1 | hex_value_2 | ...)", "~]# echo USD(printf '0x%X' USD(( 0x00100020 | 0x00100001 | 0x00100080 ))) 0x1000A1", "~]# smbcacls //server/example / -U \" DOMAIN \\administrator --add ACL:\"AD\\Domain Users\":ALLOWED/OI|CI/CHANGE", "ACL for SID principal_name not found", "~]# smbcacls //server/example / -U \" DOMAIN \\administrator --modify ACL:\"AD\\Domain Users\":ALLOWED/OI|CI/READ", "~]# smbcacls //server/example / -U \" DOMAIN \\administrator --delete ACL:\"AD\\Domain Users\":ALLOWED/OI|CI/READ", "~]# groupadd example", "~]# mkdir -p /var/lib/samba/usershares/", "~]# chgrp example /var/lib/samba/usershares/ ~]# chmod 1770 /var/lib/samba/usershares/", "usershare path = /var/lib/samba/usershares/", "usershare max shares = 100", "usershare prefix allow list = /data /srv", "~]# testparm", "~]# smbcontrol all reload-config", "~]USD net usershare add example /srv/samba/ \"\" \"AD\\Domain Users\":F,Everyone:R guest_ok=yes", "~]USD net usershare info -l [ share_1 ] path=/srv/samba/ comment= usershare_acl=Everyone:R, 
host_name \\user:F, guest_ok=y", "~]USD net usershare info -l share *_", "~]USD net usershare list -l share_1 share_2", "~]USD net usershare list -l share_*", "~]USD net usershare delete share_name", "-rw-r--r--. 1 root root 1024 1. Sep 10:00 file1.txt -rw-r-----. 1 nobody root 1024 1. Sep 10:00 file2.txt -rw-r-----. 1 root root 1024 1. Sep 10:00 file3.txt", "[global] map to guest = Bad User", "[global] guest account = user_name", "[example] guest ok = yes", "~]# testparm", "~]# smbcontrol all reload-config", "rpc_server:spoolss = external rpc_daemon:spoolssd = fork", "~]# testparm", "~]# systemctl restart smb", "~]# ps axf 30903 smbd 30912 \\_ smbd 30913 \\_ smbd 30914 \\_ smbd 30915 \\_ smbd", "rpc_server:spoolss = external rpc_daemon:spoolssd = fork", "[printers] comment = All Printers path = /var/tmp/ printable = yes create mask = 0600", "~]# testparm", "~]# firewall-cmd --permanent --add-service=samba ~]# firewall-cmd --reload", "~]# systemctl restart smb", "load printers = no", "[ Example-Printer ] path = /var/tmp/ printable = yes printer name = example", "~]# testparm", "~]# smbcontrol all reload-config", "~]# net rpc rights grant \"printadmin\" SePrintOperatorPrivilege -U \" DOMAIN \\administrator\" Enter DOMAIN \\administrator's password: Successfully granted rights.", "~]# net rpc rights list privileges SePrintOperatorPrivilege -U \" DOMAIN \\administrator\" Enter administrator's password: SePrintOperatorPrivilege: BUILTIN\\Administrators DOMAIN \\printadmin", "[printUSD] path = /var/lib/samba/drivers/ read only = no write list = @printadmin force group = @printadmin create mask = 0664 directory mask = 2775", "spoolss: architecture = Windows x64", "~]# testparm", "~]# smbcontrol all reload-config", "~]# groupadd printadmin", "~]# net rpc rights grant \"printadmin\" SePrintOperatorPrivilege -U \" DOMAIN \\administrator\" Enter DOMAIN \\administrator's password: Successfully granted rights.", "~]# semanage fcontext -a -t samba_share_t \"/var/lib/samba/drivers(/.*)?\" ~]# restorecon -Rv /var/lib/samba/drivers/", "~]# chgrp -R \"printadmin\" /var/lib/samba/drivers/ ~]# chmod -R 2775 /var/lib/samba/drivers/", "case sensitive = true default case = lower preserve case = no short preserve case = no", "~]# smbcontrol all reload-config", "[global] workgroup = domain_name security = ads passdb backend = tdbsam realm = AD_REALM", "[global] workgroup = domain_name security = user passdb backend = tdbsam", "~]# testparm", "~]# net ads join -U \" DOMAIN pass:quotes[ administrator ]\"", "~]# net rpc join -U \" DOMAIN pass:quotes[ administrator ]\"", "passwd: files winbind group: files winbind", "~]# systemctl enable winbind ~]# systemctl start winbind", "net rpc rights list -U \" DOMAIN pass:attributes[{blank}] administrator \" Enter DOMAIN pass:attributes[{blank}] administrator 's password: SeMachineAccountPrivilege Add machines to domain SeTakeOwnershipPrivilege Take ownership of files or other objects SeBackupPrivilege Back up files and directories SeRestorePrivilege Restore files and directories SeRemoteShutdownPrivilege Force shutdown from a remote system SePrintOperatorPrivilege Manage printers SeAddUsersPrivilege Add users and groups to the domain SeDiskOperatorPrivilege Manage disk shares SeSecurityPrivilege System security", "~]# net rpc rights grant \" DOMAIN \\printadmin\" SePrintOperatorPrivilege -U \" DOMAIN \\administrator\" Enter DOMAIN \\administrator's password: Successfully granted rights.", "~]# net rpc rights remoke \" DOMAIN \\printadmin\" SePrintOperatorPrivilege -U \" 
DOMAIN \\administrator\" Enter DOMAIN \\administrator's password: Successfully revoked rights.", "~]# net rpc share list -U \" DOMAIN \\administrator\" -S example Enter DOMAIN \\administrator's password: IPCUSD share_1 share_2", "~]# net rpc share add example=\"C:\\example\" -U \" DOMAIN \\administrator\" -S server", "~]# net rpc share delete example -U \" DOMAIN \\administrator\" -S server", "~]# net ads user -U \" DOMAIN \\administrator\"", "~]# net rpc user -U \" DOMAIN \\administrator\"", "~]# net user add user password -U \" DOMAIN \\administrator\" User user added", "~]# net rpc shell -U DOMAIN \\administrator -S DC_or_PDC_name Talking to domain DOMAIN (S-1-5-21-1424831554-512457234-5642315751) net rpc> user edit disabled user no Set user 's disabled flag from [yes] to [no] net rpc> exit", "~]# net user delete user -U \" DOMAIN \\administrator\" User user deleted", "~]# rpcclient server_name -U \" DOMAIN pass:quotes[ administrator ]\" -c 'setdriver \" printer_name \" \" driver_name \"' Enter DOMAIN pass:quotes[ administrator ]s password: Successfully set printer_name to driver driver_name .", "~]# rpcclient server_name -U \" DOMAIN pass:quotes[ administrator ]\" -c 'netshareenum' Enter DOMAIN pass:quotes[ administrator ]s password: netname: Example_Share remark: path: C:\\srv\\samba\\example_share password: netname: Example_Printer remark: path: C:\\var\\spool\\samba password:", "~]# rpcclient server_name -U \" DOMAIN pass:quotes[ administrator ]\" -c 'enumdomusers' Enter DOMAIN pass:quotes[ administrator ]s password: user:[user1] rid:[0x3e8] user:[user2] rid:[0x3e9]", "~]# samba-regedit", "~]# smbclient -U \" DOMAIN\\user \" // server / example Enter domain \\user's password: Domain=[SERVER] OS=[Windows 6.1] Server=[Samba 4.6.2] smb: \\>", "smb: \\>", "smb: \\> help", "smb: \\> help command_name", "~]# smbclient -U \" DOMAIN pass:quotes[ user_name ]\" // server_name / share_name", "smb: \\> cd /example/", "smb: \\example\\> ls . D 0 Mon Sep 1 10:00:00 2017 .. D 0 Mon Sep 1 10:00:00 2017 example.txt N 1048576 Mon Sep 1 10:00:00 2017 9950208 blocks of size 1024. 
8247144 blocks available", "smb: \\example\\> get example.txt getting file \\directory\\subdirectory\\example.txt of size 1048576 as example.txt (511975,0 KiloBytes/sec) (average 170666,7 KiloBytes/sec)", "smb: \\example\\> exit", "~]# smbclient -U DOMAIN pass:quotes[ user_name ] // server_name / share_name -c \"cd /example/ ; get example.txt ; exit\"", "~]# smbcontrol all reload-config", "[user@server ~]USD smbpasswd New SMB password: Retype new SMB password:", "smbpasswd -a user_name New SMB password: Retype new SMB password: Added user user_name .", "smbpasswd -e user_name Enabled user user_name .", "smbpasswd -x user_name Disabled user user_name .", "smbpasswd -x user_name Deleted user user_name .", "~]# smbstatus Samba version 4.6.2 PID Username Group Machine Protocol Version Encryption Signing ----------------------------------------------------------------------------------------------------------------------------- 963 DOMAIN \\administrator DOMAIN \\domain users client-pc (ipv4:192.0.2.1:57786) SMB3_02 - AES-128-CMAC Service pid Machine Connected at Encryption Signing: ------------------------------------------------------------------------------- example 969 192.0.2.1 Mo Sep 1 10:00:00 2017 CEST - AES-128-CMAC Locked files: Pid Uid DenyMode Access R/W Oplock SharePath Name Time ------------------------------------------------------------------------------------------------------------ 969 10000 DENY_WRITE 0x120089 RDONLY LEASE(RWH) /srv/samba/example file.txt Mon Sep 1 10:00:00 2017", "~]# smbtar -s server -x example -u user_name -p password -t /root/example.tar", "~]# wbinfo -u AD\\administrator AD\\guest", "~]# wbinfo -g AD\\domain computers AD\\domain admins AD\\domain users", "~]# wbinfo --name-to-sid=\"AD\\administrator\" S-1-5-21-1762709870-351891212-3141221786-500 SID_USER (1)", "~]# wbinfo --trusted-domains --verbose Domain Name DNS Domain Trust Type Transitive In Out BUILTIN None Yes Yes Yes server None Yes Yes Yes DOMAIN1 domain1.example.com None Yes Yes Yes DOMAIN2 domain2.example.com External No Yes Yes", "~]# man 5 smb.conf", "~]# systemctl start vsftpd.service", "~]# systemctl stop vsftpd.service", "~]# systemctl restart vsftpd.service", "~]# systemctl try-restart vsftpd.service", "~]# systemctl enable vsftpd.service Created symlink from /etc/systemd/system/multi-user.target.wants/vsftpd.service to /usr/lib/systemd/system/vsftpd.service.", "listen_address=N.N.N.N", "~]# systemctl start [email protected]", "~]# systemctl enable vsftpd.target Created symlink from /etc/systemd/system/multi-user.target.wants/vsftpd.target to /usr/lib/systemd/system/vsftpd.target.", "~]# systemctl start vsftpd.target", "ssl_enable=YES ssl_tlsv1=YES ssl_sslv2=NO ssl_sslv3=NO", "~]# systemctl restart vsftpd.service", "~]# chcon -R -t public_content_t /path/to/directory", "~]# setsebool -P allow_ftpd_anon_write=1", "{blank}", "{blank}", "{blank}", "{blank}", "{blank}", "{blank}", "install samba-client", "~]# firewall-config", "lpstat -o Charlie-60 twaugh 1024 Tue 08 Feb 2011 16:42:11 GMT Aaron-61 twaugh 1024 Tue 08 Feb 2011 16:42:44 GMT Ben-62 root 1024 Tue 08 Feb 2011 16:45:42 GMT" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system_administrators_guide/ch-file_and_print_servers
Chapter 12. Adding storage resources for hybrid or Multicloud
Chapter 12. Adding storage resources for hybrid or Multicloud 12.1. Creating a new backing store Use this procedure to create a new backing store in OpenShift Data Foundation. Prerequisites Administrator access to OpenShift Data Foundation. Procedure In the OpenShift Web Console, click Storage Data Foundation . Click the Backing Store tab. Click Create Backing Store . On the Create New Backing Store page, perform the following: Enter a Backing Store Name . Select a Provider . Select a Region . Enter an Endpoint . This is optional. Select a Secret from the drop-down list, or create your own secret. Optionally, you can Switch to Credentials view which lets you fill in the required secrets. For more information on creating an OCP secret, see the section Creating the secret in the Openshift Container Platform documentation. Each backingstore requires a different secret. For more information on creating the secret for a particular backingstore, see the Section 12.2, "Adding storage resources for hybrid or Multicloud using the MCG command line interface" and follow the procedure for the addition of storage resources using a YAML. Note This menu is relevant for all providers except Google Cloud and local PVC. Enter the Target bucket . The target bucket is a container storage that is hosted on the remote cloud service. It allows you to create a connection that tells the MCG that it can use this bucket for the system. Click Create Backing Store . Verification steps In the OpenShift Web Console, click Storage Data Foundation . Click the Backing Store tab to view all the backing stores. 12.2. Adding storage resources for hybrid or Multicloud using the MCG command line interface The Multicloud Object Gateway (MCG) simplifies the process of spanning data across cloud provider and clusters. You must add a backing storage that can be used by the MCG. Depending on the type of your deployment, you can choose one of the following procedures to create a backing storage: For creating an AWS-backed backingstore, see Section 12.2.1, "Creating an AWS-backed backingstore" For creating an IBM COS-backed backingstore, see Section 12.2.2, "Creating an IBM COS-backed backingstore" For creating an Azure-backed backingstore, see Section 12.2.3, "Creating an Azure-backed backingstore" For creating a GCP-backed backingstore, see Section 12.2.4, "Creating a GCP-backed backingstore" For creating a local Persistent Volume-backed backingstore, see Section 12.2.5, "Creating a local Persistent Volume-backed backingstore" For VMware deployments, skip to Section 12.3, "Creating an s3 compatible Multicloud Object Gateway backingstore" for further instructions. 12.2.1. Creating an AWS-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For instance, in case of IBM Z use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages Note Choose the correct Product Variant according to your architecture. Procedure Using MCG command-line interface From the MCG command-line interface, run the following command: <backingstore_name> The name of the backingstore. <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> The AWS access key ID and secret access key you created for this purpose. <bucket-name> The existing AWS bucket name. 
This argument indicates to the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. The output will be similar to the following: Adding storage resources using a YAML Create a secret with the credentials: <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> Supply and encode your own AWS access key ID and secret access key using Base64, and use the results for <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64> . <backingstore-secret-name> The name of the backingstore secret created in the step. Apply the following YAML for a specific backing store: <bucket-name> The existing AWS bucket name. <backingstore-secret-name> The name of the backingstore secret created in the step. 12.2.2. Creating an IBM COS-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For example, For IBM Power, use the following command: For IBM Z, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages Note Choose the correct Product Variant according to your architecture. Procedure Using command-line interface From the MCG command-line interface, run the following command: <backingstore_name> The name of the backingstore. <IBM ACCESS KEY> , <IBM SECRET ACCESS KEY> , and <IBM COS ENDPOINT> An IBM access key ID, secret access key and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket. To generate the above keys on IBM cloud, you must include HMAC credentials while creating the service credentials for your target bucket. <bucket-name> An existing IBM bucket name. This argument indicates MCG about the bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. The output will be similar to the following: Adding storage resources using an YAML Create a secret with the credentials: <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64> Provide and encode your own IBM COS access key ID and secret access key using Base64, and use the results in place of these attributes respectively. <backingstore-secret-name> The name of the backingstore secret. Apply the following YAML for a specific backing store: <bucket-name> an existing IBM COS bucket name. This argument indicates to MCG about the bucket to use as a target bucket for its backingstore, and subsequently, data storage and administration. <endpoint> A regional endpoint that corresponds to the location of the existing IBM bucket name. This argument indicates to MCG about the endpoint to use for its backingstore, and subsequently, data storage and administration. <backingstore-secret-name> The name of the secret created in the step. 12.2.3. Creating an Azure-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For instance, in case of IBM Z use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages Note Choose the correct Product Variant according to your architecture. 
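For reference, on an x86_64 host the download step described in these prerequisites typically comes down to enabling the OpenShift Data Foundation repository and installing the mcg package; adjust the repository name to match your architecture and version:
subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms
yum install mcg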
Procedure Using the MCG command-line interface From the MCG command-line interface, run the following command: <backingstore_name> The name of the backingstore. <AZURE ACCOUNT KEY> and <AZURE ACCOUNT NAME> An AZURE account key and account name you created for this purpose. <blob container name> An existing Azure blob container name. This argument indicates to MCG about the bucket to use as a target bucket for its backingstore, and subsequently, data storage and administration. The output will be similar to the following: Adding storage resources using a YAML Create a secret with the credentials: <AZURE ACCOUNT NAME ENCODED IN BASE64> and <AZURE ACCOUNT KEY ENCODED IN BASE64> Supply and encode your own Azure Account Name and Account Key using Base64, and use the results in place of these attributes respectively. <backingstore-secret-name> A unique name of the backingstore secret. Apply the following YAML for a specific backing store: <blob-container-name> An existing Azure blob container name. This argument indicates to the MCG about the bucket to use as a target bucket for its backingstore, and subsequently, data storage and administration. <backingstore-secret-name> The name of the secret created in the step. 12.2.4. Creating a GCP-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For instance, in case of IBM Z use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages Note Choose the correct Product Variant according to your architecture. Procedure Using the MCG command-line interface From the MCG command-line interface, run the following command: <backingstore_name> Name of the backingstore. <PATH TO GCP PRIVATE KEY JSON FILE> A path to your GCP private key created for this purpose. <GCP bucket name> An existing GCP object storage bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. The output will be similar to the following: Adding storage resources using a YAML Create a secret with the credentials: <GCP PRIVATE KEY ENCODED IN BASE64> Provide and encode your own GCP service account private key using Base64, and use the results for this attribute. <backingstore-secret-name> A unique name of the backingstore secret. Apply the following YAML for a specific backing store: <target bucket> An existing Google storage bucket. This argument indicates to the MCG about the bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. <backingstore-secret-name> The name of the secret created in the step. 12.2.5. Creating a local Persistent Volume-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using subscription manager. For IBM Power, use the following command: For IBM Z, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages Note Choose the correct Product Variant according to your architecture. 
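To make the command-line flow of Section 12.2.4 concrete, the GCP-backed backingstore is created with a call of the following shape, where the angle-bracket values are placeholders you supply:
noobaa backingstore create google-cloud-storage <backingstore_name> --private-key-json-file=<PATH TO GCP PRIVATE KEY JSON FILE> --target-bucket <GCP bucket name> -n openshift-storage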
Procedure Adding storage resources using the MCG command-line interface From the MCG command-line interface, run the following command: Note This command must be run from within the openshift-storage namespace. Adding storage resources using YAML Apply the following YAML for a specific backing store: <backingstore_name > The name of the backingstore. <NUMBER OF VOLUMES> The number of volumes you would like to create. Note that increasing the number of volumes scales up the storage. <VOLUME SIZE> Required size in GB of each volume. <CPU REQUEST> Guaranteed amount of CPU requested in CPU unit m . <MEMORY REQUEST> Guaranteed amount of memory requested. <CPU LIMIT> Maximum amount of CPU that can be consumed in CPU unit m . <MEMORY LIMIT> Maximum amount of memory that can be consumed. <LOCAL STORAGE CLASS> The local storage class name, recommended to use ocs-storagecluster-ceph-rbd . The output will be similar to the following: 12.3. Creating an s3 compatible Multicloud Object Gateway backingstore The Multicloud Object Gateway (MCG) can use any S3 compatible object storage as a backing store, for example, Red Hat Ceph Storage's RADOS Object Gateway (RGW). The following procedure shows how to create an S3 compatible MCG backing store for Red Hat Ceph Storage's RGW. Note that when the RGW is deployed, OpenShift Data Foundation operator creates an S3 compatible backingstore for MCG automatically. Procedure From the MCG command-line interface, run the following command: Note This command must be run from within the openshift-storage namespace. To get the <RGW ACCESS KEY> and <RGW SECRET KEY> , run the following command using your RGW user secret name: Decode the access key ID and the access key from Base64 and keep them. Replace <RGW USER ACCESS KEY> and <RGW USER SECRET ACCESS KEY> with the appropriate, decoded data from the step. Replace <bucket-name> with an existing RGW bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. To get the <RGW endpoint> , see Accessing the RADOS Object Gateway S3 endpoint . The output will be similar to the following: You can also create the backingstore using a YAML: Create a CephObjectStore user. This also creates a secret containing the RGW credentials: Replace <RGW-Username> and <Display-name> with a unique username and display name. Apply the following YAML for an S3-Compatible backing store: Replace <backingstore-secret-name> with the name of the secret that was created with CephObjectStore in the step. Replace <bucket-name> with an existing RGW bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. To get the <RGW endpoint> , see Accessing the RADOS Object Gateway S3 endpoint . 12.4. Adding storage resources for hybrid and Multicloud using the user interface Procedure In the OpenShift Web Console, click Storage Data Foundation . In the Storage Systems tab, select the storage system and then click Overview Object tab. Select the Multicloud Object Gateway link. Select the Resources tab in the left, highlighted below. From the list that populates, select Add Cloud Resource . Select Add new connection . Select the relevant native cloud provider or S3 compatible option and fill in the details. Select the newly created connection and map it to the existing bucket. Repeat these steps to create as many backing stores as needed. 
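If you prefer the command line over the UI, the equivalent resources are created with the noobaa CLI; for example, the S3-compatible (RGW) backingstore described in Section 12.3 takes the following shape, with the angle-bracket values as placeholders:
noobaa backingstore create s3-compatible rgw-resource --access-key=<RGW ACCESS KEY> --secret-key=<RGW SECRET KEY> --target-bucket=<bucket-name> --endpoint=<RGW endpoint> -n openshift-storage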
Note Resources created in NooBaa UI cannot be used by OpenShift UI or MCG CLI. 12.5. Creating a new bucket class Bucket class is a CRD representing a class of buckets that defines tiering policies and data placements for an Object Bucket Claim (OBC). Use this procedure to create a bucket class in OpenShift Data Foundation. Procedure In the OpenShift Web Console, click Storage Data Foundation . Click the Bucket Class tab. Click Create Bucket Class . On the Create new Bucket Class page, perform the following: Select the bucket class type and enter a bucket class name. Select the BucketClass type . Choose one of the following options: Standard : data will be consumed by a Multicloud Object Gateway (MCG), deduped, compressed and encrypted. Namespace : data is stored on the NamespaceStores without performing de-duplication, compression or encryption. By default, Standard is selected. Enter a Bucket Class Name . Click . In Placement Policy , select Tier 1 - Policy Type and click . You can choose either one of the options as per your requirements. Spread allows spreading of the data across the chosen resources. Mirror allows full duplication of the data across the chosen resources. Click Add Tier to add another policy tier. Select at least one Backing Store resource from the available list if you have selected Tier 1 - Policy Type as Spread and click . Alternatively, you can also create a new backing store . Note You need to select at least 2 backing stores when you select Policy Type as Mirror in the previous step. Review and confirm Bucket Class settings. Click Create Bucket Class . Verification steps In the OpenShift Web Console, click Storage Data Foundation . Click the Bucket Class tab and search for the new Bucket Class. 12.6. Editing a bucket class Use the following procedure to edit the bucket class components through the YAML file by clicking the edit button on the OpenShift web console. Prerequisites Administrator access to OpenShift Web Console. Procedure In the OpenShift Web Console, click Storage Data Foundation . Click the Bucket Class tab. Click the Action Menu (...) next to the Bucket class you want to edit. Click Edit Bucket Class . You are redirected to the YAML file. Make the required changes in this file and click Save . 12.7. Editing backing stores for bucket class Use the following procedure to edit an existing Multicloud Object Gateway (MCG) bucket class to change the underlying backing stores used in a bucket class. Prerequisites Administrator access to OpenShift Web Console. A bucket class. Backing stores. Procedure In the OpenShift Web Console, click Storage Data Foundation . Click the Bucket Class tab. Click the Action Menu (...) next to the Bucket class you want to edit. Click Edit Bucket Class Resources . On the Edit Bucket Class Resources page, edit the bucket class resources either by adding a backing store to the bucket class or by removing a backing store from the bucket class. You can also edit bucket class resources created with one or two tiers and different placement policies. To add a backing store to the bucket class, select the name of the backing store. To remove a backing store from the bucket class, clear the name of the backing store. Click Save . 12.8. Managing namespace buckets Namespace buckets let you connect data repositories on different providers together, so you can interact with all of your data through a single unified view. 
Add the object bucket associated with each provider to the namespace bucket, and access your data through the namespace bucket to see all of your object buckets at once. This lets you write to your preferred storage provider while reading from multiple other storage providers, greatly reducing the cost of migrating to a new storage provider. You can interact with objects in a namespace bucket using the S3 API. See S3 API endpoints for objects in namespace buckets for more information. Note A namespace bucket can only be used if its write target is available and functional. 12.8.1. Amazon S3 API endpoints for objects in namespace buckets You can interact with objects in the namespace buckets using the Amazon Simple Storage Service (S3) API. Red Hat OpenShift Data Foundation 4.6 onwards supports the following namespace bucket operations: ListObjectVersions ListObjects PutObject CopyObject ListParts CreateMultipartUpload CompleteMultipartUpload UploadPart UploadPartCopy AbortMultipartUpload GetObjectAcl GetObject HeadObject DeleteObject DeleteObjects See the Amazon S3 API reference documentation for the most up-to-date information about these operations and how to use them. Additional resources Amazon S3 REST API Reference Amazon S3 CLI Reference 12.8.2. Adding a namespace bucket using the Multicloud Object Gateway CLI and YAML For more information about namespace buckets, see Managing namespace buckets . Depending on the type of your deployment and whether you want to use YAML or the Multicloud Object Gateway CLI, choose one of the following procedures to add a namespace bucket: Adding an AWS S3 namespace bucket using YAML Adding an IBM COS namespace bucket using YAML Adding an AWS S3 namespace bucket using the Multicloud Object Gateway CLI Adding an IBM COS namespace bucket using the Multicloud Object Gateway CLI 12.8.2.1. Adding an AWS S3 namespace bucket using YAML Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG). For information, see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Procedure Create a secret with the credentials: where <namespacestore-secret-name> is a unique NamespaceStore name. You must provide and encode your own AWS access key ID and secret access key using Base64 , and use the results in place of <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64> . Create a NamespaceStore resource using OpenShift custom resource definitions (CRDs). A NamespaceStore represents underlying storage to be used as a read or write target for the data in the MCG namespace buckets. To create a NamespaceStore resource, apply the following YAML: <resource-name> The name you want to give to the resource. <namespacestore-secret-name> The secret created in the step. <namespace-secret> The namespace where the secret can be found. <target-bucket> The target bucket you created for the NamespaceStore. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . A namespace policy of type single requires the following configuration: <my-bucket-class> The unique namespace bucket class name. <resource> The name of a single NamespaceStore that defines the read and write target of the namespace bucket. A namespace policy of type multi requires the following configuration: <my-bucket-class> A unique bucket class name. 
<write-resource> The name of a single NamespaceStore that defines the write target of the namespace bucket. <read-resources> A list of the names of the NamespaceStores that defines the read targets of the namespace bucket. Create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the earlier step using the following YAML: <resource-name> The name you want to give to the resource. <my-bucket> The name you want to give to the bucket. <my-bucket-class> The bucket class created in the step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as that of the OBC. 12.8.2.2. Adding an IBM COS namespace bucket using YAML Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Procedure Create a secret with the credentials: <namespacestore-secret-name> A unique NamespaceStore name. You must provide and encode your own IBM COS access key ID and secret access key using Base64 , and use the results in place of <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64> . Create a NamespaceStore resource using OpenShift custom resource definitions (CRDs). A NamespaceStore represents underlying storage to be used as a read or write target for the data in the MCG namespace buckets. To create a NamespaceStore resource, apply the following YAML: <IBM COS ENDPOINT> The appropriate IBM COS endpoint. <namespacestore-secret-name> The secret created in the step. <namespace-secret> The namespace where the secret can be found. <target-bucket> The target bucket you created for the NamespaceStore. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . The namespace policy of type single requires the following configuration: <my-bucket-class> The unique namespace bucket class name. <resource> The name of a single NamespaceStore that defines the read and write target of the namespace bucket. The namespace policy of type multi requires the following configuration: <my-bucket-class> The unique bucket class name. <write-resource> The name of a single NamespaceStore that defines the write target of the namespace bucket. <read-resources> A list of the NamespaceStores names that defines the read targets of the namespace bucket. To create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the step, apply the following YAML: <resource-name> The name you want to give to the resource. <my-bucket> The name you want to give to the bucket. <my-bucket-class> The bucket class created in the step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as that of the OBC. 12.8.2.3. Adding an AWS S3 namespace bucket using the Multicloud Object Gateway CLI Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Download the MCG command-line interface: Note Specify the appropriate architecture for enabling the repositories using subscription manager. 
For instance, in case of IBM Z use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/package . Note Choose the correct Product Variant according to your architecture. Procedure In the MCG command-line interface, create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in MCG namespace buckets. <namespacestore> The name of the NamespaceStore. <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> The AWS access key ID and secret access key you created for this purpose. <bucket-name> The existing AWS bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy can be either single or multi . To create a namespace bucket class with a namespace policy of type single : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <resource> A single namespace-store that defines the read and write target of the namespace bucket. To create a namespace bucket class with a namespace policy of type multi : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <write-resource> A single namespace-store that defines the write target of the namespace bucket. <read-resources>s A list of namespace-stores separated by commas that defines the read targets of the namespace bucket. Create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the step. <bucket-name> A bucket name of your choice. <custom-bucket-class> The name of the bucket class created in the step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and a ConfigMap with the same name and in the same namespace as that of the OBC. 12.8.2.4. Adding an IBM COS namespace bucket using the Multicloud Object Gateway CLI Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Download the MCG command-line interface: Note Specify the appropriate architecture for enabling the repositories using subscription manager. For IBM Power, use the following command: For IBM Z, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/package . Note Choose the correct Product Variant according to your architecture. Procedure In the MCG command-line interface, create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in the MCG namespace buckets. <namespacestore> The name of the NamespaceStore. <IBM ACCESS KEY> , <IBM SECRET ACCESS KEY> , <IBM COS ENDPOINT> An IBM access key ID, secret access key, and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket. <bucket-name> An existing IBM bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. 
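Concretely, the NamespaceStore creation call has the following shape, with the angle-bracket values as placeholders:
noobaa namespacestore create ibm-cos <namespacestore> --endpoint <IBM COS ENDPOINT> --access-key <IBM ACCESS KEY> --secret-key <IBM SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage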
Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . To create a namespace bucket class with a namespace policy of type single : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <resource> A single NamespaceStore that defines the read and write target of the namespace bucket. To create a namespace bucket class with a namespace policy of type multi : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <write-resource> A single NamespaceStore that defines the write target of the namespace bucket. <read-resources> A comma-separated list of NamespaceStores that defines the read targets of the namespace bucket. Create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the earlier step. <bucket-name> A bucket name of your choice. <custom-bucket-class> The name of the bucket class created in the step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as that of the OBC. 12.8.3. Adding a namespace bucket using the OpenShift Container Platform user interface You can add namespace buckets using the OpenShift Container Platform user interface. For information about namespace buckets, see Managing namespace buckets . Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG). Procedure Log into the OpenShift Web Console. Click Storage Data Foundation . Click the Namespace Store tab to create a namespacestore resources to be used in the namespace bucket. Click Create namespace store . Enter a namespacestore name. Choose a provider. Choose a region. Either select an existing secret, or click Switch to credentials to create a secret by entering a secret key and secret access key. Choose a target bucket. Click Create . Verify that the namespacestore is in the Ready state. Repeat these steps until you have the desired amount of resources. Click the Bucket Class tab Create a new Bucket Class . Select the Namespace radio button. Enter a Bucket Class name. (Optional) Add description. Click . Choose a namespace policy type for your namespace bucket, and then click . Select the target resources. If your namespace policy type is Single , you need to choose a read resource. If your namespace policy type is Multi , you need to choose read resources and a write resource. If your namespace policy type is Cache , you need to choose a Hub namespace store that defines the read and write target of the namespace bucket. Click . Review your new bucket class, and then click Create Bucketclass . On the BucketClass page, verify that your newly created resource is in the Created phase. In the OpenShift Web Console, click Storage Data Foundation . In the Status card, click Storage System and click the storage system link from the pop up that appears. In the Object tab, click Multicloud Object Gateway Buckets Namespace Buckets tab . Click Create Namespace Bucket . On the Choose Name tab, specify a name for the namespace bucket and click . On the Set Placement tab: Under Read Policy , select the checkbox for each namespace resource created in the earlier step that the namespace bucket should read data from. 
If the namespace policy type you are using is Multi , then Under Write Policy , specify which namespace resource the namespace bucket should write data to. Click . Click Create . Verification steps Verify that the namespace bucket is listed with a green check mark in the State column, the expected number of read resources, and the expected write resource name. 12.9. Mirroring data for hybrid and Multicloud buckets You can use the simplified process of the Multicloud Object Gateway (MCG) to span data across cloud providers and clusters. Before you create a bucket class that reflects the data management policy and mirroring, you must add a backing storage that can be used by the MCG. For information, see Chapter 12, Adding storage resources for hybrid or Multicloud . You can set up data mirroring by using the OpenShift UI, YAML or MCG command-line interface. See the following sections: Section 12.9.1, "Creating bucket classes to mirror data using the MCG command-line-interface" Section 12.9.2, "Creating bucket classes to mirror data using a YAML" 12.9.1. Creating bucket classes to mirror data using the MCG command-line-interface Prerequisites Ensure that you download the Multicloud Object Gateway (MCG) command-line interface. Procedure From the Multicloud Object Gateway (MCG) command-line interface, run the following command to create a bucket class with a mirroring policy: Set the newly created bucket class to a new bucket claim to generate a new bucket that will be mirrored between two locations: 12.9.2. Creating bucket classes to mirror data using a YAML Apply the following YAML. This YAML is a hybrid example that mirrors data between local Ceph storage and AWS: Add the following lines to your standard Object Bucket Claim (OBC): For more information about OBCs, see Section 12.11, "Object Bucket Claim" . 12.10. Bucket policies in the Multicloud Object Gateway OpenShift Data Foundation supports AWS S3 bucket policies. Bucket policies allow you to grant users access permissions for buckets and the objects in them. 12.10.1. Introduction to bucket policies Bucket policies are an access policy option available for you to grant permission to your AWS S3 buckets and objects. Bucket policies use JSON-based access policy language. For more information about access policy language, see AWS Access Policy Language Overview . 12.10.2. Using bucket policies in Multicloud Object Gateway Prerequisites A running OpenShift Data Foundation Platform. Access to the Multicloud Object Gateway (MCG), see Section 11.2, "Accessing the Multicloud Object Gateway with your applications" Procedure To use bucket policies in the MCG: Create the bucket policy in JSON format. For example: Using an AWS S3 client, use the put-bucket-policy command to apply the bucket policy to your S3 bucket: Replace ENDPOINT with the S3 endpoint. Replace MyBucket with the bucket to set the policy on. Replace BucketPolicy with the bucket policy JSON file. Add --no-verify-ssl if you are using the default self-signed certificates. For example: For more information on the put-bucket-policy command, see the AWS CLI Command Reference for put-bucket-policy . Note The principal element specifies the user that is allowed or denied access to a resource, such as a bucket. Currently, only NooBaa accounts can be used as principals. In the case of object bucket claims, NooBaa automatically creates an account obc-account.<generated bucket name>@noobaa.io . Note Bucket policy conditions are not supported. 
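As an illustrative sketch only, a minimal policy that lets a NooBaa account read objects from a bucket, together with the client call that applies it, could look like the following; the account name, bucket, and endpoint are placeholders, and the exact policy elements should be checked against the AWS Access Policy Language Overview referenced above:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": [ "account@example.com" ],
            "Action": [ "s3:GetObject", "s3:ListBucket" ],
            "Resource": [ "arn:aws:s3:::MyBucket", "arn:aws:s3:::MyBucket/*" ]
        }
    ]
}
aws --endpoint-url https://ENDPOINT --no-verify-ssl s3api put-bucket-policy --bucket MyBucket --policy file://BucketPolicy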
Additional resources There are many available elements for bucket policies with regard to access permissions. For details on these elements and examples of how they can be used to control the access permissions, see AWS Access Policy Language Overview . For more examples of bucket policies, see AWS Bucket Policy Examples . 12.10.3. Creating a user in the Multicloud Object Gateway Prerequisites A running OpenShift Data Foundation Platform. Download the MCG command-line interface for easier management. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For IBM Power, use the following command: For IBM Z, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found at the Download Red Hat OpenShift Data Foundation page . Note Choose the correct Product Variant according to your architecture. Procedure Execute the following command to create an MCG user account: <noobaa-account-name> Specify the name of the new MCG user account. --allow_bucket_create Allows the user to create new buckets. --default_resource Sets the default resource. The new buckets are created on this default resource (including the future ones). Note To give MCG accounts access to certain buckets, use AWS S3 bucket policies. For more information, see Using bucket policies in AWS documentation. 12.11. Object Bucket Claim An Object Bucket Claim can be used to request an S3 compatible bucket backend for your workloads. You can create an Object Bucket Claim in three ways: Section 12.11.1, "Dynamic Object Bucket Claim" Section 12.11.2, "Creating an Object Bucket Claim using the command line interface" Section 12.11.3, "Creating an Object Bucket Claim using the OpenShift Web Console" An object bucket claim creates a new bucket and an application account in NooBaa with permissions to the bucket, including a new access key and secret access key. The application account is allowed to access only a single bucket and can't create new buckets by default. 12.11.1. Dynamic Object Bucket Claim Similar to Persistent Volumes, you can add the details of the Object Bucket Claim (OBC) to your application's YAML, and get the object service endpoint, access key, and secret access key available in a configuration map and secret. It is easy to read this information dynamically into environment variables of your application. Note The Multicloud Object Gateway endpoints use self-signed certificates only if OpenShift uses self-signed certificates. Using signed certificates in OpenShift automatically replaces the Multicloud Object Gateway endpoints certificates with signed certificates. Get the certificate currently used by Multicloud Object Gateway by accessing the endpoint via the browser. See Accessing the Multicloud Object Gateway with your applications for more information. Procedure Add the following lines to your application YAML: These lines are the OBC itself. Replace <obc-name> with a unique OBC name. Replace <obc-bucket-name> with a unique bucket name for your OBC. To automate the use of the OBC, add more lines to the YAML file. For example: The example is the mapping between the bucket claim result, which is a configuration map with data and a secret with the credentials. This specific job claims the Object Bucket from NooBaa, which creates a bucket and an account. Replace all instances of <obc-name> with your OBC name. Replace <your application image> with your application image. 
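A minimal sketch of what those lines can look like, assuming the placeholder names used in this procedure and that the generated ConfigMap and Secret carry the same name as the OBC: first the claim itself, then the environment mapping an application container adds to consume the generated credentials:
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: <obc-name>
spec:
  generateBucketName: <obc-bucket-name>
  storageClassName: openshift-storage.noobaa.io

# fragment of the pod template in your Job or Deployment:
      containers:
      - name: app
        image: <your application image>
        envFrom:
        - configMapRef:
            name: <obc-name>
        - secretRef:
            name: <obc-name>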
Apply the updated YAML file: Replace <yaml.file> with the name of your YAML file. To view the new configuration map, run the following: Replace obc-name with the name of your OBC. You can expect the following environment variables in the output: BUCKET_HOST - Endpoint to use in the application. BUCKET_PORT - The port available for the application. The port is related to the BUCKET_HOST . For example, if the BUCKET_HOST is https://my.example.com , and the BUCKET_PORT is 443, the endpoint for the object service would be https://my.example.com:443 . BUCKET_NAME - Requested or generated bucket name. AWS_ACCESS_KEY_ID - Access key that is part of the credentials. AWS_SECRET_ACCESS_KEY - Secret access key that is part of the credentials. Important Retrieve the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY . The names are used so that it is compatible with the AWS S3 API. You need to specify the keys while performing S3 operations, especially when you read, write or list from the Multicloud Object Gateway (MCG) bucket. The keys are encoded in Base64. Decode the keys before using them. <obc_name> Specify the name of the object bucket claim. 12.11.2. Creating an Object Bucket Claim using the command line interface When creating an Object Bucket Claim (OBC) using the command-line interface, you get a configuration map and a Secret that together contain all the information your application needs to use the object storage service. Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For IBM Power, use the following command: For IBM Z, use the following command: Procedure Use the command-line interface to generate the details of a new bucket and credentials. Run the following command: Replace <obc-name> with a unique OBC name, for example, myappobc . Additionally, you can use the --app-namespace option to specify the namespace where the OBC configuration map and secret will be created, for example, myapp-namespace . For example: The MCG command-line-interface has created the necessary configuration and has informed OpenShift about the new OBC. Run the following command to view the OBC: For example: Run the following command to view the YAML file for the new OBC: For example: Inside of your openshift-storage namespace, you can find the configuration map and the secret to use this OBC. The CM and the secret have the same name as the OBC. Run the following command to view the secret: For example: The secret gives you the S3 access credentials. Run the following command to view the configuration map: For example: The configuration map contains the S3 endpoint information for your application. 12.11.3. Creating an Object Bucket Claim using the OpenShift Web Console You can create an Object Bucket Claim (OBC) using the OpenShift Web Console. Prerequisites Administrative access to the OpenShift Web Console. In order for your applications to communicate with the OBC, you need to use the configmap and secret. For more information about this, see Section 12.11.1, "Dynamic Object Bucket Claim" . Procedure Log into the OpenShift Web Console. On the left navigation bar, click Storage Object Bucket Claims Create Object Bucket Claim . 
Enter a name for your object bucket claim and select the appropriate storage class based on your deployment, internal or external, from the dropdown menu: Internal mode The following storage classes, which were created after deployment, are available for use: ocs-storagecluster-ceph-rgw uses the Ceph Object Gateway (RGW) openshift-storage.noobaa.io uses the Multicloud Object Gateway (MCG) External mode The following storage classes, which were created after deployment, are available for use: ocs-external-storagecluster-ceph-rgw uses the RGW openshift-storage.noobaa.io uses the MCG Note The RGW OBC storage class is only available with fresh installations of OpenShift Data Foundation version 4.5. It does not apply to clusters upgraded from previous OpenShift Data Foundation releases. Click Create . Once you create the OBC, you are redirected to its detail page. 12.11.4. Attaching an Object Bucket Claim to a deployment Once created, Object Bucket Claims (OBCs) can be attached to specific deployments. Prerequisites Administrative access to the OpenShift Web Console. Procedure On the left navigation bar, click Storage Object Bucket Claims . Click the Action menu (...) next to the OBC you created. From the drop-down menu, select Attach to Deployment . Select the desired deployment from the Deployment Name list, then click Attach . 12.11.5. Viewing object buckets using the OpenShift Web Console You can view the details of object buckets created for Object Bucket Claims (OBCs) using the OpenShift Web Console. Prerequisites Administrative access to the OpenShift Web Console. Procedure Log into the OpenShift Web Console. On the left navigation bar, click Storage Object Buckets . Optional: You can also navigate to the details page of a specific OBC, and click the Resource link to view the object buckets for that OBC. Select the object bucket whose details you want to see. Once selected, you are navigated to the Object Bucket Details page. 12.11.6. Deleting Object Bucket Claims Prerequisites Administrative access to the OpenShift Web Console. Procedure On the left navigation bar, click Storage Object Bucket Claims . Click the Action menu (...) next to the Object Bucket Claim (OBC) you want to delete. Select Delete Object Bucket Claim . Click Delete . 12.12. Caching policy for object buckets A cache bucket is a namespace bucket with a hub target and a cache target. The hub target is an S3 compatible large object storage bucket. The cache bucket is the local Multicloud Object Gateway bucket. You can create a cache bucket that caches an AWS bucket or an IBM COS bucket. Important Cache buckets are a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information, see Technology Preview Features Support Scope . AWS S3 IBM COS 12.12.1. Creating an AWS cache bucket Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. In case of IBM Z use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/package . 
Note Choose the correct Product Variant according to your architecture. Procedure Create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in the MCG namespace buckets. From the MCG command-line interface, run the following command: Replace <namespacestore> with the name of the namespacestore. Replace <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> with an AWS access key ID and secret access key you created for this purpose. Replace <bucket-name> with an existing AWS bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. You can also add storage resources by applying a YAML. First create a secret with credentials: You must supply and encode your own AWS access key ID and secret access key using Base64, and use the results in place of <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64> . Replace <namespacestore-secret-name> with a unique name. Then apply the following YAML: Replace <namespacestore> with a unique name. Replace <namespacestore-secret-name> with the secret created in the step. Replace <namespace-secret> with the namespace used to create the secret in the step. Replace <target-bucket> with the AWS S3 bucket you created for the namespacestore. Run the following command to create a bucket class: Replace <my-cache-bucket-class> with a unique bucket class name. Replace <backing-store> with the relevant backing store. You can list one or more backingstores separated by commas in this field. Replace <namespacestore> with the namespacestore created in the step. Run the following command to create a bucket using an Object Bucket Claim (OBC) resource that uses the bucket class defined in step 2. Replace <my-bucket-claim> with a unique name. Replace <custom-bucket-class> with the name of the bucket class created in step 2. 12.12.2. Creating an IBM COS cache bucket Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For IBM Power, use the following command: For IBM Z, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/package . Note Choose the correct Product Variant according to your architecture. Procedure Create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in the MCG namespace buckets. From the MCG command-line interface, run the following command: Replace <namespacestore> with the name of the NamespaceStore. Replace <IBM ACCESS KEY> , <IBM SECRET ACCESS KEY> , <IBM COS ENDPOINT> with an IBM access key ID, secret access key and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket. Replace <bucket-name> with an existing IBM bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. You can also add storage resources by applying a YAML. First, Create a secret with the credentials: You must supply and encode your own IBM COS access key ID and secret access key using Base64, and use the results in place of <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64> . 
Replace <namespacestore-secret-name> with a unique name. Then apply the following YAML: Replace <namespacestore> with a unique name. Replace <IBM COS ENDPOINT> with the appropriate IBM COS endpoint. Replace <backingstore-secret-name> with the secret created in the step. Replace <namespace-secret> with the namespace used to create the secret in the step. Replace <target-bucket> with the AWS S3 bucket you created for the namespacestore. Run the following command to create a bucket class: Replace <my-bucket-class> with a unique bucket class name. Replace <backing-store> with the relevant backing store. You can list one or more backingstores separated by commas in this field. Replace <namespacestore> with the namespacestore created in the step. Run the following command to create a bucket using an Object Bucket Claim resource that uses the bucket class defined in step 2. Replace <my-bucket-claim> with a unique name. Replace <custom-bucket-class> with the name of the bucket class created in step 2. 12.13. Scaling Multicloud Object Gateway performance by adding endpoints The Multicloud Object Gateway performance may vary from one environment to another. In some cases, specific applications require faster performance which can be easily addressed by scaling S3 endpoints. The Multicloud Object Gateway resource pool is a group of NooBaa daemon containers that provide two types of services enabled by default: Storage service S3 endpoint service 12.13.1. Scaling the Multicloud Object Gateway with storage nodes Prerequisites A running OpenShift Data Foundation cluster on OpenShift Container Platform with access to the Multicloud Object Gateway (MCG). A storage node in the MCG is a NooBaa daemon container attached to one or more Persistent Volumes (PVs) and used for local object service data storage. NooBaa daemons can be deployed on Kubernetes nodes. This can be done by creating a Kubernetes pool consisting of StatefulSet pods. Procedure Log in to OpenShift Web Console . From the MCG user interface, click Overview Add Storage Resources . In the window, click Deploy Kubernetes Pool . In the Create Pool step create the target pool for the future installed nodes. In the Configure step, configure the number of requested pods and the size of each PV. For each new pod, one PV is to be created. In the Review step, you can find the details of the new pool and select the deployment method you wish to use: local or external deployment. If local deployment is selected, the Kubernetes nodes will deploy within the cluster. If external deployment is selected, you will be provided with a YAML file to run externally. All nodes will be assigned to the pool you chose in the first step, and can be found under Resources Storage resources Resource name . 12.14. Automatic scaling of MultiCloud Object Gateway endpoints The number of MultiCloud Object Gateway (MCG) endpoints scale automatically when the load on the MCG S3 service increases or decreases. OpenShift Data Foundation clusters are deployed with one active MCG endpoint. Each MCG endpoint pod is configured by default with 1 CPU and 2Gi memory request, with limits matching the request. When the CPU load on the endpoint crosses over an 80% usage threshold for a consistent period of time, a second endpoint is deployed lowering the load on the first endpoint. When the average CPU load on both endpoints falls below the 80% threshold for a consistent period of time, one of the endpoints is deleted. This feature improves performance and serviceability of the MCG.
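If you want to observe this behavior on a running cluster, one possible check is to watch the endpoint pods and any autoscaler the operator manages; the pod naming below assumes the default noobaa-endpoint deployment name and may differ between versions:
oc -n openshift-storage get pods | grep noobaa-endpoint
oc -n openshift-storage get hpa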
[ "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "noobaa backingstore create aws-s3 <backingstore_name> --access-key=<AWS ACCESS KEY> --secret-key=<AWS SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage", "INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"aws-resource\" INFO[0002] ✅ Created: Secret \"backing-store-secret-aws-resource\"", "apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> namespace: openshift-storage type: Opaque data: AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64> AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>", "apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: awsS3: secret: name: <backingstore-secret-name> namespace: openshift-storage targetBucket: <bucket-name> type: aws-s3", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "noobaa backingstore create ibm-cos <backingstore_name> --access-key=<IBM ACCESS KEY> --secret-key=<IBM SECRET ACCESS KEY> --endpoint=<IBM COS ENDPOINT> --target-bucket <bucket-name> -n openshift-storage", "INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"ibm-resource\" INFO[0002] ✅ Created: Secret \"backing-store-secret-ibm-resource\"", "apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> namespace: openshift-storage type: Opaque data: IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64> IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>", "apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: ibmCos: endpoint: <endpoint> secret: name: <backingstore-secret-name> namespace: openshift-storage targetBucket: <bucket-name> type: ibm-cos", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "noobaa backingstore create azure-blob <backingstore_name> --account-key=<AZURE ACCOUNT KEY> --account-name=<AZURE ACCOUNT NAME> --target-blob-container <blob container name> -n openshift-storage", "INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"azure-resource\" INFO[0002] ✅ Created: Secret \"backing-store-secret-azure-resource\"", "apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> type: Opaque data: AccountName: <AZURE ACCOUNT NAME ENCODED IN BASE64> AccountKey: <AZURE ACCOUNT KEY ENCODED IN BASE64>", "apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: azureBlob: secret: name: <backingstore-secret-name> namespace: openshift-storage targetBlobContainer: <blob-container-name> type: azure-blob", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "noobaa backingstore create google-cloud-storage <backingstore_name> --private-key-json-file=<PATH TO GCP PRIVATE KEY JSON FILE> --target-bucket <GCP bucket name> -n openshift-storage", "INFO[0001] ✅ 
Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"google-gcp\" INFO[0002] ✅ Created: Secret \"backing-store-google-cloud-storage-gcp\"", "apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> type: Opaque data: GoogleServiceAccountPrivateKeyJson: <GCP PRIVATE KEY ENCODED IN BASE64>", "apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: googleCloudStorage: secret: name: <backingstore-secret-name> namespace: openshift-storage targetBucket: <target bucket> type: google-cloud-storage", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "noobaa -n openshift-storage backingstore create pv-pool <backingstore_name> --num-volumes <NUMBER OF VOLUMES> --pv-size-gb <VOLUME SIZE> --request-cpu <CPU REQUEST> --request-memory <MEMORY REQUEST> --limit-cpu <CPU LIMIT> --limit-memory <MEMORY LIMIT> --storage-class <LOCAL STORAGE CLASS>", "apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <backingstore_name> namespace: openshift-storage spec: pvPool: numVolumes: <NUMBER OF VOLUMES> resources: requests: storage: <VOLUME SIZE> cpu: <CPU REQUEST> memory: <MEMORY REQUEST> limits: cpu: <CPU LIMIT> memory: <MEMORY LIMIT> storageClass: <LOCAL STORAGE CLASS> type: pv-pool", "INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Exists: BackingStore \"local-mcg-storage\"", "noobaa backingstore create s3-compatible rgw-resource --access-key=<RGW ACCESS KEY> --secret-key=<RGW SECRET KEY> --target-bucket=<bucket-name> --endpoint=<RGW endpoint> -n openshift-storage", "get secret <RGW USER SECRET NAME> -o yaml -n openshift-storage", "INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"rgw-resource\" INFO[0002] ✅ Created: Secret \"backing-store-secret-rgw-resource\"", "apiVersion: ceph.rook.io/v1 kind: CephObjectStoreUser metadata: name: <RGW-Username> namespace: openshift-storage spec: store: ocs-storagecluster-cephobjectstore displayName: \"<Display-name>\"", "apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <backingstore-name> namespace: openshift-storage spec: s3Compatible: endpoint: <RGW endpoint> secret: name: <backingstore-secret-name> namespace: openshift-storage signatureVersion: v4 targetBucket: <RGW-bucket-name> type: s3-compatible", "apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64> AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>", "apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <resource-name> namespace: openshift-storage spec: awsS3: secret: name: <namespacestore-secret-name> namespace: <namespace-secret> targetBucket: <target-bucket> type: aws-s3", "apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: single: resource: <resource>", "apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: Multi multi: writeResource: <write-resource> 
readResources: - <read-resources> - <read-resources>", "apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <resource-name> namespace: openshift-storage spec: generateBucketName: <my-bucket> storageClassName: openshift-storage.noobaa.io additionalConfig: bucketclass: <my-bucket-class>", "apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64> IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>", "apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: s3Compatible: endpoint: <IBM COS ENDPOINT> secret: name: <namespacestore-secret-name> namespace: <namespace-secret> signatureVersion: v2 targetBucket: <target-bucket> type: ibm-cos", "apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: single: resource: <resource>", "apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: Multi multi: writeResource: <write-resource> readResources: - <read-resources> - <read-resources>", "apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <resource-name> namespace: openshift-storage spec: generateBucketName: <my-bucket> storageClassName: openshift-storage.noobaa.io additionalConfig: bucketclass: <my-bucket-class>", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "noobaa namespacestore create aws-s3 <namespacestore> --access-key <AWS ACCESS KEY> --secret-key <AWS SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage", "noobaa bucketclass create namespace-bucketclass single <my-bucket-class> --resource <resource> -n openshift-storage", "noobaa bucketclass create namespace-bucketclass multi <my-bucket-class> --write-resource <write-resource> --read-resources <read-resources> -n openshift-storage", "noobaa obc create my-bucket-claim -n openshift-storage --app-namespace my-app --bucketclass <custom-bucket-class>", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "noobaa namespacestore create ibm-cos <namespacestore> --endpoint <IBM COS ENDPOINT> --access-key <IBM ACCESS KEY> --secret-key <IBM SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage", "noobaa bucketclass create namespace-bucketclass single <my-bucket-class> --resource <resource> -n openshift-storage", "noobaa bucketclass create namespace-bucketclass multi <my-bucket-class> --write-resource <write-resource> --read-resources <read-resources> -n openshift-storage", "noobaa obc create my-bucket-claim -n openshift-storage --app-namespace my-app --bucketclass <custom-bucket-class>", "noobaa bucketclass create placement-bucketclass mirror-to-aws --backingstores=azure-resource,aws-resource --placement Mirror", "noobaa obc create mirrored-bucket --bucketclass=mirror-to-aws", "apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <bucket-class-name> namespace: openshift-storage spec: placementPolicy: tiers: - backingStores: - 
<backing-store-1> - <backing-store-2> placement: Mirror", "additionalConfig: bucketclass: mirror-to-aws", "{ \"Version\": \"NewVersion\", \"Statement\": [ { \"Sid\": \"Example\", \"Effect\": \"Allow\", \"Principal\": [ \"[email protected]\" ], \"Action\": [ \"s3:GetObject\" ], \"Resource\": [ \"arn:aws:s3:::john_bucket\" ] } ] }", "aws --endpoint ENDPOINT --no-verify-ssl s3api put-bucket-policy --bucket MyBucket --policy BucketPolicy", "aws --endpoint https://s3-openshift-storage.apps.gogo44.noobaa.org --no-verify-ssl s3api put-bucket-policy -bucket MyBucket --policy file://BucketPolicy", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "noobaa account create <noobaa-account-name> [--allow_bucket_create=true] [--default_resource='']", "apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <obc-name> spec: generateBucketName: <obc-bucket-name> storageClassName: openshift-storage.noobaa.io", "apiVersion: batch/v1 kind: Job metadata: name: testjob spec: template: spec: restartPolicy: OnFailure containers: - image: <your application image> name: test env: - name: BUCKET_NAME valueFrom: configMapKeyRef: name: <obc-name> key: BUCKET_NAME - name: BUCKET_HOST valueFrom: configMapKeyRef: name: <obc-name> key: BUCKET_HOST - name: BUCKET_PORT valueFrom: configMapKeyRef: name: <obc-name> key: BUCKET_PORT - name: AWS_ACCESS_KEY_ID valueFrom: secretKeyRef: name: <obc-name> key: AWS_ACCESS_KEY_ID - name: AWS_SECRET_ACCESS_KEY valueFrom: secretKeyRef: name: <obc-name> key: AWS_SECRET_ACCESS_KEY", "oc apply -f <yaml.file>", "oc get cm <obc-name> -o yaml", "oc get secret <obc_name> -o yaml", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "noobaa obc create <obc-name> -n openshift-storage", "INFO[0001] ✅ Created: ObjectBucketClaim \"test21obc\"", "oc get obc -n openshift-storage", "NAME STORAGE-CLASS PHASE AGE test21obc openshift-storage.noobaa.io Bound 38s", "oc get obc test21obc -o yaml -n openshift-storage", "apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: creationTimestamp: \"2019-10-24T13:30:07Z\" finalizers: - objectbucket.io/finalizer generation: 2 labels: app: noobaa bucket-provisioner: openshift-storage.noobaa.io-obc noobaa-domain: openshift-storage.noobaa.io name: test21obc namespace: openshift-storage resourceVersion: \"40756\" selfLink: /apis/objectbucket.io/v1alpha1/namespaces/openshift-storage/objectbucketclaims/test21obc uid: 64f04cba-f662-11e9-bc3c-0295250841af spec: ObjectBucketName: obc-openshift-storage-test21obc bucketName: test21obc-933348a6-e267-4f82-82f1-e59bf4fe3bb4 generateBucketName: test21obc storageClassName: openshift-storage.noobaa.io status: phase: Bound", "oc get -n openshift-storage secret test21obc -o yaml", "apiVersion: v1 data: AWS_ACCESS_KEY_ID: c0M0R2xVanF3ODR3bHBkVW94cmY= AWS_SECRET_ACCESS_KEY: Wi9kcFluSWxHRzlWaFlzNk1hc0xma2JXcjM1MVhqa051SlBleXpmOQ== kind: Secret metadata: creationTimestamp: \"2019-10-24T13:30:07Z\" finalizers: - objectbucket.io/finalizer labels: app: noobaa bucket-provisioner: openshift-storage.noobaa.io-obc noobaa-domain: openshift-storage.noobaa.io name: test21obc namespace: openshift-storage ownerReferences: - apiVersion: objectbucket.io/v1alpha1 
blockOwnerDeletion: true controller: true kind: ObjectBucketClaim name: test21obc uid: 64f04cba-f662-11e9-bc3c-0295250841af resourceVersion: \"40751\" selfLink: /api/v1/namespaces/openshift-storage/secrets/test21obc uid: 65117c1c-f662-11e9-9094-0a5305de57bb type: Opaque", "oc get -n openshift-storage cm test21obc -o yaml", "apiVersion: v1 data: BUCKET_HOST: 10.0.171.35 BUCKET_NAME: test21obc-933348a6-e267-4f82-82f1-e59bf4fe3bb4 BUCKET_PORT: \"31242\" BUCKET_REGION: \"\" BUCKET_SUBREGION: \"\" kind: ConfigMap metadata: creationTimestamp: \"2019-10-24T13:30:07Z\" finalizers: - objectbucket.io/finalizer labels: app: noobaa bucket-provisioner: openshift-storage.noobaa.io-obc noobaa-domain: openshift-storage.noobaa.io name: test21obc namespace: openshift-storage ownerReferences: - apiVersion: objectbucket.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ObjectBucketClaim name: test21obc uid: 64f04cba-f662-11e9-bc3c-0295250841af resourceVersion: \"40752\" selfLink: /api/v1/namespaces/openshift-storage/configmaps/test21obc uid: 651c6501-f662-11e9-9094-0a5305de57bb", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "noobaa namespacestore create aws-s3 <namespacestore> --access-key <AWS ACCESS KEY> --secret-key <AWS SECRET ACCESS KEY> --target-bucket <bucket-name>", "apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64> AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>", "apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <namespacestore> namespace: openshift-storage spec: awsS3: secret: name: <namespacestore-secret-name> namespace: <namespace-secret> targetBucket: <target-bucket> type: aws-s3", "noobaa bucketclass create namespace-bucketclass cache <my-cache-bucket-class> --backingstores <backing-store> --hub-resource <namespacestore>", "noobaa obc create <my-bucket-claim> my-app --bucketclass <custom-bucket-class>", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "noobaa namespacestore create ibm-cos <namespacestore> --endpoint <IBM COS ENDPOINT> --access-key <IBM ACCESS KEY> --secret-key <IBM SECRET ACCESS KEY> --target-bucket <bucket-name>", "apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64> IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>", "apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <namespacestore> namespace: openshift-storage spec: s3Compatible: endpoint: <IBM COS ENDPOINT> secret: name: <backingstore-secret-name> namespace: <namespace-secret> signatureVersion: v2 targetBucket: <target-bucket> type: ibm-cos", "noobaa bucketclass create namespace-bucketclass cache <my-bucket-class> --backingstores <backing-store> --hubResource <namespacestore>", "noobaa obc create <my-bucket-claim> my-app --bucketclass <custom-bucket-class>" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/deploying_and_managing_openshift_data_foundation_using_google_cloud/adding-storage-resources-for-hybrid-or-multicloud
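The automatic endpoint scaling described in section 12.14 can be observed from the CLI. The following is a minimal sketch, not taken from the documentation above; it assumes the default openshift-storage namespace and that the MCG S3 endpoints run in a deployment named noobaa-endpoint with an associated HorizontalPodAutoscaler, names which may differ between OpenShift Data Foundation versions.

```bash
# Sketch: watch MCG endpoint scaling while the S3 service is under load.
# The "noobaa-endpoint" deployment/HPA names are assumptions; list the actual
# resources with: oc get deployments,hpa -n openshift-storage

# Current number of endpoint pods (starts at 1 on a fresh cluster).
oc get deployment noobaa-endpoint -n openshift-storage

# Autoscaler state: target CPU utilization, current replicas, min/max.
oc get hpa -n openshift-storage

# Per-pod CPU usage, to see when the 80% threshold is being approached.
oc adm top pods -n openshift-storage | grep noobaa-endpoint
```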
Chapter 7. OVN-Kubernetes network plugin
Chapter 7. OVN-Kubernetes network plugin 7.1. About the OVN-Kubernetes network plugin The Red Hat OpenShift Service on AWS cluster uses a virtualized network for pod and service networks. Part of Red Hat OpenShift Networking, the OVN-Kubernetes network plugin is the default network provider for Red Hat OpenShift Service on AWS. OVN-Kubernetes is based on Open Virtual Network (OVN) and provides an overlay-based networking implementation. A cluster that uses the OVN-Kubernetes plugin also runs Open vSwitch (OVS) on each node. OVN configures OVS on each node to implement the declared network configuration. Note OVN-Kubernetes is the default networking solution for Red Hat OpenShift Service on AWS and single-node OpenShift deployments. OVN-Kubernetes, which arose from the OVS project, uses many of the same constructs, such as open flow rules, to determine how packets travel through the network. For more information, see the Open Virtual Network website . OVN-Kubernetes is a series of daemons for OVS that translate virtual network configurations into OpenFlow rules. OpenFlow is a protocol for communicating with network switches and routers, providing a means for remotely controlling the flow of network traffic on a network device so that network administrators can configure, manage, and monitor the flow of network traffic. OVN-Kubernetes provides more of the advanced functionality not available with OpenFlow . OVN supports distributed virtual routing, distributed logical switches, access control, Dynamic Host Configuration Protocol (DHCP), and DNS. OVN implements distributed virtual routing within logic flows that equate to open flows. For example, if you have a pod that sends out a DHCP request to the DHCP server on the network, a logic flow rule in the request helps the OVN-Kubernetes handle the packet so that the server can respond with gateway, DNS server, IP address, and other information. OVN-Kubernetes runs a daemon on each node. There are daemon sets for the databases and for the OVN controller that run on every node. The OVN controller programs the Open vSwitch daemon on the nodes to support the network provider features: egress IPs, firewalls, routers, hybrid networking, IPSEC encryption, IPv6, network policy, network policy logs, hardware offloading, and multicast. 7.1.1. OVN-Kubernetes purpose The OVN-Kubernetes network plugin is an open-source, fully-featured Kubernetes CNI plugin that uses Open Virtual Network (OVN) to manage network traffic flows. OVN is a community developed, vendor-agnostic network virtualization solution. The OVN-Kubernetes network plugin uses the following technologies: OVN to manage network traffic flows. Kubernetes network policy support and logs, including ingress and egress rules. The Generic Network Virtualization Encapsulation (Geneve) protocol, rather than Virtual Extensible LAN (VXLAN), to create an overlay network between nodes. The OVN-Kubernetes network plugin supports the following capabilities: Hybrid clusters that can run both Linux and Microsoft Windows workloads. This environment is known as hybrid networking . Offloading of network data processing from the host central processing unit (CPU) to compatible network cards and data processing units (DPUs). This is known as hardware offloading . IPv4-primary dual-stack networking on bare-metal, VMware vSphere, IBM Power(R), IBM Z(R), and Red Hat OpenStack Platform (RHOSP) platforms. IPv6 single-stack networking on RHOSP and bare metal platforms. 
IPv6-primary dual-stack networking for a cluster running on a bare-metal, a VMware vSphere, or an RHOSP platform. Egress firewall devices and egress IP addresses. Egress router devices that operate in redirect mode. IPsec encryption of intracluster communications. 7.1.2. OVN-Kubernetes IPv6 and dual-stack limitations The OVN-Kubernetes network plugin has the following limitations: For clusters configured for dual-stack networking, both IPv4 and IPv6 traffic must use the same network interface as the default gateway. If this requirement is not met, pods on the host in the ovnkube-node daemon set enter the CrashLoopBackOff state. If you display a pod with a command such as oc get pod -n openshift-ovn-kubernetes -l app=ovnkube-node -o yaml , the status field contains more than one message about the default gateway, as shown in the following output: I1006 16:09:50.985852 60651 helper_linux.go:73] Found default gateway interface br-ex 192.168.127.1 I1006 16:09:50.985923 60651 helper_linux.go:73] Found default gateway interface ens4 fe80::5054:ff:febe:bcd4 F1006 16:09:50.985939 60651 ovnkube.go:130] multiple gateway interfaces detected: br-ex ens4 The only resolution is to reconfigure the host networking so that both IP families use the same network interface for the default gateway. For clusters configured for dual-stack networking, both the IPv4 and IPv6 routing tables must contain the default gateway. If this requirement is not met, pods on the host in the ovnkube-node daemon set enter the CrashLoopBackOff state. If you display a pod with a command such as oc get pod -n openshift-ovn-kubernetes -l app=ovnkube-node -o yaml , the status field contains more than one message about the default gateway, as shown in the following output: I0512 19:07:17.589083 108432 helper_linux.go:74] Found default gateway interface br-ex 192.168.123.1 F0512 19:07:17.589141 108432 ovnkube.go:133] failed to get default gateway interface The only resolution is to reconfigure the host networking so that both IP families contain the default gateway. 7.1.3. Session affinity Session affinity is a feature that applies to Kubernetes Service objects. You can use session affinity if you want to ensure that each time you connect to a <service_VIP>:<Port>, the traffic is always load balanced to the same back end. For more information, including how to set session affinity based on a client's IP address, see Session affinity . Stickiness timeout for session affinity The OVN-Kubernetes network plugin for Red Hat OpenShift Service on AWS calculates the stickiness timeout for a session from a client based on the last packet. For example, if you run a curl command 10 times, the sticky session timer starts from the tenth packet not the first. As a result, if the client is continuously contacting the service, then the session never times out. The timeout starts when the service has not received a packet for the amount of time set by the timeoutSeconds parameter. 7.2. Configuring an egress IP address As a cluster administrator, you can configure the OVN-Kubernetes Container Network Interface (CNI) network plugin to assign one or more egress IP addresses to a namespace, or to specific pods in a namespace. Important Configuring egress IPs is not supported for ROSA with HCP clusters at this time. Important In an installer-provisioned infrastructure cluster, do not assign egress IP addresses to the infrastructure node that already hosts the ingress VIP. 
For more information, see the Red Hat Knowledgebase solution POD from the egress IP enabled namespace cannot access OCP route in an IPI cluster when the egress IP is assigned to the infra node that already hosts the ingress VIP . 7.2.1. Egress IP address architectural design and implementation The Red Hat OpenShift Service on AWS egress IP address functionality allows you to ensure that the traffic from one or more pods in one or more namespaces has a consistent source IP address for services outside the cluster network. For example, you might have a pod that periodically queries a database that is hosted on a server outside of your cluster. To enforce access requirements for the server, a packet filtering device is configured to allow traffic only from specific IP addresses. To ensure that you can reliably allow access to the server from only that specific pod, you can configure a specific egress IP address for the pod that makes the requests to the server. An egress IP address assigned to a namespace is different from an egress router, which is used to send traffic to specific destinations. In ROSA with HCP clusters, application pods and ingress router pods run on the same node. If you configure an egress IP address for an application project in this scenario, the IP address is not used when you send a request to a route from the application project. Important The assignment of egress IP addresses to control plane nodes with the EgressIP feature is not supported. The following examples illustrate the annotation from nodes on several public cloud providers. The annotations are indented for readability. Example cloud.network.openshift.io/egress-ipconfig annotation on AWS cloud.network.openshift.io/egress-ipconfig: [ { "interface":"eni-078d267045138e436", "ifaddr":{"ipv4":"10.0.128.0/18"}, "capacity":{"ipv4":14,"ipv6":15} } ] The following sections describe the IP address capacity for supported public cloud environments for use in your capacity calculation. 7.2.1.1. Amazon Web Services (AWS) IP address capacity limits On AWS, constraints on IP address assignments depend on the instance type configured. For more information, see IP addresses per network interface per instance type 7.2.1.2. Assignment of egress IPs to pods To assign one or more egress IPs to a namespace or specific pods in a namespace, the following conditions must be satisfied: At least one node in your cluster must have the k8s.ovn.org/egress-assignable: "" label. An EgressIP object exists that defines one or more egress IP addresses to use as the source IP address for traffic leaving the cluster from pods in a namespace. Important If you create EgressIP objects prior to labeling any nodes in your cluster for egress IP assignment, Red Hat OpenShift Service on AWS might assign every egress IP address to the first node with the k8s.ovn.org/egress-assignable: "" label. To ensure that egress IP addresses are widely distributed across nodes in the cluster, always apply the label to the nodes you intent to host the egress IP addresses before creating any EgressIP objects. 7.2.1.3. Assignment of egress IPs to nodes When creating an EgressIP object, the following conditions apply to nodes that are labeled with the k8s.ovn.org/egress-assignable: "" label: An egress IP address is never assigned to more than one node at a time. An egress IP address is equally balanced between available nodes that can host the egress IP address. 
If the spec.EgressIPs array in an EgressIP object specifies more than one IP address, the following conditions apply: No node will ever host more than one of the specified IP addresses. Traffic is balanced roughly equally between the specified IP addresses for a given namespace. If a node becomes unavailable, any egress IP addresses assigned to it are automatically reassigned, subject to the previously described conditions. When a pod matches the selector for multiple EgressIP objects, there is no guarantee which of the egress IP addresses that are specified in the EgressIP objects is assigned as the egress IP address for the pod. Additionally, if an EgressIP object specifies multiple egress IP addresses, there is no guarantee which of the egress IP addresses might be used. For example, if a pod matches a selector for an EgressIP object with two egress IP addresses, 10.10.20.1 and 10.10.20.2 , either might be used for each TCP connection or UDP conversation. 7.2.1.4. Architectural diagram of an egress IP address configuration The following diagram depicts an egress IP address configuration. The diagram describes four pods in two different namespaces running on three nodes in a cluster. The nodes are assigned IP addresses from the 192.168.126.0/18 CIDR block on the host network. Both Node 1 and Node 3 are labeled with k8s.ovn.org/egress-assignable: "" and thus available for the assignment of egress IP addresses. The dashed lines in the diagram depict the traffic flow from pod1, pod2, and pod3 traveling through the pod network to egress the cluster from Node 1 and Node 3. When an external service receives traffic from any of the pods selected by the example EgressIP object, the source IP address is either 192.168.126.10 or 192.168.126.102 . The traffic is balanced roughly equally between these two nodes. The following resources from the diagram are illustrated in detail: Namespace objects The namespaces are defined in the following manifest: Namespace objects apiVersion: v1 kind: Namespace metadata: name: namespace1 labels: env: prod --- apiVersion: v1 kind: Namespace metadata: name: namespace2 labels: env: prod EgressIP object The following EgressIP object describes a configuration that selects all pods in any namespace with the env label set to prod . The egress IP addresses for the selected pods are 192.168.126.10 and 192.168.126.102 . EgressIP object apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: egressips-prod spec: egressIPs: - 192.168.126.10 - 192.168.126.102 namespaceSelector: matchLabels: env: prod status: items: - node: node1 egressIP: 192.168.126.10 - node: node3 egressIP: 192.168.126.102 For the configuration in the example, Red Hat OpenShift Service on AWS assigns both egress IP addresses to the available nodes. The status field reflects whether and where the egress IP addresses are assigned. 7.2.2. EgressIP object The following YAML describes the API for the EgressIP object. The scope of the object is cluster-wide; it is not created in a namespace. apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: <name> 1 spec: egressIPs: 2 - <ip_address> namespaceSelector: 3 ... podSelector: 4 ... 1 The name for the EgressIPs object. 2 An array of one or more IP addresses. 3 One or more selectors for the namespaces to associate the egress IP addresses with. 4 Optional: One or more selectors for pods in the specified namespaces to associate egress IP addresses with. Applying these selectors allows for the selection of a subset of pods within a namespace. 
The following YAML describes the stanza for the namespace selector: Namespace selector stanza namespaceSelector: 1 matchLabels: <label_name>: <label_value> 1 One or more matching rules for namespaces. If more than one match rule is provided, all matching namespaces are selected. The following YAML describes the optional stanza for the pod selector: Pod selector stanza podSelector: 1 matchLabels: <label_name>: <label_value> 1 Optional: One or more matching rules for pods in the namespaces that match the specified namespaceSelector rules. If specified, only pods that match are selected. Others pods in the namespace are not selected. In the following example, the EgressIP object associates the 192.168.126.11 and 192.168.126.102 egress IP addresses with pods that have the app label set to web and are in the namespaces that have the env label set to prod : Example EgressIP object apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: egress-group1 spec: egressIPs: - 192.168.126.11 - 192.168.126.102 podSelector: matchLabels: app: web namespaceSelector: matchLabels: env: prod In the following example, the EgressIP object associates the 192.168.127.30 and 192.168.127.40 egress IP addresses with any pods that do not have the environment label set to development : Example EgressIP object apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: egress-group2 spec: egressIPs: - 192.168.127.30 - 192.168.127.40 namespaceSelector: matchExpressions: - key: environment operator: NotIn values: - development 7.2.3. Labeling a node to host egress IP addresses You can apply the k8s.ovn.org/egress-assignable="" label to a node in your cluster so that Red Hat OpenShift Service on AWS can assign one or more egress IP addresses to the node. Prerequisites Install the ROSA CLI ( rosa ). Log in to the cluster as a cluster administrator. Procedure To label a node so that it can host one or more egress IP addresses, enter the following command: USD rosa edit machinepool <machinepool_name> --cluster=<cluster_name> --labels "k8s.ovn.org/egress-assignable=" Important This command replaces any exciting node labels on your machinepool. You should include any of the desired labels to the --labels field to ensure that your existing node labels persist. 7.2.4. steps Assigning egress IPs 7.2.5. Additional resources LabelSelector meta/v1 LabelSelectorRequirement meta/v1 7.3. Migrating from OpenShift SDN network plugin to OVN-Kubernetes network plugin As a Red Hat OpenShift Service on AWS (ROSA) (classic architecture) cluster administrator, you can initiate the migration from the OpenShift SDN network plugin to the OVN-Kubernetes network plugin and verify the migration status using the ROSA CLI. Some considerations before starting migration initiation are: The cluster version must be 4.16.24 and above. The migration process cannot be interrupted. Migrating back to the SDN network plugin is not possible. Cluster nodes will be rebooted during migration. There will be no impact to workloads that are resilient to node disruptions. Migration time can vary between several minutes and hours, depending on the cluster size and workload configurations. 7.3.1. Initiating migration using the ROSA CLI Warning You can only initiate migration on clusters that are version 4.16.24 and above. 
To initiate the migration, run the following command: $ rosa edit cluster -c <cluster_id> 1 --network-type OVNKubernetes --ovn-internal-subnets <configuration> 2 1 Replace <cluster_id> with the ID of the cluster you want to migrate to the OVN-Kubernetes network plugin. 2 Optional: Users can create key-value pairs to configure internal subnets using any or all of the options join, masquerade, and transit, along with a single CIDR per option. For example, --ovn-internal-subnets="join=0.0.0.0/24,transit=0.0.0.0/24,masquerade=0.0.0.0/24" . Important You cannot include the optional flag --ovn-internal-subnets in the command unless you define a value for the flag --network-type . Verification To check the status of the migration, run the following command: $ rosa describe cluster -c <cluster_id> 1 1 Replace <cluster_id> with the ID of the cluster to check the migration status.
[ "I1006 16:09:50.985852 60651 helper_linux.go:73] Found default gateway interface br-ex 192.168.127.1 I1006 16:09:50.985923 60651 helper_linux.go:73] Found default gateway interface ens4 fe80::5054:ff:febe:bcd4 F1006 16:09:50.985939 60651 ovnkube.go:130] multiple gateway interfaces detected: br-ex ens4", "I0512 19:07:17.589083 108432 helper_linux.go:74] Found default gateway interface br-ex 192.168.123.1 F0512 19:07:17.589141 108432 ovnkube.go:133] failed to get default gateway interface", "cloud.network.openshift.io/egress-ipconfig: [ { \"interface\":\"eni-078d267045138e436\", \"ifaddr\":{\"ipv4\":\"10.0.128.0/18\"}, \"capacity\":{\"ipv4\":14,\"ipv6\":15} } ]", "apiVersion: v1 kind: Namespace metadata: name: namespace1 labels: env: prod --- apiVersion: v1 kind: Namespace metadata: name: namespace2 labels: env: prod", "apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: egressips-prod spec: egressIPs: - 192.168.126.10 - 192.168.126.102 namespaceSelector: matchLabels: env: prod status: items: - node: node1 egressIP: 192.168.126.10 - node: node3 egressIP: 192.168.126.102", "apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: <name> 1 spec: egressIPs: 2 - <ip_address> namespaceSelector: 3 podSelector: 4", "namespaceSelector: 1 matchLabels: <label_name>: <label_value>", "podSelector: 1 matchLabels: <label_name>: <label_value>", "apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: egress-group1 spec: egressIPs: - 192.168.126.11 - 192.168.126.102 podSelector: matchLabels: app: web namespaceSelector: matchLabels: env: prod", "apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: egress-group2 spec: egressIPs: - 192.168.127.30 - 192.168.127.40 namespaceSelector: matchExpressions: - key: environment operator: NotIn values: - development", "rosa edit machinepool <machinepool_name> --cluster=<cluster_name> --labels \"k8s.ovn.org/egress-assignable=\"", "rosa edit cluster -c <cluster_id> 1 --network-type OVNKubernetes --ovn-internal-subnets <configuration> 2", "rosa describe cluster -c <cluster_id> 1" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/networking/ovn-kubernetes-network-plugin
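As a quick sanity check for the egress IP configuration described in Chapter 7, the following sketch (not part of the product documentation above) lists the nodes labeled as egress-assignable and shows which node each egress IP was assigned to through the status field of the EgressIP object. The object name egressips-prod is taken from the example above.

```bash
# Nodes eligible to host egress IPs (labeled k8s.ovn.org/egress-assignable).
oc get nodes -l k8s.ovn.org/egress-assignable -o name

# Cluster-scoped EgressIP objects and their current assignments.
oc get egressip

# The status.items list shows the node / egress IP pairs, as in the example above.
oc get egressip egressips-prod -o yaml
```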
Release Notes for Red Hat build of Apache Camel for Spring Boot
Release Notes for Red Hat build of Apache Camel for Spring Boot Red Hat build of Apache Camel 4.4 What's new in Red Hat build of Apache Camel Red Hat build of Apache Camel Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/release_notes_for_red_hat_build_of_apache_camel_for_spring_boot/index
15.13. Configuring Changelog Encryption
15.13. Configuring Changelog Encryption To increase security, Directory Server supports encrypting the changelog. This section explains how to enable this feature. Prerequisites The server must have a certificate and key stored in the network security services (NSS) database. Therefore, enable TLS encryption on the server as described in Section 9.4.1, "Enabling TLS in Directory Server" . Procedure To enable changelog encryption: Except for the server on which you want to enable changelog encryption, stop all instances in the replication topology by entering the following command: On the server where you want to enable changelog encryption: Export the changelog, for example, to the /tmp/changelog.ldif file: Stop the instance: Add the following setting to the dn: cn=changelog5,cn=config entry in the /etc/dirsrv/slapd- instance_name /dse.ldif file: Start the instance: Import the changelog from the /tmp/changelog.ldif file: Start all instances on the other servers in the replication topology using the following command: Verification To verify that the changelog is encrypted, perform the following steps on the server with the encrypted changelog: Make a change in the LDAP directory, such as updating an entry. Stop the instance: Enter the following command to display parts of the changelog: If the changelog is encrypted, you see only encrypted data. Start the instance: Additional Resources Section 15.15, "Exporting the Replication Changelog" Section 15.16, "Importing the Replication Changelog from an LDIF-formatted Changelog Dump"
[ "dsctl instance_name stop", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com replication dump-changelog -o /tmp/changelog.ldif", "dsctl instance_name stop", "nsslapd-encryptionalgorithm: AES", "dsctl instance_name start", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com replication restore-changelog from-ldif /tmp/changelog.ldif", "dsctl instance_name start", "dsctl stop instance_name", "dbscan -f /var/lib/dirsrv/slapd- instance_name /changelogdb/ replica_name _ replGen .db | tail -50", "dsctl start instance_name" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/configuring-changelog-encryption
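In addition to the dbscan check above, you can confirm that the nsslapd-encryptionalgorithm setting was applied by reading it back from the cn=changelog5,cn=config entry while the instance is running. This is a minimal sketch, not part of the procedure above; the server URL and bind DN follow the examples in this section.

```bash
# Sketch: verify the changelog encryption setting on a running instance.
# Uses the bind DN and host name shown in the examples in this section.
ldapsearch -H ldap://server.example.com -D "cn=Directory Manager" -W \
    -b "cn=changelog5,cn=config" -s base nsslapd-encryptionalgorithm
```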
Chapter 101. OpenTelemetry
Chapter 101. OpenTelemetry Since Camel 3.5 The OpenTelemetry component is used for tracing and timing the incoming and outgoing Camel messages using OpenTelemetry . Events (spans) are captured for incoming and outgoing messages that are sent to/from Camel. 101.1. Dependencies Add the following dependency to your pom.xml for this component: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-opentelemetry-starter</artifactId> </dependency> 101.2. Configuration The configuration properties for the OpenTelemetry tracer are: Option Default Description excludePatterns Sets exclude pattern(s) that will disable tracing for Camel messages that matches the pattern. The content is a Set<String> where the key is a pattern. The pattern uses the rules from Intercept. encoding false Sets whether the header keys need to be encoded (connector specific) or not. The value is a boolean. Dashes need for instances to be encoded for JMS property keys. 101.2.1. Using Camel OpenTelemetry Add the camel-opentelemetry component in your POM, in addition to any specific dependencies associated with the chosen OpenTelemetry compliant Tracer. To explicitly configure OpenTelemetry support, instantiate the OpenTelemetryTracer and initialize the camel context. You can optionally specify a Tracer , or alternatively it can be implicitly discovered using the Registry import io.opentelemetry.api.OpenTelemetry; import io.opentelemetry.api.trace.Tracer; OpenTelemetryTracer otelTracer = new OpenTelemetryTracer(); // By default it uses the DefaultTracer, but you can override it with a specific OpenTelemetry Tracer implementation. otelTracer.setTracer(...); // And then initialize the context otelTracer.init(camelContext); 101.3. Spring Boot Add the camel-opentelemetry-starter dependency, and then turn on the OpenTracing by annotating the main class with @CamelOpenTelemetry . The OpenTelemetryTracer is implicitly obtained from the camel context's Registry , unless a OpenTelemetryTracer bean has been defined by the application. 101.4. Using tracing strategy The camel-opentelemetry component starter allows you to trace not only the from/to Camel endpoints but also the Java Beans invoked from Camel processor/bean too. By default, Java beans invoked from Camel Processor or bean are categorized as another "span", that is, if user writes .to("bean:beanName?method=methodName") , it will be categorized under the same "span" with the from/to Camel endpoints. To categorize the Java beans invoked from Camel processor/bean under the same "span", you can use the org.apache.camel.opentelemetry.OpenTelemetryTracingStrategy class with the setTracingStrategy() option. import io.opentelemetry.api.OpenTelemetry; import io.opentelemetry.api.trace.Tracer; @Configuration public class CamelOtelConfiguration { public CamelOtelConfiguration(CamelContext camelContext, OpenTelemetryTracer tracer) { var strategy = new OpenTelemetryTracingStrategy(tracer); tracer.setTracingStrategy(strategy); camelContext.getCamelContextExtension().addInterceptStrategy(strategy); } } In case of too much data, you can filter the data by camel.opentelemetry.exclude-patterns property. camel: opentelemetry: exclude-patterns: - ## Set some ID here to filter ## 101.5. Java Agent Download the Java agent . This package includes the instrumentation agent as well as instrumentations for all supported libraries and all available data exporters. The package provides a completely automatic, out-of-the-box experience. 
Enable the instrumentation agent using the -javaagent flag to the JVM. java -javaagent:path/to/opentelemetry-javaagent.jar \ -jar myapp.jar By default, the OpenTelemetry Java agent uses the OTLP exporter configured to send data to an OpenTelemetry collector at http://localhost:4317 . Configuration parameters are passed as Java system properties ( -D flags) or as environment variables. See Configuring the agent and OpenTelemetry auto-configuration for the full list of configuration items. For example: java -javaagent:path/to/opentelemetry-javaagent.jar \ -Dotel.service.name=your-service-name \ -Dotel.traces.exporter=jaeger \ -jar myapp.jar 101.6. Spring Boot Auto-Configuration The component supports 2 options, which are listed below. Name Description Default Type camel.opentelemetry.encoding Activate or deactivate the dash encoding in headers (required by JMS) for messaging. Boolean camel.opentelemetry.exclude-patterns Sets exclude pattern(s) that will disable the tracing for the Camel messages that match the pattern. Set 101.7. MDC Logging When MDC Logging is enabled for the active Camel context, the Trace ID and Span ID are added and removed from the MDC for each route, where the keys are trace_id and span_id , respectively.
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-opentelemetry-starter</artifactId> </dependency>", "import io.opentelemetry.api.OpenTelemetry; import io.opentelemetry.api.trace.Tracer; OpenTelemetryTracer otelTracer = new OpenTelemetryTracer(); // By default it uses the DefaultTracer, but you can override it with a specific OpenTelemetry Tracer implementation. otelTracer.setTracer(...); // And then initialize the context otelTracer.init(camelContext);", "import io.opentelemetry.api.OpenTelemetry; import io.opentelemetry.api.trace.Tracer; @Configuration public class CamelOtelConfiguration { public CamelOtelConfiguration(CamelContext camelContext, OpenTelemetryTracer tracer) { var strategy = new OpenTelemetryTracingStrategy(tracer); tracer.setTracingStrategy(strategy); camelContext.getCamelContextExtension().addInterceptStrategy(strategy); } }", "camel: opentelemetry: exclude-patterns: - ## Set some ID here to filter ##", "java -javaagent:path/to/opentelemetry-javaagent.jar -jar myapp.jar", "java -javaagent:path/to/opentelemetry-javaagent.jar -Dotel.service.name=your-service-name -Dotel.traces.exporter=jaeger -jar myapp.jar" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-opentelemetry-component-starter
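For the Spring Boot setup described in Chapter 101, enabling tracing amounts to one annotation on the application class. The sketch below is not taken from the component documentation itself; the import package org.apache.camel.opentelemetry.starter is an assumption based on the camel-opentelemetry-starter artifact and should be checked against the version in use.

```java
// Minimal Spring Boot application with Camel OpenTelemetry tracing enabled.
// The @CamelOpenTelemetry annotation is provided by camel-opentelemetry-starter;
// the package name below is assumed and may differ between releases.
import org.apache.camel.opentelemetry.starter.CamelOpenTelemetry;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@CamelOpenTelemetry
@SpringBootApplication
public class TracedApplication {

    public static void main(String[] args) {
        SpringApplication.run(TracedApplication.class, args);
    }
}
```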
Chapter 8. Pod security authentication and authorization
Chapter 8. Pod security authentication and authorization Pod security admission is an implementation of the Kubernetes pod security standards . Use pod security admission to restrict the behavior of pods. 8.1. Security context constraint synchronization with pod security standards MicroShift includes Kubernetes pod security admission . In addition to the global pod security admission control configuration, a controller exists that applies pod security admission control warn and audit labels to namespaces according to the security context constraint (SCC) permissions of the service accounts that are in a given namespace. Important Namespaces that are defined as part of the cluster payload have pod security admission synchronization disabled permanently. You can enable pod security admission synchronization on other namespaces as necessary. If an Operator is installed in a user-created openshift-* namespace, synchronization is turned on by default after a cluster service version (CSV) is created in the namespace. The controller examines ServiceAccount object permissions to use security context constraints in each namespace. Security context constraints (SCCs) are mapped to pod security profiles based on their field values; the controller uses these translated profiles. Pod security admission warn and audit labels are set to the most privileged pod security profile found in the namespace to prevent warnings and audit logging as pods are created. Namespace labeling is based on consideration of namespace-local service account privileges. Applying pods directly might use the SCC privileges of the user who runs the pod. However, user privileges are not considered during automatic labeling. 8.1.1. Viewing security context constraints in a namespace You can view the security context constraints (SCC) permissions in a given namespace. Prerequisites You have installed the OpenShift CLI ( oc ). Procedure To view the security context constraints in your namespace, run the following command: oc get --show-labels namespace <namespace> 8.2. Controlling pod security admission synchronization You can enable automatic pod security admission synchronization for most namespaces. System defaults are not enforced when the security.openshift.io/scc.podSecurityLabelSync field is empty or set to false . You must set the label to true for synchronization to occur. Important Namespaces that are defined as part of the cluster payload have pod security admission synchronization disabled permanently. These namespaces include: default kube-node-lease kube-system kube-public openshift All system-created namespaces that are prefixed with openshift- , except for openshift-operators By default, all namespaces that have an openshift- prefix are not synchronized. You can enable synchronization for any user-created openshift-* namespaces. You cannot enable synchronization for any system-created openshift-* namespaces, except for openshift-operators . If an Operator is installed in a user-created openshift-* namespace, synchronization is turned on by default after a cluster service version (CSV) is created in the namespace. The synchronized label inherits the permissions of the service accounts in the namespace. Procedure To enable pod security admission label synchronization in a namespace, set the value of the security.openshift.io/scc.podSecurityLabelSync label to true . 
Run the following command: $ oc label namespace <namespace> security.openshift.io/scc.podSecurityLabelSync=true Note You can use the --overwrite flag to reverse the effects of the pod security label synchronization in a namespace.
[ "get --show-labels namespace <namespace>", "oc label namespace <namespace> security.openshift.io/scc.podSecurityLabelSync=true" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/running_applications/authentication-with-microshift
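The label-based workflow in Chapter 8 can be exercised end to end with a few oc commands. The sketch below uses a hypothetical namespace name, my-namespace; the label key and the --overwrite behavior are taken from the chapter above.

```bash
# Enable pod security admission label synchronization for one namespace.
oc label namespace my-namespace security.openshift.io/scc.podSecurityLabelSync=true

# Confirm the synchronization label and the resulting pod-security warn/audit labels.
oc get namespace my-namespace --show-labels

# Reverse the setting later if needed (--overwrite is required to change an existing label).
oc label namespace my-namespace security.openshift.io/scc.podSecurityLabelSync=false --overwrite
```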
Chapter 32. ExternalConfigurationReference schema reference
Chapter 32. ExternalConfigurationReference schema reference Used in: ExternalLogging , JmxPrometheusExporterMetrics Property Property type Description configMapKeyRef ConfigMapKeySelector Reference to the key in the ConfigMap containing the configuration.
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-ExternalConfigurationReference-reference
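As a usage illustration for the configMapKeyRef property described in Chapter 32, the snippet below shows how a Kafka resource might point its Prometheus JMX exporter configuration at a ConfigMap. This is a sketch, not taken from the API reference above; the ConfigMap name and key are hypothetical, and the surrounding metricsConfig/valueFrom structure reflects the JmxPrometheusExporterMetrics type that uses this reference.

```yaml
# Fragment of a Kafka custom resource (sketch): the metrics configuration is
# read from the "my-metrics-config.yml" key of the "kafka-metrics" ConfigMap.
spec:
  kafka:
    metricsConfig:
      type: jmxPrometheusExporter
      valueFrom:
        configMapKeyRef:
          name: kafka-metrics            # hypothetical ConfigMap name
          key: my-metrics-config.yml     # hypothetical key inside the ConfigMap
```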
Upgrading SAP environments from RHEL 8 to RHEL 9
Upgrading SAP environments from RHEL 8 to RHEL 9 Red Hat Enterprise Linux for SAP Solutions 9 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html/upgrading_sap_environments_from_rhel_8_to_rhel_9/index
Chapter 8. New Model Wizards
Chapter 8. New Model Wizards 8.1. Launch New Model Wizards Models are the primary resource used by the Teiid Designer . Creating models can be accomplished by either directly importing existing metadata or by creating them using one of several New Model wizard options. This section describes these wizards in detail. Use one of the following options to launch the New Model Wizard. Click File > New > Teiid Metadata Model action . Select a project or folder in the Model Explorer View and choose the same action in the right-click menu. Select the New button on the main toolbar and select the Teiid Metadata Model action . Note Model names are required to be unique within Teiid Designer . When specifying model names in new model wizards and dialogs, error messages will be presented and you will be prevented from entering an existing name.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/chap-new_model_wizards
4.2. APC Power Switch over SNMP
4.2. APC Power Switch over SNMP Table 4.3, "APC Power Switch over SNMP" lists the fence device parameters used by fence_apc_snmp , the fence agent for APC that logs into the SNP device by means of the SNMP protocol. Table 4.3. APC Power Switch over SNMP luci Field cluster.conf Attribute Description Name name A name for the APC device connected to the cluster into which the fence daemon logs by means of the SNMP protocol. IP Address or Hostname ipaddr The IP address or host name assigned to the device. UDP/TCP port udpport The UDP/TCP port to use for connection with the device; the default value is 161. Login login The login name used to access the device. Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. SNMP Version snmp_version The SNMP version to use (1, 2c, 3); the default value is 1. SNMP Community community The SNMP community string; the default value is private . SNMP Security Level snmp_sec_level The SNMP security level (noAuthNoPriv, authNoPriv, authPriv). SNMP Authentication Protocol snmp_auth_prot The SNMP authentication protocol (MD5, SHA). SNMP Privacy Protocol snmp_priv_prot The SNMP privacy protocol (DES, AES). SNMP Privacy Protocol Password snmp_priv_passwd The SNMP privacy protocol password. SNMP Privacy Protocol Script snmp_priv_passwd_script The script that supplies a password for SNMP privacy protocol. Using this supersedes the SNMP privacy protocol password parameter. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Port (Outlet) Number port The port. Delay (optional) delay The number of seconds to wait before fencing is started. The default value is 0. Figure 4.2, "APC Power Switch over SNMP" shows the configuration screen for adding an APC Power Switch fence device. Figure 4.2. APC Power Switch over SNMP The following is the cluster.conf entry for the fence_apc_snmp device:
[ "<fencedevice> <fencedevice agent=\"fence_apc_snmp\" community=\"private\" ipaddr=\"192.168.0.1\" login=\"root\" name=\"apcpwsnmptst1\" passwd=\"password123\" power_wait=\"60\" snmp_priv_passwd=\"password123\"/> </fencedevices>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/fence_configuration_guide/s1-software-fence-apc-snmp-CA
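Before relying on the cluster.conf entry above, the fence agent can usually be tested directly from the command line. The sketch below is not part of the table above; it maps a few of the listed attributes (ipaddr, login, passwd, community, port) onto the conventional short options of the fence agents shipped with Red Hat Enterprise Linux 6, which should be confirmed with fence_apc_snmp -h before use.

```bash
# Sketch: query the power status of outlet 1 through the APC switch over SNMP.
# Short options follow common fence-agent conventions and may differ by release:
#   -a ipaddr, -l login, -p passwd, -c community, -n port, -o action
fence_apc_snmp -a 192.168.0.1 -l root -p password123 -c private -n 1 -o status
```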
Chapter 1. Introduction to OpenShift Data Foundation
Chapter 1. Introduction to OpenShift Data Foundation Red Hat OpenShift Data Foundation is a highly integrated collection of cloud storage and data services for Red Hat OpenShift Container Platform. It is available as part of the Red Hat OpenShift Container Platform Service Catalog, packaged as an operator to facilitate simple deployment and management. Red Hat OpenShift Data Foundation services are primarily made available to applications by way of storage classes that represent the following components: Block storage devices, catering primarily to database workloads. Prime examples include Red Hat OpenShift Container Platform logging and monitoring, and PostgreSQL. Important Block storage should be used for any worklaod only when it does not require sharing the data across multiple containers. Shared and distributed file system, catering primarily to software development, messaging, and data aggregation workloads. Examples include Jenkins build sources and artifacts, Wordpress uploaded content, Red Hat OpenShift Container Platform registry, and messaging using JBoss AMQ. Multicloud object storage, featuring a lightweight S3 API endpoint that can abstract the storage and retrieval of data from multiple cloud object stores. On premises object storage, featuring a robust S3 API endpoint that scales to tens of petabytes and billions of objects, primarily targeting data intensive applications. Examples include the storage and access of row, columnar, and semi-structured data with applications like Spark, Presto, Red Hat AMQ Streams (Kafka), and even machine learning frameworks like TensorFlow and Pytorch. Note Running PostgresSQL workload on CephFS persistent volume is not supported and it is recommended to use RADOS Block Device (RBD) volume. For more information, see the knowledgebase solution ODF Database Workloads Must Not Use CephFS PVs/PVCs . Red Hat OpenShift Data Foundation version 4.x integrates a collection of software projects, including: Ceph, providing block storage, a shared and distributed file system, and on-premises object storage Ceph CSI, to manage provisioning and lifecycle of persistent volumes and claims NooBaa, providing a Multicloud Object Gateway OpenShift Data Foundation, Rook-Ceph, and NooBaa operators to initialize and manage OpenShift Data Foundation services.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/planning_your_deployment/introduction-to-openshift-data-foundation-4_rhodf
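To make the storage-class description above concrete, the following sketch shows a PersistentVolumeClaim against a block storage class, the kind recommended for database workloads. The storage class name ocs-storagecluster-ceph-rbd is the name commonly created by OpenShift Data Foundation deployments, but it is an assumption here; check oc get storageclass for the names in your cluster.

```yaml
# Sketch: claim a 10Gi RBD-backed block volume for a database workload.
# The storageClassName is assumed; verify it with "oc get storageclass".
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: ocs-storagecluster-ceph-rbd
```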
8.108. ksh
8.108. ksh 8.108.1. RHBA-2014:1381 - ksh bug fix update Updated ksh packages that fix several bugs are now available for Red Hat Enterprise Linux 6. KornShell (ksh) is a Unix shell developed by AT&T Bell Laboratories, which is backward-compatible with the Bourne shell (Bash) and includes many features of the C shell. The most recent version is KSH-93. KornShell complies with the POSIX.2 standard (IEEE Std 1003.2-1992). Bug Fixes BZ# 825520 Due to a race condition in the job list code, the ksh shell could terminate unexpectedly with a segmentation fault when the user had run custom scripts on their system. The race condition has been fixed, and ksh now works as expected when the users run custom scripts. BZ# 1117316 Due to a regression bug, a command substitution containing a pipe could return a non-zero exit code even though the command did not fail. A patch has been provided to fix this bug, and these command substitutions now return correct exit codes. BZ# 1105138 Previously, if a running function was unset by another function in a ksh script, ksh terminated unexpectedly with a segmentation fault. With this update, the ksh code skips resetting of the "running" flag if a function is unset, and ksh no longer crashes in the described scenario. BZ# 1036470 Previously, using the typeset command in a function in ksh resulted in a memory leak. This bug has been fixed and ksh no longer leaks memory when using the typeset command in a function. BZ# 1036802 Previously after upgrading ksh from version "ksh-20100621-19" to "ksh-20120801-10", the standard error output (stderr) got disconnected from the terminal, and the trace output in debug mode thus became invisible. As a consequence, the debugging of scripts on ksh was not always possible. This bug has been fixed and stderr now provides the correct output. BZ# 1066589 Previously, a substitution command failed to execute in ksh if the standard input (stdin), the standard output (stdout), or standard error (stderr) were closed in a certain manner. As a consequence, reading a file using command substitution did not work correctly and the substituted text failed to display under some circumstances. A patch has been applied to address this bug, and command substitution now works as expected. BZ# 1112306 Prior to this update, the compiler optimization dropped parts of the ksh job locking mechanism from the binary code. As a consequence, ksh could terminate unexpectedly with a segmentation fault after it received the SIGCHLD signal. This update ensures that the compiler does not drop parts of the ksh job locking mechanism and ksh works as expected. BZ# 1062296 When running a command that output a lot of data, and then setting a variable from that output, the ksh could become unresponsive. To fix this problem, the combination of I/O redirection and synchronization mechanism has been changed. Now, ksh no longer hangs in this case and commands with large data output complete successfully. BZ# 1036931 Previously, the ksh syntax analyzer did not parse command substitutions inside of here-documents correctly, which led to syntax error messages being reported on the syntactically correct code. A patch has been provided to fix this bug, and ksh now interprets substitutes as intended. BZ# 1116508 Previously, ksh could skip setting of an exit code from the last command of a function. As a result, the function could sometimes return a wrong exit code. 
This update ensures that ksh always uses the exit code from the last command of a function, and functions in ksh now return the correct exit codes as expected. BZ# 1070350 When rounding off a number smaller than 1, ksh incorrectly truncated too many decimals. As a consequence, numbers from the interval between 0.5 and 0.999 were incorrectly rounded to zero. This updated version of ksh ensures that all numbers are now rounded off correctly. BZ# 1078698 Previously, ksh did not handle the brace expansion option correctly and ignored it in most cases. As a result, it was not possible to turn the brace expansion off and braces in file names had to be always escaped to prevent their expansion. The brace expansion code has been updated to take no action when the brace expansion option is turned off, and brace expansion in ksh can now be turned off and on as needed. BZ# 1133582 Previously, ksh did not handle the reuse of low file descriptor numbers when they were not used for the standard input, output, or error output. As a consequence, when any of stdin, stdout, or stderr was closed and its file descriptor was reused in command substitution, the output from that substitution was empty. With this update, ksh has been updated to no longer reuse "low file descriptors" for command substitution. Command substitution in ksh now works correctly even if any of stdin, stdout or stderr is closed. BZ# 1075635 Previously, ksh did not mask exit codes and sometimes returned a number that was too high and could be later interpreted as termination by a signal. Consequently, if ksh was started from the "su" utility and it exited with a high-number exit code, "su" incorrectly generated a core dump. To prevent confusion of a parent process, ksh has been updated to mask exit codes when terminating. BZ# 1047506 Previously, after forking a process, ksh did not clear the argument list of the process properly. Consequently, when listing processes using the ps tool, the arguments field of the forked process could contain some old arguments. The code has been modified to always clear the unused space of the argument string, and the ps tool now prints correct arguments. BZ# 1102627 Previously, ksh did not verify whether it had the execute permission for a given directory before attempting to change into it. Consequently, ksh always assumed it was operating inside that directory, even though the attempt to access the directory was in fact unsuccessful. With this update, ksh checks for the execute permission and reports an error as expected if the permission is missing. BZ# 1023109 Ksh could set a wrong process group when running a script in monitor mode. As a consequence, when such a script attempted to read an input, the ksh process stopped the script. With this update, ksh has been fixed to use the correct process group and the script executes as expected in the described scenario. Users of ksh are advised to upgrade to these updated packages, which fix these bugs.
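The following minimal ksh sketch is illustrative only (the commands and option names are standard ksh93 usage, not taken from the advisory) and demonstrates two of the behaviors covered by these fixes: exit codes from command substitutions that contain a pipe (BZ#1117316), and toggling brace expansion (BZ#1078698).
#!/bin/ksh
# Exit status of a command substitution that contains a pipe is preserved
out=$(printf 'alpha\nbeta\n' | grep alpha)
echo "substitution exit code: $?"    # expected: 0
# Brace expansion can be turned off and on as needed
set +B                               # disable brace expansion
echo {a,b,c}.txt                     # prints the literal string {a,b,c}.txt
set -B                               # re-enable brace expansion
echo {a,b,c}.txt                     # prints a.txt b.txt c.txt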
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/ksh
Chapter 1. Using Data Grid as a Spring Cache provider
Chapter 1. Using Data Grid as a Spring Cache provider Add Data Grid dependencies to your application and use Spring Cache annotations to store data in embedded or remote caches. 1.1. Setting up Spring caching with Data Grid Add the Data Grid dependencies to your Spring application project. If you use remote caches in a Data Grid Server deployment, you should also configure your Hot Rod client properties. Important Data Grid supports Spring version 5 and version 6. Be aware that Spring 6 requires Java 17. The examples in this document include artifacts for the latest version of Spring. If you want to use Spring 5 use: Remote caches: infinispan-spring5-remote Embedded caches: infinispan-spring5-embedded Procedure Add Data Grid and the Spring integration module to your pom.xml . Remote caches: infinispan-spring6-remote Embedded caches: infinispan-spring6-embedded Tip Spring Boot users can add the following artifacts instead of the infinispan-spring6-embedded : For Spring Boot 3 add infinispan-spring-boot3-starter-embedded For Spring Boot 2.x add infinispan-spring-boot-starter-embedded Configure your Hot Rod client to connect to your Data Grid Server deployment in the hotrod-client.properties file. infinispan.client.hotrod.server_list = 127.0.0.1:11222 infinispan.client.hotrod.auth_username=admin infinispan.client.hotrod.auth_password=changeme Spring Cache dependencies Remote caches <dependencies> <dependency> <groupId>org.infinispan</groupId> <artifactId>infinispan-spring6-remote</artifactId> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-context</artifactId> <version>USD{version.spring}</version> </dependency> </dependencies> Embedded caches <dependencies> <dependency> <groupId>org.infinispan</groupId> <artifactId>infinispan-spring6-embedded</artifactId> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-context</artifactId> <version>USD{version.spring}</version> </dependency> </dependencies> Additional resources Configuring Hot Rod Client connections 1.2. Using Data Grid as a Spring Cache provider Add the @EnableCaching annotation to one of your configuration classes and then add the @Cacheable and @CacheEvict annotations to use remote or embedded caches. Prerequisites Add the Data Grid dependencies to your application project. Create the required remote caches and configure Hot Rod client properties if you use a Data Grid Server deployment. Procedure Enable cache annotations in your application context in one of the following ways: Declarative <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:cache="http://www.springframework.org/schema/cache" xmlns:p="http://www.springframework.org/schema/p" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://www.springframework.org/schema/cache http://www.springframework.org/schema/cache/spring-cache.xsd"> <cache:annotation-driven /> </beans> Programmatic @EnableCaching @Configuration public class Config { } Annotate methods with @Cacheable to cache return values. Tip To reference entries in the cache directly, you must include the key attribute. Annotate methods with @CacheEvict to remove old entries from the cache. Additional resources Spring Framework - Default Key Generation 1.3. Spring Cache annotations The @Cacheable and @CacheEvict annotations add cache capabilities to methods. @Cacheable Stores return values in a cache. 
@CacheEvict Controls cache size by removing old entries. @Cacheable Taking Book objects as an example, if you want to cache each instance after loading it from a database with a method such as BookDao#findBook(Integer bookId) , you could add the @Cacheable annotation as follows: @Transactional @Cacheable(value = "books", key = "#bookId") public Book findBook(Integer bookId) {...} With the preceding example, when findBook(Integer bookId) returns a Book instance it gets stored in the cache named books . @CacheEvict With the @CacheEvict annotation, you can specify if you want to evict the entire books cache or only the entries that match a specific #bookId. Entire cache eviction Annotate the deleteAllBookEntries() method with @CacheEvict and add the allEntries parameter as follows: @Transactional @CacheEvict (value="books", key = "#bookId", allEntries = true) public void deleteAllBookEntries() {...} Entry based eviction Annotate the deleteBook(Integer bookId) method with @CacheEvict and specify the key associated to the entry as follows: @Transactional @CacheEvict (value="books", key = "#bookId") public void deleteBook(Integer bookId) {...} 1.4. Configuring timeouts for cache operations The Data Grid Spring Cache provider defaults to blocking behaviour when performing read and write operations. Cache operations are synchronous and do not time out. If necessary you can configure a maximum time to wait for operations to complete before they time out. Procedure Configure the following timeout properties in the context XML for your application on either SpringEmbeddedCacheManagerFactoryBean or SpringRemoteCacheManagerFactoryBean . For remote caches, you can also add these properties to the hotrod-client.properties file. Property Description infinispan.spring.operation.read.timeout Specifies the time, in milliseconds, to wait for read operations to complete. The default is 0 which means unlimited wait time. infinispan.spring.operation.write.timeout Specifies the time, in milliseconds, to wait for write operations to complete. The default is 0 which means unlimited wait time. The following example shows the timeout properties in the context XML for SpringRemoteCacheManagerFactoryBean : <bean id="springRemoteCacheManagerConfiguredUsingConfigurationProperties" class="org.infinispan.spring.remote.provider.SpringRemoteCacheManagerFactoryBean"> <property name="configurationProperties"> <props> <prop key="infinispan.spring.operation.read.timeout">500</prop> <prop key="infinispan.spring.operation.write.timeout">700</prop> </props> </property> </bean>
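For remote caches, the same timeout properties can also be placed in the hotrod-client.properties file, as mentioned above. The following two lines are a minimal sketch; the 500 and 700 millisecond values are illustrative, not recommendations.
infinispan.spring.operation.read.timeout = 500
infinispan.spring.operation.write.timeout = 700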
[ "infinispan.client.hotrod.server_list = 127.0.0.1:11222 infinispan.client.hotrod.auth_username=admin infinispan.client.hotrod.auth_password=changeme", "<dependencies> <dependency> <groupId>org.infinispan</groupId> <artifactId>infinispan-spring6-remote</artifactId> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-context</artifactId> <version>USD{version.spring}</version> </dependency> </dependencies>", "<dependencies> <dependency> <groupId>org.infinispan</groupId> <artifactId>infinispan-spring6-embedded</artifactId> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-context</artifactId> <version>USD{version.spring}</version> </dependency> </dependencies>", "<beans xmlns=\"http://www.springframework.org/schema/beans\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:cache=\"http://www.springframework.org/schema/cache\" xmlns:p=\"http://www.springframework.org/schema/p\" xsi:schemaLocation=\" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://www.springframework.org/schema/cache http://www.springframework.org/schema/cache/spring-cache.xsd\"> <cache:annotation-driven /> </beans>", "@EnableCaching @Configuration public class Config { }", "@Transactional @Cacheable(value = \"books\", key = \"#bookId\") public Book findBook(Integer bookId) {...}", "@Transactional @CacheEvict (value=\"books\", key = \"#bookId\", allEntries = true) public void deleteAllBookEntries() {...}", "@Transactional @CacheEvict (value=\"books\", key = \"#bookId\") public void deleteBook(Integer bookId) {...}", "<bean id=\"springRemoteCacheManagerConfiguredUsingConfigurationProperties\" class=\"org.infinispan.spring.remote.provider.SpringRemoteCacheManagerFactoryBean\"> <property name=\"configurationProperties\"> <props> <prop key=\"infinispan.spring.operation.read.timeout\">500</prop> <prop key=\"infinispan.spring.operation.write.timeout\">700</prop> </props> </property> </bean>" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/using_data_grid_with_spring/spring-cache-provider
Chapter 6. Setting up the relational database
Chapter 6. Setting up the relational database Red Hat Single Sign-On comes with its own embedded Java-based relational database called H2. This is the default database that Red Hat Single Sign-On will use to persist data and really only exists so that you can run the authentication server by default. The H2 database is intended only for example purposes. It is not a supported database, so it is not tested for database migration. We highly recommend that you replace it with a more production ready external database. The H2 database is not very viable in high concurrency situations and should not be used in a cluster either. The purpose of this chapter is to show you how to connect Red Hat Single Sign-On to a more mature database. Red Hat Single Sign-On uses two layered technologies to persist its relational data. The bottom layered technology is JDBC. JDBC is a Java API that is used to connect to a RDBMS. There are different JDBC drivers per database type that are provided by your database vendor. This chapter discusses how to configure Red Hat Single Sign-On to use one of these vendor-specific drivers. The top layered technology for persistence is Hibernate JPA. This is an object to relational mapping API that maps Java Objects to relational data. Most deployments of Red Hat Single Sign-On will never have to touch the configuration aspects of Hibernate, but we will discuss how that is done if you run into that rare circumstance. Note Datasource configuration is covered much more thoroughly in the datasource configuration chapter in the JBoss EAP Configuration Guide . 6.1. Database setup checklist Following are the steps you perform to get an RDBMS configured for Red Hat Single Sign-On. Locate and download a JDBC driver for your database Package the driver JAR into a module and install this module into the server Declare the JDBC driver in the configuration profile of the server Modify the datasource configuration to use your database's JDBC driver Modify the datasource configuration to define the connection parameters to your database This chapter will use PostgresSQL for all its examples. Other databases follow the same steps for installation. 6.2. Packaging the JDBC driver Find and download the JDBC driver JAR for your RDBMS. Before you can use this driver, you must package it up into a module and install it into the server. Modules define JARs that are loaded into the Red Hat Single Sign-On classpath and the dependencies those JARs have on other modules. Procedure Create a directory structure to hold your module definition within the ... /modules/ directory of your Red Hat Single Sign-On distribution. The convention is use the Java package name of the JDBC driver for the name of the directory structure. For PostgreSQL, create the directory org/postgresql/main . Copy your database driver JAR into this directory and create an empty module.xml file within it too. Module Directory Open up the module.xml file and create the following XML: Module XML <?xml version="1.0" encoding="UTF-8"?> <module xmlns="urn:jboss:module:1.3" name="org.postgresql"> <resources> <resource-root path="postgresql-VERSION.jar"/> </resources> <dependencies> <module name="javax.api"/> <module name="javax.transaction.api"/> </dependencies> </module> The module name should match the directory structure of your module. So, org/postgresql maps to org.postgresql . The resource-root path attribute should specify the JAR filename of the driver. The rest are just the normal dependencies that any JDBC driver JAR would have. 
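As an illustrative sketch only (the installation path and the driver version in the JAR filename are placeholders), the module directory for the PostgreSQL driver could be prepared from a shell as follows:
cd /path/to/rh-sso                            # root of your Red Hat Single Sign-On distribution
mkdir -p modules/org/postgresql/main          # directory name follows the driver's Java package name
cp /tmp/postgresql-VERSION.jar modules/org/postgresql/main/
touch modules/org/postgresql/main/module.xml  # then fill in the module XML shown above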
6.3. Declaring and loading the JDBC driver You declare your JDBC into your deployment profile so that it loads and becomes available when the server boots up. Prerequisites You have packaged the JDBC driver. Procedure Declare your JDBC driver by editing one of these files based on your deployment mode: For standalone mode, edit ... /standalone/configuration/standalone.xml . For standalone clustering mode, edit ... /standalone/configuration/standalone-ha.xml . For domain mode, edit ... /domain/configuration/domain.xml . In domain mode, make sure you edit the profile you are using: either auth-server-standalone or auth-server-clustered Within the profile, search for the drivers XML block within the datasources subsystem. You should see a pre-defined driver declared for the H2 JDBC driver. This is where you'll declare the JDBC driver for your external database. JDBC Drivers <subsystem xmlns="urn:jboss:domain:datasources:6.0"> <datasources> ... <drivers> <driver name="h2" module="com.h2database.h2"> <xa-datasource-class>org.h2.jdbcx.JdbcDataSource</xa-datasource-class> </driver> </drivers> </datasources> </subsystem> Within the drivers XML block, declare an additional JDBC driver. Assign any name to this driver. Specify the module attribute which points to the module package that you created earlier for the driver JAR. Specify the driver's Java class. Here's an example of installing a PostgreSQL driver that lives in the module example defined earlier in this chapter. Declare Your JDBC Drivers <subsystem xmlns="urn:jboss:domain:datasources:6.0"> <datasources> ... <drivers> <driver name="postgresql" module="org.postgresql"> <xa-datasource-class>org.postgresql.xa.PGXADataSource</xa-datasource-class> </driver> <driver name="h2" module="com.h2database.h2"> <xa-datasource-class>org.h2.jdbcx.JdbcDataSource</xa-datasource-class> </driver> </drivers> </datasources> </subsystem> 6.4. Modifying the Red Hat Single Sign-On datasource You modify the existing datasource configuration that Red Hat Single Sign-On uses to connect it to your new external database. You'll do this within the same configuration file and XML block that you registered your JDBC driver in. Here's an example that sets up the connection to your new database: Declare Your JDBC Drivers <subsystem xmlns="urn:jboss:domain:datasources:6.0"> <datasources> ... <datasource jndi-name="java:jboss/datasources/KeycloakDS" pool-name="KeycloakDS" enabled="true" use-java-context="true"> <connection-url>jdbc:postgresql://localhost/keycloak</connection-url> <driver>postgresql</driver> <pool> <max-pool-size>20</max-pool-size> </pool> <security> <user-name>William</user-name> <password>password</password> </security> </datasource> ... </datasources> </subsystem> Prerequisites You have already declared your JDBC driver. Procedure Search for the datasource definition for KeycloakDS . You'll first need to modify the connection-url . The documentation for your vendor's JDBC implementation should specify the format for this connection URL value. Define the driver you will use. This is the logical name of the JDBC driver you declared in the section of this chapter. It is expensive to open a new connection to a database every time you want to perform a transaction. To compensate, the datasource implementation maintains a pool of open connections. The max-pool-size specifies the maximum number of connections it will pool. You may want to change the value of this depending on the load of your system. 
Define the database username and password that is needed to connect to the database. This step is necessary for at least PostgreSQL. You may be concerned that these credentials are in clear text in the example. Methods exist to obfuscate these credentials, but these methods are beyond the scope of this guide. Note For more information about datasource features, see the datasource configuration chapter in the JBoss EAP Configuration Guide . 6.5. Database Configuration The configuration for this component is found in the standalone.xml , standalone-ha.xml , or domain.xml file in your distribution. The location of this file depends on your operating mode . Database Config <subsystem xmlns="urn:jboss:domain:keycloak-server:1.2"> ... <spi name="connectionsJpa"> <provider name="default" enabled="true"> <properties> <property name="dataSource" value="java:jboss/datasources/KeycloakDS"/> <property name="initializeEmpty" value="false"/> <property name="migrationStrategy" value="manual"/> <property name="migrationExport" value="USD{jboss.home.dir}/keycloak-database-update.sql"/> </properties> </provider> </spi> ... </subsystem> Possible configuration options are: dataSource JNDI name of the dataSource jta boolean property to specify if datasource is JTA capable driverDialect Value of database dialect. In most cases you don't need to specify this property as dialect will be autodetected by Hibernate. initializeEmpty Initialize database if empty. If set to false the database has to be manually initialized. If you want to manually initialize the database set migrationStrategy to manual which will create a file with SQL commands to initialize the database. Defaults to true. migrationStrategy Strategy to use to migrate database. Valid values are update , manual and validate . Update will automatically migrate the database schema. Manual will export the required changes to a file with SQL commands that you can manually execute on the database. Validate will simply check if the database is up-to-date. migrationExport Path for where to write manual database initialization/migration file. showSql Specify whether Hibernate should show all SQL commands in the console (false by default). This is very verbose! formatSql Specify whether Hibernate should format SQL commands (true by default) globalStatsInterval Will log global statistics from Hibernate about executed DB queries and other things. Statistics are always reported to server log at specified interval (in seconds) and are cleared after each report. schema Specify the database schema to use Note These configuration switches and more are described in the JBoss EAP Development Guide . 6.6. Unicode considerations for databases Database schema in Red Hat Single Sign-On only accounts for Unicode strings in the following special fields: Realms: display name, HTML display name, localization texts (keys and values) Federation Providers: display name Users: username, given name, last name, attribute names and values Groups: name, attribute names and values Roles: name Descriptions of objects Otherwise, characters are limited to those contained in database encoding which is often 8-bit. However, for some database systems, it is possible to enable UTF-8 encoding of Unicode characters and use full Unicode character set in all text fields. Often, this is counterbalanced by shorter maximum length of the strings than in case of 8-bit encodings. Some of the databases require special settings to database and/or JDBC driver to be able to handle Unicode characters. 
Please find the settings for your database below. Note that if a database is listed here, it can still work properly provided it handles UTF-8 encoding properly both on the level of database and JDBC driver. Technically, the key criterion for Unicode support for all fields is whether the database allows setting of Unicode character set for VARCHAR and CHAR fields. If yes, there is a high chance that Unicode will be plausible, usually at the expense of field length. If it only supports Unicode in NVARCHAR and NCHAR fields, Unicode support for all text fields is unlikely as Keycloak schema uses VARCHAR and CHAR fields extensively. 6.6.1. Oracle database Unicode characters are properly handled provided the database was created with Unicode support in VARCHAR and CHAR fields (e.g. by using AL32UTF8 character set as the database character set). No special settings is needed for JDBC driver. If the database character set is not Unicode, then to use Unicode characters in the special fields, the JDBC driver needs to be configured with the connection property oracle.jdbc.defaultNChar set to true . It might be wise, though not strictly necessary, to also set the oracle.jdbc.convertNcharLiterals connection property to true . These properties can be set either as system properties or as connection properties. Please note that setting oracle.jdbc.defaultNChar may have negative impact on performance. For details, please refer to Oracle JDBC driver configuration documentation. 6.6.2. Microsoft SQL Server database Unicode characters are properly handled only for the special fields. No special settings of JDBC driver or database is necessary. 6.6.3. MySQL database Unicode characters are properly handled provided the database was created with Unicode support in VARCHAR and CHAR fields in the CREATE DATABASE command (e.g. by using utf8 character set as the default database character set in MySQL 5.5. Please note that utf8mb4 character set does not work due to different storage requirements to utf8 character set [1] ). Note that in this case, length restriction to non-special fields does not apply because columns are created to accommodate given amount of characters, not bytes. If the database default character set does not allow storing Unicode, only the special fields allow storing Unicode values. At the side of JDBC driver settings, it is necessary to add a connection property characterEncoding=UTF-8 to the JDBC connection settings. 6.6.4. PostgreSQL database Unicode is supported when the database character set is UTF8 . In that case, Unicode characters can be used in any field, there is no reduction of field length for non-special fields. No special settings of JDBC driver is necessary. The character set of a PostgreSQL database is determined at the time it is created. You can determine the default character set for a PostgreSQL cluster with the SQL command show server_encoding; If the default character set is not UTF 8, then you can create the database with UTF8 as its character set like this: create database keycloak with encoding 'UTF8'; [1] Tracked as https://issues.redhat.com/browse/KEYCLOAK-3873
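As a sketch of the MySQL case described above (the host, port, and database name are placeholders), the characterEncoding connection property is appended to the connection URL in the same KeycloakDS datasource definition shown earlier:
<connection-url>jdbc:mysql://localhost:3306/keycloak?characterEncoding=UTF-8</connection-url>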
[ "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <module xmlns=\"urn:jboss:module:1.3\" name=\"org.postgresql\"> <resources> <resource-root path=\"postgresql-VERSION.jar\"/> </resources> <dependencies> <module name=\"javax.api\"/> <module name=\"javax.transaction.api\"/> </dependencies> </module>", "<subsystem xmlns=\"urn:jboss:domain:datasources:6.0\"> <datasources> <drivers> <driver name=\"h2\" module=\"com.h2database.h2\"> <xa-datasource-class>org.h2.jdbcx.JdbcDataSource</xa-datasource-class> </driver> </drivers> </datasources> </subsystem>", "<subsystem xmlns=\"urn:jboss:domain:datasources:6.0\"> <datasources> <drivers> <driver name=\"postgresql\" module=\"org.postgresql\"> <xa-datasource-class>org.postgresql.xa.PGXADataSource</xa-datasource-class> </driver> <driver name=\"h2\" module=\"com.h2database.h2\"> <xa-datasource-class>org.h2.jdbcx.JdbcDataSource</xa-datasource-class> </driver> </drivers> </datasources> </subsystem>", "<subsystem xmlns=\"urn:jboss:domain:datasources:6.0\"> <datasources> <datasource jndi-name=\"java:jboss/datasources/KeycloakDS\" pool-name=\"KeycloakDS\" enabled=\"true\" use-java-context=\"true\"> <connection-url>jdbc:postgresql://localhost/keycloak</connection-url> <driver>postgresql</driver> <pool> <max-pool-size>20</max-pool-size> </pool> <security> <user-name>William</user-name> <password>password</password> </security> </datasource> </datasources> </subsystem>", "<subsystem xmlns=\"urn:jboss:domain:keycloak-server:1.2\"> <spi name=\"connectionsJpa\"> <provider name=\"default\" enabled=\"true\"> <properties> <property name=\"dataSource\" value=\"java:jboss/datasources/KeycloakDS\"/> <property name=\"initializeEmpty\" value=\"false\"/> <property name=\"migrationStrategy\" value=\"manual\"/> <property name=\"migrationExport\" value=\"USD{jboss.home.dir}/keycloak-database-update.sql\"/> </properties> </provider> </spi> </subsystem>", "show server_encoding;", "create database keycloak with encoding 'UTF8';" ]
https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.6/html/server_installation_and_configuration_guide/database
Chapter 3. Application Migration
Chapter 3. Application Migration You can migrate your applications created for an older release of JBoss EAP XP to JBoss EAP XP 5.0. 3.1. MicroProfile application migration MicroProfile 6.1 aligns with the Jakarta EE 10 Core Profile and introduces MicroProfile Telemetry, replacing MicroProfile OpenTracing. MicroProfile 6.1 includes updates to all the major MicroProfile specifications. The following specifications might include API incompatible changes for MicroProfile 6.1: MicroProfile Config MicroProfile Fault Tolerance MicroProfile Health MicroProfile OpenAPI You must update your applications that use these specifications to the latest Jakarta EE 10 specifications. You can update your applications to MicroProfile 6.1 by choosing one of the following methods: Adding the MicroProfile 6.1 dependency to your project's pom.xml file. Using the JBoss EAP XP BOMs to import supported artifacts to the JBoss EAP XP dependency management of your project's pom.xml file. Additional resources MicroProfile 6.1 3.2. Migrate from MicroProfile OpenTracing to OpenTelemetry Tracing MicroProfile OpenTracing is not supported in JBoss EAP XP 5.0 and is replaced by OpenTelemetry tracing. To replace MicroProfile OpenTracing with OpenTelemetry Tracing, follow these steps: Replace the dependency org.eclipse.microprofile.opentracing:microprofile-opentracing-api with io.opentelemetry:opentelemetry-api and io.opentelemetry:opentelemetry-context . Replace the usage of the org.eclipse.microprofile.opentracing Java package with the io.opentelemetry Java package. Such replacement might involve additional changes to classes and methods. Additional resources OpenTelemetry Tracing in JBoss EAP 3.3. Migrate from MicroProfile Metrics to Micrometer MicroProfile Metrics is not supported in JBoss EAP XP 5.0 and is replaced by Micrometer. To replace MicroProfile Metrics with Micrometer, follow these steps: Replace the dependency org.eclipse.microprofile.metrics:microprofile-metrics-api with io.micrometer:micrometer-core . Replace the usage of the org.eclipse.microprofile.metric Java package with the io.micrometer Java package. Such replacement might involve additional changes to classes and methods.
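The following pom.xml fragment is a sketch of the OpenTracing-to-OpenTelemetry dependency swap described above. Version numbers are intentionally omitted on the assumption that your dependency management (for example, a JBoss EAP XP BOM) supplies them; adjust to your project's setup.
<!-- Remove the unsupported MicroProfile OpenTracing API -->
<!--
<dependency>
    <groupId>org.eclipse.microprofile.opentracing</groupId>
    <artifactId>microprofile-opentracing-api</artifactId>
</dependency>
-->
<!-- Add the OpenTelemetry API and context artifacts -->
<dependency>
    <groupId>io.opentelemetry</groupId>
    <artifactId>opentelemetry-api</artifactId>
</dependency>
<dependency>
    <groupId>io.opentelemetry</groupId>
    <artifactId>opentelemetry-context</artifactId>
</dependency>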
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/jboss_eap_xp_5.0_upgrade_and_migration_guide/application_migration_default
Chapter 3. Deploying an overcloud with the Bare Metal Provisioning service
Chapter 3. Deploying an overcloud with the Bare Metal Provisioning service To deploy an overcloud with the Bare Metal Provisioning service (ironic), you must create and configure the bare metal network, and configure the overcloud to enable bare metal provisioning. Create the bare metal network. You can reuse the provisioning network interface on the Controller nodes to create a flat network, or you can create a custom network: Configuring the default flat network Configuring a custom IPv4 provisioning network Configuring a custom IPv6 provisioning network Configure the overcloud to enable bare metal provisioning: Configuring the overcloud to enable bare metal provisioning Note If you use Open Virtual Network (OVN), the Bare Metal Provisioning service is supported only with the DHCP agent defined in the ironic-overcloud.yaml file, neutron-dhcp-agent . The built-in DHCP server on OVN cannot provision bare metal nodes or serve DHCP for the provisioning networks. To enable iPXE chain loading you must set the --dhcp-match tag in dnsmasq, which is not supported by the OVN DHCP server. Prerequisites Your environment meets the minimum requirements. For more information, see Requirements for bare metal provisioning . 3.1. Configuring the default flat network To use the default flat bare metal network, you reuse the provisioning network interface on the Controller nodes to create a bridge for the Bare Metal Provisioning service (ironic). Procedure Log in to the undercloud as the stack user. Source the stackrc file: Modify the /home/stack/templates/nic-configs/controller.yaml file to reuse the provisioning network interface on the Controller nodes, eth1 , to create a bridge for the bare metal network: Note You cannot VLAN tag the bare metal network when you create it by reusing the provisioning network. Add br-baremetal to the NeutronBridgeMappings parameter in your network-environment.yaml file: Add baremetal to the list of networks specified by the NeutronFlatNetworks parameter in your network-environment.yaml file: steps Configuring the overcloud to enable bare metal provisioning 3.2. Configuring a custom IPv4 provisioning network Create a custom IPv4 provisioning network to provision and deploy the overcloud over IPv4. Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: Copy the network_data.yaml file to your environment file directory: Add a new network for overcloud provisioning to your network_data.yaml file: Replace <ipv4_subnet_address> with the IPv4 address of your IPv4 subnet. Replace <ipv4_mask> with the IPv4 network mask for your IPv4 subnet. Replace <ipv4_start_address> and <ipv4_end_address> with the IPv4 range that you want to use for address allocation. Configure IronicApiNetwork and IronicNetwork in your ServiceNetMap configuration to use the new IPv4 provisioning network: Add the new network as an interface to your local Controller NIC configuration file: Copy the roles_data.yaml file to your environment file directory: Add the new network for the controller to your roles_data.yaml file: Include the IronicInspector service in the Ironic role in your roles_data.yaml file, if not already present: steps Configuring the overcloud to enable bare metal provisioning 3.3. Configuring a custom IPv6 provisioning network Create a custom IPv6 provisioning network to provision and deploy the overcloud over IPv6. Procedure Log in to the undercloud as the stack user. 
Source the stackrc file: Copy the network_data.yaml file to your environment file directory: Add a new IPv6 network for overcloud provisioning to your network_data.yaml file: Replace <ipv6_subnet_address> with the IPv6 address of your IPv6 subnet. Replace <ipv6_prefix> with the IPv6 network prefix for your IPv6 subnet. Replace <ipv6_start_address> and <ipv6_end_address> with the IPv6 range that you want to use for address allocation. Replace <ipv6_gw_address> with the IPv6 address of your gateway. Create a new file network_environment_overrides.yaml in your environment file directory: Configure IronicApiNetwork and IronicNetwork in your network_environment_overrides.yaml file to use the new IPv6 provisioning network: Set the IronicIpVersion parameter to 6 : Enable the RabbitIPv6 , MysqlIPv6 , and RedisIPv6 parameters: Add the new network as an interface to your local Controller NIC configuration file: Copy the roles_data.yaml file to your environment file directory: Add the new network for the Controller role to your roles_data.yaml file: Include the IronicInspector service in the Ironic role in your roles_data.yaml file, if not already present: steps Configuring the overcloud to enable bare metal provisioning 3.4. Configuring the overcloud to enable bare metal provisioning Use one of the default templates located in the /usr/share/openstack-tripleo-heat-templates/environments/services directory to deploy the overcloud with the Bare Metal Provisioning service (ironic) enabled: For deployments that use OVS: ironic.yaml For deployments that use OVN: ironic-overcloud.yaml You can create a local environment file to override the default configuration, as required by your deployment. Procedure Create an environment file in your local directory to configure the Bare Metal Provisioning service for your deployment, for example, ironic-overrides.yaml . Optional: Configure the type of cleaning that is performed on the bare metal machines before and between provisioning: Replace <cleaning_type> with one of the following values: full : (Default) Performs a full clean. metadata : Clean only the partition table. This type of cleaning substantially speeds up the cleaning process. However, because the deployment is less secure in a multi-tenant environment, use this option only in a trusted tenant environment. Optional: Add additional drivers to the default drivers: Replace [additional_driver_1] , and optionally all drivers up to [additional_driver_n] , with the additional drivers you want to enable. To enable bare metal introspection, add the following configuration to your local Bare Metal Provisioning service environment file, ironic-overrides.yaml : Replace <ip_range> with the IP ranges for your environments, for example, 192.168.0.100,192.168.0.120 . Replace <ip_address>:<port> with the IP address and port of the web server that hosts the IPA kernel and ramdisk. To use the same images that you use on the undercloud, set the IP address to the undercloud IP address, and the port to 8088 . If you omit this parameter, you must include alternatives on each Controller node. Replace <baremetal_interface> with the bare metal network interface, for example, br-baremetal . Add your new role and custom environment files to the stack with your other environment files and deploy the overcloud: Replace <default_ironic_template> with either ironic.yaml or ironic-overcloud.yaml , depending on the Networking service mechanism driver for your deployment. 
Note The order that you pass your environment files to the openstack overcloud deploy command is important, as the configuration in the later files takes precedence. Therefore, your environment file that enables and configures bare metal provisioning on your overcloud must be passed to the command after any network configuration files. 3.5. Testing the Bare Metal Provisioning service You can use the OpenStack Integration Test Suite to validate your Red Hat OpenStack deployment. For more information, see the OpenStack Integration Test Suite Guide . Additional verification methods for the Bare Metal Provisioning service: Configure the shell to access Identity as the administrative user: Check that the nova-compute service is running on the Controller nodes: If you changed the default ironic drivers, ensure that the required drivers are enabled: Ensure that the ironic endpoints are listed: 3.6. Additional resources Deployment command options in the Director Installation and Usage guide IPv6 Networking for the Overcloud Bare Metal (ironic) Parameters in the Overcloud Parameters guide
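Putting the optional settings from Section 3.4 together, a combined ironic-overrides.yaml might look like the following sketch. The cleaning type, driver list, IP range, IPA image host, and bridge name are illustrative values that you must replace with values for your environment.
parameter_defaults:
  IronicCleaningDiskErase: metadata
  IronicEnabledHardwareTypes: ipmi,idrac,ilo
  IronicInspectorSubnets:
    - ip_range: 192.168.0.100,192.168.0.120
  IPAImageURLs: '["http://192.168.24.1:8088/agent.kernel", "http://192.168.24.1:8088/agent.ramdisk"]'
  IronicInspectorInterface: 'br-baremetal'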
[ "[stack@director ~]USD source ~/stackrc", "network_config: - type: ovs_bridge name: br-baremetal use_dhcp: false members: - type: interface name: eth1 addresses: - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_subnet_cidr }}", "parameter_defaults: NeutronBridgeMappings: datacentre:br-ex,baremetal:br-baremetal", "parameter_defaults: NeutronBridgeMappings: datacentre:br-ex,baremetal:br-baremetal NeutronFlatNetworks: datacentre,baremetal", "source ~/stackrc", "(undercloud) [stack@host01 ~]USD cp /usr/share/openstack-tripleo-heat-templates/network_data.yaml /home/stack/templates/network_data.yaml", "custom network for overcloud provisioning - name: OcProvisioning name_lower: oc_provisioning vip: true vlan: 205 ip_subnet: '<ipv4_subnet_address>/<ipv4_mask>' allocation_pools: [{'start': '<ipv4_start_address>', 'end': '<ipv4_end_address>'}]", "ServiceNetMap: IronicApiNetwork: oc_provisioning IronicNetwork: oc_provisioning", "network_config: - type: vlan vlan_id: get_param: OcProvisioningNetworkVlanID addresses: - ip_netmask: get_param: OcProvisioningIpSubnet", "(undercloud) [stack@host01 ~]USD cp /usr/share/openstack-tripleo-heat-templates/roles_data.yaml /home/stack/templates/roles_data.yaml", "networks: OcProvisioning: subnet: oc_provisioning_subnet", "ServicesDefault: OS::TripleO::Services::IronicInspector", "[stack@director ~]USD source ~/stackrc", "(undercloud) [stack@host01 ~]USD cp /usr/share/openstack-tripleo-heat-templates/network_data.yaml /home/stack/templates/network_data.yaml", "custom network for IPv6 overcloud provisioning - name: OcProvisioningIPv6 vip: true name_lower: oc_provisioning_ipv6 vlan: 10 ipv6: true ipv6_subnet: '<ipv6_subnet_address>/<ipv6_prefix>' ipv6_allocation_pools: [{'start': '<ipv6_start_address>', 'end': '<ipv6_end_address>'}] gateway_ipv6: '<ipv6_gw_address>'", "touch /home/stack/templates/network_environment_overrides.yaml", "ServiceNetMap: IronicApiNetwork: oc_provisioning_ipv6 IronicNetwork: oc_provisioning_ipv6", "parameter_defaults: IronicIpVersion: 6", "parameter_defaults: RabbitIPv6: True MysqlIPv6: True RedisIPv6: True", "network_config: - type: vlan vlan_id: get_param: OcProvisioningIPv6NetworkVlanID addresses: - ip_netmask: get_param: OcProvisioningIPv6IpSubnet", "(undercloud) [stack@host01 ~]USD cp /usr/share/openstack-tripleo-heat-templates/roles_data.yaml /home/stack/templates/roles_data.yaml", "networks: - OcProvisioningIPv6", "ServicesDefault: OS::TripleO::Services::IronicInspector", "parameter_defaults: IronicCleaningDiskErase: <cleaning_type>", "parameter_defaults: IronicEnabledHardwareTypes: ipmi,idrac,ilo,[additional_driver_1],...,[additional_driver_n]", "parameter_defaults: IronicInspectorSubnets: - ip_range: <ip_range> IPAImageURLs: '[\"http://<ip_address>:<port>/agent.kernel\", \"http://<ip_address>:<port>/agent.ramdisk\"]' IronicInspectorInterface: '<baremetal_interface>'", "(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/node-info.yaml -r /home/stack/templates/roles_data.yaml -e /usr/share/openstack-tripleo-heat-templates/network-environment.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/<default_ironic_template> -e /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml -e /home/stack/templates/network_environment_overrides.yaml -n /home/stack/templates/network_data.yaml -e /home/stack/templates/ironic-overrides.yaml", "source ~/overcloudrc", "openstack compute service list -c Binary -c Host -c Status", "openstack 
baremetal driver list", "openstack catalog list" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/bare_metal_provisioning/assembly_deploying-an-overcloud-with-the-bare-metal-provisioning-service
40.5.3. Using opannotate
40.5.3. Using opannotate The opannotate tool tries to match the samples for particular instructions to the corresponding lines in the source code. The generated files show the sample counts for each line in the left margin. opannotate also inserts a comment at the beginning of each function listing the total samples for that function. For this utility to work, the executable must be compiled with GCC's -g option. By default, Red Hat Enterprise Linux packages are not compiled with this option. The general syntax for opannotate is as follows: The directory containing the source code and the executable to be analyzed must be specified. Refer to the opannotate man page for a list of additional command line options.
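A short illustrative sketch follows (the program and directory names are hypothetical, and it assumes OProfile samples have already been collected, for example with opcontrol):
gcc -g -O2 -o myprog src/myprog.c                               # build with debugging information
# ... run the program under OProfile and collect samples ...
opannotate --search-dirs ./src --source ./myprog > myprog-annotated.txt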
[ "opannotate --search-dirs <src-dir> --source <executable>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/analyzing_the_data-using_opannotate
7.6. SSSD Client-side Views
7.6. SSSD Client-side Views SSSD enables you to create a client-side view to specify new values for POSIX user or group attributes. The view takes effect only on the local machine where the overrides are configured. You can configure client-side overrides for all id_provider values, except ipa . If you are using the ipa provider, define ID views centrally in IdM. See the corresponding section in the Linux Domain Identity, Authentication, and Policy Guide . For more information, see the Potential Negative Impact on SSSD Performance section in the Linux Domain Identity, Authentication, and Policy Guide . Note After creating the first override using the sss_override user-add , sss_override group-add , or sss_override user-import command, restart SSSD for the changes to take effect: 7.6.1. Defining a Different Attribute Value for a User Account As an administrator, you configured an existing host to use accounts from LDAP. However, a user's new ID in LDAP is different from the user's ID on the local system. You can configure a client-side view to override the UID instead of changing the permissions on existing files. To override the UID of the user account with UID 6666 : Optional . Display the current UID of the user account: Override the account's UID with 6666 : Wait until the in-memory cache has been expired. To expire it manually: Verify that the new UID is applied: Optional . Display the overrides for the user: For a list of attributes you can override, list the command-line options by adding --help to the command: 7.6.2. Listing All Overrides on a Host As an administrator, you want to list all user and group overrides on a host to verify that the correct attributes are overridden. To list all user overrides: To list all group overrides: 7.6.3. Removing a Local Override You previously created an override for the shell of the user account, that is defined in the global LDAP directory. To remove the override for the account, run: The changes take effect immediately. To remove an override for a group, run: Note When you remove overrides for a user or group, all overrides for this object are removed. 7.6.4. Exporting and Importing Local Views Client-side views are stored in the local SSSD cache. You can export user and group views from the cache to a file to create a backup. For example, when you remove the SSSD cache, you can restore the views later again. To back up user and group views: To restore user and group view:
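As a further sketch (the account name and shell value are illustrative, and you should confirm the available options with sss_override user-add --help), a shell override can be added and verified as follows:
sss_override user-add user_name -s /bin/zsh
systemctl restart sssd        # required only after creating the first override on this host
sss_cache --users             # expire the in-memory cache
getent passwd user_name       # the overridden shell should now be reported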
[ "systemctl restart sssd", "id user uid= 1241400014 (user_name) gid=1241400014(user_name) Groups=1241400014(user_name)", "sss_override user-add user -u 6666", "sss_cache --users", "id user uid= 6666 (user_name) gid=1241400014(user_name) Groups=1241400014(user_name)", "sss_override user-show user [email protected]::6666:::::", "sss_override user-add --help", "sss_override user-find [email protected]::8000::::/bin/zsh: [email protected]::8001::::/bin/bash:", "sss_override group-find [email protected]::7000 [email protected]::7001", "sss_override user-del user", "sss_override group-del group", "sss_override user-export /var/lib/sss/backup/sssd_user_overrides.bak sss_override group-export /var/lib/sss/backup/sssd_group_overrides.bak", "sss_override user-import /var/lib/sss/backup/sssd_user_overrides.bak sss_override group-import /var/lib/sss/backup/sssd_group_overrides.bak" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system-level_authentication_guide/sssd-client-side-views
Chapter 17. Upgrading to OpenShift Data Foundation
Chapter 17. Upgrading to OpenShift Data Foundation 17.1. Overview of the OpenShift Data Foundation update process This chapter helps you to upgrade between the minor releases and z-streams for all Red Hat OpenShift Data Foundation deployments (Internal, Internal-Attached and External). The upgrade process remains the same for all deployments. You can upgrade OpenShift Data Foundation and its components, either between minor releases like 4.16 and 4.17, or between z-stream updates like 4.16.0 and 4.16.1, by enabling automatic updates (if not already done during operator installation) or performing manual updates. When a new z-stream release becomes available, the upgrade process triggers automatically if the update strategy was set to Automatic. Extended Update Support (EUS) EUS to EUS upgrade in OpenShift Data Foundation is sequential and is aligned with the OpenShift upgrade. For more information, see Performing an EUS-to-EUS update and EUS-to-EUS update for layered products and Operators installed through Operator Lifecycle Manager . For an EUS upgrade of OpenShift Container Platform and OpenShift Data Foundation, make sure that OpenShift Data Foundation is upgraded along with OpenShift Container Platform and that compatibility between OpenShift Data Foundation and OpenShift Container Platform is always maintained. Example workflow of EUS upgrade: Pause the worker machine pools. Update OpenShift <4.y> to OpenShift <4.y+1>. Update OpenShift Data Foundation <4.y> to OpenShift Data Foundation <4.y+1>. Update OpenShift <4.y+1> to OpenShift <4.y+2>. Update to OpenShift Data Foundation <4.y+2>. Unpause the worker machine pools. Note You can update to ODF <4.y+2> either before or after worker machine pools are unpaused. Important When you update OpenShift Data Foundation in external mode, make sure that the Red Hat Ceph Storage and OpenShift Data Foundation versions are compatible. For more information about supported Red Hat Ceph Storage versions in external mode, refer to Red Hat OpenShift Data Foundation Supportability and Interoperability Checker . Provide the required OpenShift Data Foundation version in the checker to see the supported Red Hat Ceph Storage version corresponding to the version in use. You also need to upgrade the different parts of Red Hat OpenShift Data Foundation in the following order for both internal and external mode deployments: Update OpenShift Container Platform according to the Updating clusters documentation for OpenShift Container Platform. Update Red Hat OpenShift Data Foundation. To prepare a disconnected environment for updates , see Operators guide to using Operator Lifecycle Manager on restricted networks to be able to update OpenShift Data Foundation as well as Local Storage Operator when in use. For updating between minor releases , see Updating Red Hat OpenShift Data Foundation 4.16 to 4.17 . For updating between z-stream releases , see Updating Red Hat OpenShift Data Foundation 4.17.x to 4.17.y . For updating external mode deployments , you must also perform the steps from section Updating the Red Hat OpenShift Data Foundation external secret . If you use local storage, then update the Local Storage operator . See Checking for Local Storage Operator deployments if you are unsure. Important If you have an existing setup of OpenShift Data Foundation 4.12 with disaster recovery (DR) enabled, ensure that you update all your clusters in the environment at the same time and avoid updating a single cluster. This is to avoid any potential issues and maintain best compatibility. 
It is also important to maintain consistency across all OpenShift Data Foundation DR instances. Update considerations Review the following important considerations before you begin. The Red Hat OpenShift Container Platform version is the same as Red Hat OpenShift Data Foundation. See the Interoperability Matrix for more information about supported combinations of OpenShift Container Platform and Red Hat OpenShift Data Foundation. To know whether your cluster was deployed in internal or external mode, refer to the knowledgebase article on How to determine if ODF cluster has storage in internal or external mode . The Local Storage Operator is fully supported only when the Local Storage Operator version matches the Red Hat OpenShift Container Platform version. Important The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). This means if NooBaa DB PVC gets corrupted and we are unable to recover it, can result in total data loss of applicative data residing on the Multicloud Object Gateway. Because of this, Red Hat recommends taking a backup of NooBaa DB PVC regularly. If NooBaa DB fails and cannot be recovered, then you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgabase article . 17.2. Updating Red Hat OpenShift Data Foundation 4.16 to 4.17 This chapter helps you to upgrade between the minor releases for all Red Hat OpenShift Data Foundation deployments (Internal, Internal-Attached and External). The upgrade process remains the same for all deployments. The Only difference is what gets upgraded and what's not. For Internal and Internal-attached deployments, upgrading OpenShift Data Foundation upgrades all OpenShift Data Foundation services including the backend Red Hat Ceph Storage (RHCS) cluster. For External mode deployments, upgrading OpenShift Data Foundation only upgrades the OpenShift Data Foundation service while the backend Ceph storage cluster remains untouched and needs to be upgraded separately. You must upgrade Red Hat Ceph Storage along with OpenShift Data Foundation to get new feature support, security fixes, and other bug fixes. As there is no dependency on RHCS upgrade, you can upgrade the OpenShift Data Foundation operator first followed by RHCS upgrade or vice-versa. For more information about RHCS releases, see the knowledgebase solution, solution . Important Upgrading to 4.17 directly from any version older than 4.16 is not supported. Prerequisites Ensure that the OpenShift Container Platform cluster has been updated to the latest stable release of version 4.17.X, see Updating Clusters . Ensure that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to Storage Data Foundation Storage Systems tab and then click on the storage system name. Check for the green tick on the status card of both Overview - Block and File and Object tabs. Green tick indicates that the storage cluster , object service and data resiliency are all healthy. Ensure that all OpenShift Data Foundation Pods, including the operator pods, are in Running state in the openshift-storage namespace. To view the state of the pods, on the OpenShift Web Console, click Workloads Pods . Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. 
Ensure that you have sufficient time to complete the OpenShift Data Foundation update process, as the update time varies depending on the number of OSDs that run in the cluster. Prerequisite relevant only for OpenShift Data Foundation deployments on AWS using AWS Security Token Service (STS) Add another entry in the trust policy for noobaa-core account as follows: Log into AWS web console where the AWS role resides using http://console.aws.amazon.com/ . Enter the IAM management tool and click Roles . Find the name of the role created for AWS STS to support Multicloud Object Gateway (MCG) authentication using the following command in OpenShift CLI: Search for the role name that you obtained from the step in the tool and click on the role name. Under the role summary, click Trust relationships . In the Trusted entities tab, click Edit trust policy on the right. Under the "Action": "sts:AssumeRoleWithWebIdentity" field, there are two fields to enable access for two NooBaa service accounts noobaa and noobaa-endpoint . Add another entry for the core pod's new service account name, system:serviceaccount:openshift-storage:noobaa-core . Click Update policy at the bottom right of the page. The update might take about 5 minutes to get in place. Procedure On the OpenShift Web Console, navigate to Operators Installed Operators . Select openshift-storage project. Click the OpenShift Data Foundation operator name. Click the Subscription tab and click the link under Update Channel . Select the stable-4.17 update channel and Save it. If the Upgrade status shows requires approval , click on requires approval . On the Install Plan Details page, click Preview Install Plan . Review the install plan and click Approve . Wait for the Status to change from Unknown to Created . Navigate to Operators Installed Operators . Select the openshift-storage project. Wait for the OpenShift Data Foundation Operator Status to change to Up to date . After the operator is successfully upgraded, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. Note After upgrading, if your cluster has five or more nodes, racks, or rooms, and when there are five or more number of failure domains present in the deployment, you can configure Ceph monitor counts based on the number of racks or zones. An alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate the option to increase the number of Ceph monitor counts. You can use the Configure option in the alert to configure the Ceph monitor counts. For more information, see Resolving low Ceph monitor count alert . Verification steps Check the Version below the OpenShift Data Foundation name and check the operator status. Navigate to Operators Installed Operators and select the openshift-storage project. When the upgrade completes, the version updates to a new version number for OpenShift Data Foundation and status changes to Succeeded with a green tick. Verify that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to Storage Data Foundation Storage Systems tab and then click on the storage system name. Check for the green tick on the status card of Overview- Block and File and Object tabs. Green tick indicates that the storage cluster, object service and data resiliency is healthy. If verification steps fail, contact Red Hat Support . 
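For reference, the trust policy change described in the AWS STS prerequisite earlier in this section might look like the following sketch once the noobaa-core entry is added. The account ID, role, and OIDC provider values are placeholders; only the service account names come from the procedure above.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::123456789101:oidc-provider/<oidc_provider_endpoint>"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "<oidc_provider_endpoint>:sub": [
            "system:serviceaccount:openshift-storage:noobaa",
            "system:serviceaccount:openshift-storage:noobaa-endpoint",
            "system:serviceaccount:openshift-storage:noobaa-core"
          ]
        }
      }
    }
  ]
}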
Important After updating external mode deployments, you must also update the external secret. For instructions, see Updating the OpenShift Data Foundation external secret . Additional Resources If you face any issues while updating OpenShift Data Foundation, see the Commonly required logs for troubleshooting section in the Troubleshooting guide . 17.3. Updating Red Hat OpenShift Data Foundation 4.17.x to 4.17.y This chapter helps you to upgrade between the z-stream release for all Red Hat OpenShift Data Foundation deployments (Internal, Internal-Attached and External). The upgrade process remains the same for all deployments. The Only difference is what gets upgraded and what's not. For Internal and Internal-attached deployments, upgrading OpenShift Data Foundation upgrades all OpenShift Data Foundation services including the backend Red Hat Ceph Storage (RHCS) cluster. For External mode deployments, upgrading OpenShift Data Foundation only upgrades the OpenShift Data Foundation service while the backend Ceph storage cluster remains untouched and needs to be upgraded separately. Hence, we recommend upgrading RHCS along with OpenShift Data Foundation in order to get new feature support, security fixes, and other bug fixes. Since we do not have a strong dependency on RHCS upgrade, you can upgrade the OpenShift Data Foundation operator first followed by RHCS upgrade or vice-versa. See solution to know more about RHCS releases. When a new z-stream release becomes available, the upgrade process triggers automatically if the update strategy was set to Automatic . If the update strategy is set to Manual then use the following procedure. Prerequisites Ensure that the OpenShift Container Platform cluster has been updated to the latest stable release of version 4.17.X, see Updating Clusters . Ensure that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to Storage Data Foundation Storage Systems tab and then click on the storage system name. Check for the green tick on the status card of Overview - Block and File and Object tabs. Green tick indicates that the storage cluster, object service and data resiliency is healthy. Ensure that all OpenShift Data Foundation Pods, including the operator pods, are in Running state in the openshift-storage namespace. To view the state of the pods, on the OpenShift Web Console, click Workloads Pods . Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Ensure that you have sufficient time to complete the OpenShift Data Foundation update process, as the update time varies depending on the number of OSDs that run in the cluster. Procedure On the OpenShift Web Console, navigate to Operators Installed Operators . Select openshift-storage project. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Click the OpenShift Data Foundation operator name. Click the Subscription tab. If the Upgrade Status shows require approval , click on requires approval link. On the InstallPlan Details page, click Preview Install Plan . Review the install plan and click Approve . Wait for the Status to change from Unknown to Created . After the operator is successfully upgraded, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. 
Verification steps Check the Version below the OpenShift Data Foundation name and check the operator status. Navigate to Operators Installed Operators and select the openshift-storage project. When the upgrade completes, the version updates to a new version number for OpenShift Data Foundation and status changes to Succeeded with a green tick. Verify that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to Storage Data Foundation Storage Systems tab and then click on the storage system name. Check for the green tick on the status card of Overview - Block and File and Object tabs. Green tick indicates that the storage cluster, object service and data resiliency is healthy. If verification steps fail, contact Red Hat Support . 17.4. Changing the update approval strategy To ensure that the storage system gets updated automatically when a new update is available in the same channel, we recommend keeping the update approval strategy set to Automatic . Changing the update approval strategy to Manual requires manual approval for each upgrade. Procedure Navigate to Operators Installed Operators . Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Click the OpenShift Data Foundation operator name. Go to the Subscription tab. Click the pencil icon to change the Update approval . Select the update approval strategy and click Save . Verification steps Verify that the Update approval shows the newly selected approval strategy below it.
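The update approval strategy can also be changed from the command line by patching the operator subscription. The following is only a sketch: odf-operator is a placeholder for the subscription name, which you should confirm from the first command before patching. Example

# Find the subscription name for the OpenShift Data Foundation operator
oc get subscription -n openshift-storage

# Switch the update approval strategy to Manual (use Automatic to switch back)
oc patch subscription odf-operator -n openshift-storage \
  --type merge -p '{"spec":{"installPlanApproval":"Manual"}}'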
[ "oc get deployment noobaa-operator -o yaml -n openshift-storage | grep ROLEARN -A1 value: arn:aws:iam::123456789101:role/your-role-name-here" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/upgrading-your-cluster_osp
Chapter 3. Monitoring a Ceph storage cluster
Chapter 3. Monitoring a Ceph storage cluster As a storage administrator, you can monitor the overall health of the Red Hat Ceph Storage cluster, along with monitoring the health of the individual components of Ceph. Once you have a running Red Hat Ceph Storage cluster, you might begin monitoring the storage cluster to ensure that the Ceph Monitor and Ceph OSD daemons are running, at a high-level. Ceph storage cluster clients connect to a Ceph Monitor and receive the latest version of the storage cluster map before they can read and write data to the Ceph pools within the storage cluster. So the monitor cluster must have agreement on the state of the cluster before Ceph clients can read and write data. Ceph OSDs must peer the placement groups on the primary OSD with the copies of the placement groups on secondary OSDs. If faults arise, peering will reflect something other than the active + clean state. 3.1. High-level monitoring of a Ceph storage cluster As a storage administrator, you can monitor the health of the Ceph daemons to ensure that they are up and running. High level monitoring also involves checking the storage cluster capacity to ensure that the storage cluster does not exceed its full ratio . The Red Hat Ceph Storage Dashboard is the most common way to conduct high-level monitoring. However, you can also use the command-line interface, the Ceph admin socket or the Ceph API to monitor the storage cluster. 3.1.1. Checking the storage cluster health After you start the Ceph storage cluster, and before you start reading or writing data, check the storage cluster's health first. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Log into the Cephadm shell: Example You can check on the health of the Ceph storage cluster with the following command: Example You can check the status of the Ceph storage cluster by running ceph status command: Example The output provides the following information: Cluster ID Cluster health status The monitor map epoch and the status of the monitor quorum. The OSD map epoch and the status of OSDs. The status of Ceph Managers. The status of Object Gateways. The placement group map version. The number of placement groups and pools. The notional amount of data stored and the number of objects stored. The total amount of data stored. The IO client operations. An update on the upgrade process if the cluster is upgrading. Upon starting the Ceph cluster, you will likely encounter a health warning such as HEALTH_WARN XXX num placement groups stale . Wait a few moments and check it again. When the storage cluster is ready, ceph health should return a message such as HEALTH_OK . At that point, it is okay to begin using the cluster. 3.1.2. Watching storage cluster events You can watch events that are happening with the Ceph storage cluster using the command-line interface. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Log into the Cephadm shell: Example To watch the cluster's ongoing events, run the following command: Example 3.1.3. How Ceph calculates data usage The used value reflects the actual amount of raw storage used. The xxx GB / xxx GB value means the amount available, the lesser of the two numbers, of the overall storage capacity of the cluster. The notional number reflects the size of the stored data before it is replicated, cloned or snapshotted. 
Therefore, the amount of data actually stored typically exceeds the notional amount stored, because Ceph creates replicas of the data and may also use storage capacity for cloning and snapshotting. 3.1.4. Understanding the storage cluster's usage stats To check a cluster's data usage and data distribution among pools, use the df option. It is similar to the Linux df command. The SIZE / AVAIL / RAW USED values in the ceph df and ceph status command output are different if some OSDs are marked OUT of the cluster compared to when all OSDs are IN . The SIZE / AVAIL / RAW USED is calculated from the sum of SIZE (OSD disk size), RAW USE (total used space on disk), and AVAIL of all OSDs which are in the IN state. You can see the total of SIZE / AVAIL / RAW USED for all OSDs in the ceph osd df tree command output. Example The ceph df detail command gives more details about other pool statistics such as quota objects, quota bytes, used compression, and under compression. The RAW STORAGE section of the output provides an overview of the amount of storage the storage cluster manages for data. CLASS: The class of OSD device. SIZE: The amount of storage capacity managed by the storage cluster. In the above example, if the SIZE is 90 GiB, it is the total size without the replication factor, which is three by default. The total available capacity with the replication factor is 90 GiB/3 = 30 GiB. Based on the full ratio, which is 0.85 by default, the maximum available space is 30 GiB * 0.85 = 25.5 GiB. AVAIL: The amount of free space available in the storage cluster. In the above example, if the SIZE is 90 GiB and the USED space is 6 GiB, then the AVAIL space is 84 GiB. The total available space with the replication factor, which is three by default, is 84 GiB/3 = 28 GiB. USED: The amount of raw storage consumed by user data. In the above example, 100 MiB is the total space consumed after considering the replication factor, while the actual amount of stored data is 33 MiB. RAW USED: The amount of raw storage consumed by user data, internal overhead, or reserved capacity. % RAW USED: The percentage of RAW USED . Use this number in conjunction with the full ratio and near full ratio to ensure that you are not reaching the storage cluster's capacity. The POOLS section of the output provides a list of pools and the notional usage of each pool. The output from this section DOES NOT reflect replicas, clones or snapshots. For example, if you store an object with 1 MB of data, the notional usage will be 1 MB, but the actual usage may be 3 MB or more depending on the number of replicas (for example, size = 3 ), clones and snapshots. POOL: The name of the pool. ID: The pool ID. STORED: The actual amount of data stored by the user in the pool. This value changes based on the raw usage data, which depends on the (k+M)/K values, the number of object copies, and the number of objects degraded at the time of the pool stats calculation. OBJECTS: The notional number of objects stored per pool. It is STORED size * replication factor. USED: The notional amount of data stored in kilobytes, unless the number appends M for megabytes or G for gigabytes. %USED: The notional percentage of storage used per pool. MAX AVAIL: An estimate of the notional amount of data that can be written to this pool. It is the amount of data that can be used before the first OSD becomes full. It considers the projected distribution of data across disks from the CRUSH map and uses the first OSD to fill up as the target. 
In the above example, MAX AVAIL is 153.85 MB without considering the replication factor, which is three by default. See the Red Hat Knowledgebase article titled ceph df MAX AVAIL is incorrect for simple replicated pool to calculate the value of MAX AVAIL . QUOTA OBJECTS: The number of quota objects. QUOTA BYTES: The number of bytes in the quota objects. USED COMPR: The amount of space allocated for compressed data. This includes the compressed data, allocation, replication, and erasure coding overhead. UNDER COMPR: The amount of data passed through compression and beneficial enough to be stored in a compressed form. Note The numbers in the POOLS section are notional. They are not inclusive of the number of replicas, snapshots or clones. As a result, the sum of the USED and %USED amounts will not add up to the RAW USED and %RAW USED amounts in the GLOBAL section of the output. Note The MAX AVAIL value is a complicated function of the replication or erasure code used, the CRUSH rule that maps storage to devices, the utilization of those devices, and the configured mon_osd_full_ratio . Additional Resources See How Ceph calculates data usage for details. See Understanding the OSD usage stats for details. 3.1.5. Understanding the OSD usage stats Use the ceph osd df command to view OSD utilization stats. Example ID: The name of the OSD. CLASS: The type of devices the OSD uses. WEIGHT: The weight of the OSD in the CRUSH map. REWEIGHT: The default reweight value. SIZE: The overall storage capacity of the OSD. USE: The OSD capacity. DATA: The amount of OSD capacity that is used by user data. OMAP: An estimated value of the bluefs storage that is being used to store object map ( omap ) data (key value pairs stored in rocksdb ). META: The bluefs space allocated, or the value set in the bluestore_bluefs_min parameter, whichever is larger, for internal metadata, which is calculated as the total space allocated in bluefs minus the estimated omap data size. AVAIL: The amount of free space available on the OSD. %USE: The notional percentage of storage used by the OSD. VAR: The variation above or below average utilization. PGS: The number of placement groups in the OSD. MIN/MAX VAR: The minimum and maximum variation across all OSDs. Additional Resources See How Ceph calculates data usage for details. See Understanding the OSD usage stats for details. See CRUSH Weights in Red Hat Ceph Storage Storage Strategies Guide for details. 3.1.6. Checking the storage cluster status You can check the status of the Red Hat Ceph Storage cluster from the command-line interface. The status sub command or the -s argument will display the current status of the storage cluster. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Log into the Cephadm shell: Example To check a storage cluster's status, execute the following: Example Or Example In interactive mode, type ceph and press Enter : Example 3.1.7. Checking the Ceph Monitor status If the storage cluster has multiple Ceph Monitors, which is a requirement for a production Red Hat Ceph Storage cluster, then you can check the Ceph Monitor quorum status after starting the storage cluster, and before doing any reading or writing of data. A quorum must be present when multiple Ceph Monitors are running. Check the Ceph Monitor status periodically to ensure that they are running. 
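As a quick illustration of such a periodic check, the following commands give a one-line summary of the monitors and the current quorum; they are shown here only as a sketch, and the full procedure follows below. Example

# Summary of monitors, their ranks, and the current quorum
ceph mon stat

# Detailed quorum status, useful for scripted health checks
ceph quorum_status -f json-pretty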
If there is a problem with the Ceph Monitor that prevents an agreement on the state of the storage cluster, the fault can prevent Ceph clients from reading and writing data. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Log into the Cephadm shell: Example To display the Ceph Monitor map, execute the following: Example or Example To check the quorum status for the storage cluster, execute the following: Ceph returns the quorum status. Example 3.1.8. Using the Ceph administration socket Use the administration socket to interact with a given daemon directly by using a UNIX socket file. For example, the socket enables you to: List the Ceph configuration at runtime Set configuration values at runtime directly without relying on Monitors. This is useful when Monitors are down . Dump historic operations Dump the operation priority queue state Dump operations without rebooting Dump performance counters In addition, using the socket is helpful when troubleshooting problems related to Ceph Monitors or OSDs. In any case, if the daemon is not running, the following error is returned when you attempt to use the administration socket: Important The administration socket is only available while a daemon is running. When you shut down the daemon properly, the administration socket is removed. However, if the daemon terminates unexpectedly, the administration socket might persist. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Log into the Cephadm shell: Example To use the socket: Syntax Replace MONITOR_ID with the ID of the daemon and COMMAND with the command to run. Use help to list the available commands for a given daemon. To view the status of a Ceph Monitor: Example Example Alternatively, specify the Ceph daemon by using its socket file: Syntax To view the status of a Ceph OSD named osd.0 on a specific host: Example Note You can use help instead of status for the various options that are available for the specific daemon. To list all socket files for the Ceph processes: Example Additional Resources See the Red Hat Ceph Storage Troubleshooting Guide for more information. 3.1.9. Understanding the Ceph OSD status A Ceph OSD's status is either in the storage cluster, or out of the storage cluster. It is either up and running, or it is down and not running. If a Ceph OSD is up , it can be either in the storage cluster, where data can be read and written, or it is out of the storage cluster. If it was in the storage cluster and recently moved out of the storage cluster, Ceph starts migrating placement groups to other Ceph OSDs. If a Ceph OSD is out of the storage cluster, CRUSH will not assign placement groups to the Ceph OSD. If a Ceph OSD is down , it should also be out . Note If a Ceph OSD is down and in , there is a problem, and the storage cluster will not be in a healthy state. If you execute a command such as ceph health , ceph -s or ceph -w , you might notice that the storage cluster does not always echo back HEALTH OK . Do not panic. With respect to Ceph OSDs, you can expect that the storage cluster will NOT echo HEALTH OK in a few expected circumstances: You have not started the storage cluster yet, and it is not responding. You have just started or restarted the storage cluster, and it is not ready yet, because the placement groups are getting created and the Ceph OSDs are in the process of peering. You just added or removed a Ceph OSD. You just modified the storage cluster map. 
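When the cluster reports something other than HEALTH OK for one of these transient reasons, a quick way to see exactly which health checks are firing is shown in the following sketch, which reuses commands from this guide. Example

# Show the specific checks behind a HEALTH_WARN or HEALTH_ERR state
ceph health detail

# Watch ongoing cluster events while the state settles
ceph -w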
An important aspect of monitoring Ceph OSDs is to ensure that when the storage cluster is up and running, all Ceph OSDs that are in the storage cluster are up and running, too. To see if all OSDs are running, execute: Example or Example The result should tell you the map epoch, eNNNN , the total number of OSDs, x , how many, y , are up , and how many, z , are in : If the number of Ceph OSDs that are in the storage cluster is greater than the number of Ceph OSDs that are up , execute the following command to identify the ceph-osd daemons that are not running: Example Tip The ability to search through a well-designed CRUSH hierarchy can help you troubleshoot the storage cluster by identifying the physical locations faster. If a Ceph OSD is down , connect to the node and start it. You can use Red Hat Storage Console to restart the Ceph OSD daemon, or you can use the command line. Syntax Example Additional Resources See the Red Hat Ceph Storage Dashboard Guide for more details. 3.2. Low-level monitoring of a Ceph storage cluster As a storage administrator, you can monitor the health of a Red Hat Ceph Storage cluster from a low-level perspective. Low-level monitoring typically involves ensuring that Ceph OSDs are peering properly. When peering faults occur, placement groups operate in a degraded state. This degraded state can be the result of many different things, such as hardware failure, a hung or crashed Ceph daemon, network latency, or a complete site outage. 3.2.1. Monitoring Placement Group Sets When CRUSH assigns placement groups to Ceph OSDs, it looks at the number of replicas for the pool and assigns the placement group to Ceph OSDs such that each replica of the placement group gets assigned to a different Ceph OSD. For example, if the pool requires three replicas of a placement group, CRUSH may assign them to osd.1 , osd.2 and osd.3 respectively. CRUSH actually seeks a pseudo-random placement that will take into account failure domains you set in the CRUSH map, so you will rarely see placement groups assigned to nearest neighbor Ceph OSDs in a large cluster. We refer to the set of Ceph OSDs that should contain the replicas of a particular placement group as the Acting Set . In some cases, an OSD in the Acting Set is down or otherwise not able to service requests for objects in the placement group. When these situations arise, do not panic. Common examples include: You added or removed an OSD. Then, CRUSH reassigned the placement group to other Ceph OSDs, thereby changing the composition of the acting set and spawning the migration of data with a "backfill" process. A Ceph OSD was down , was restarted and is now recovering . A Ceph OSD in the acting set is down or unable to service requests, and another Ceph OSD has temporarily assumed its duties. Ceph processes a client request using the Up Set , which is the set of Ceph OSDs that actually handle the requests. In most cases, the up set and the Acting Set are virtually identical. When they are not, it can indicate that Ceph is migrating data, a Ceph OSD is recovering, or that there is a problem, that is, Ceph usually echoes a HEALTH WARN state with a "stuck stale" message in such scenarios. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. 
Procedure Log into the Cephadm shell: Example To retrieve a list of placement groups: Example View which Ceph OSDs are in the Acting Set or in the Up Set for a given placement group: Syntax Example Note If the Up Set and Acting Set do not match, this may be an indicator that the storage cluster is rebalancing itself or of a potential problem with the storage cluster. 3.2.2. Ceph OSD peering Before you can write data to a placement group, it must be in an active state, and it should be in a clean state. For Ceph to determine the current state of a placement group, the primary OSD of the placement group, that is, the first OSD in the acting set, peers with the secondary and tertiary OSDs to establish agreement on the current state of the placement group. The following figure assumes a pool with three replicas of the PG. Figure 3.1. Peering 3.2.3. Placement Group States If you execute a command such as ceph health , ceph -s or ceph -w , you may notice that the cluster does not always echo back HEALTH OK . After you check to see if the OSDs are running, you should also check placement group states. You should expect that the cluster will NOT echo HEALTH OK in a number of placement group peering-related circumstances: You have just created a pool and placement groups haven't peered yet. The placement groups are recovering. You have just added an OSD to or removed an OSD from the cluster. You have just modified the CRUSH map and the placement groups are migrating. There is inconsistent data in different replicas of a placement group. Ceph is scrubbing a placement group's replicas. Ceph doesn't have enough storage capacity to complete backfilling operations. If one of the foregoing circumstances causes Ceph to echo HEALTH WARN , don't panic. In many cases, the cluster will recover on its own. In some cases, you may need to take action. An important aspect of monitoring placement groups is to ensure that when the cluster is up and running, all placement groups are active , and preferably in the clean state. To see the status of all placement groups, execute: Example The result should tell you the placement group map version, vNNNNNN , the total number of placement groups, x , and how many placement groups, y , are in a particular state such as active+clean : Note It is common for Ceph to report multiple states for placement groups. Snapshot Trimming PG States When snapshots exist, two additional PG states will be reported. snaptrim : The PGs are currently being trimmed. snaptrim_wait : The PGs are waiting to be trimmed. Example Output: In addition to the placement group states, Ceph will also echo back the amount of data used, aa , the amount of storage capacity remaining, bb , and the total storage capacity for the placement group. These numbers can be important in a few cases: You are reaching the near full ratio or full ratio . Your data isn't getting distributed across the cluster due to an error in the CRUSH configuration. Placement Group IDs Placement group IDs consist of the pool number, and not the pool name, followed by a period (.) and the placement group ID, which is a hexadecimal number. You can view pool numbers and their names from the output of ceph osd lspools . The default pool names data , metadata and rbd correspond to pool numbers 0 , 1 and 2 respectively. 
A fully qualified placement group ID has the following form: Syntax Example output: To retrieve a list of placement groups: Example To format the output in JSON format and save it to a file: Syntax Example Query a particular placement group: Syntax Example Additional Resources See the Object Storage Daemon (OSD) configuration options section in the Red Hat Ceph Storage Configuration Guide for more details on the snapshot trimming settings. 3.2.4. Placement Group creating state When you create a pool, it will create the number of placement groups you specified. Ceph will echo creating when it is creating one or more placement groups. Once they are created, the OSDs that are part of a placement group's Acting Set will peer. Once peering is complete, the placement group status should be active+clean , which means a Ceph client can begin writing to the placement group. 3.2.5. Placement group peering state When Ceph is Peering a placement group, Ceph is bringing the OSDs that store the replicas of the placement group into agreement about the state of the objects and metadata in the placement group. When Ceph completes peering, this means that the OSDs that store the placement group agree about the current state of the placement group. However, completion of the peering process does NOT mean that each replica has the latest contents. Authoritative History Ceph will NOT acknowledge a write operation to a client until all OSDs of the acting set persist the write operation. This practice ensures that at least one member of the acting set will have a record of every acknowledged write operation since the last successful peering operation. With an accurate record of each acknowledged write operation, Ceph can construct and disseminate a new authoritative history of the placement group: a complete and fully ordered set of operations that, if performed, would bring an OSD's copy of a placement group up to date. 3.2.6. Placement group active state Once Ceph completes the peering process, a placement group may become active . The active state means that the data in the placement group is generally available in the primary placement group and the replicas for read and write operations. 3.2.7. Placement Group clean state When a placement group is in the clean state, the primary OSD and the replica OSDs have successfully peered and there are no stray replicas for the placement group. Ceph replicated all objects in the placement group the correct number of times. 3.2.8. Placement Group degraded state When a client writes an object to the primary OSD, the primary OSD is responsible for writing the replicas to the replica OSDs. After the primary OSD writes the object to storage, the placement group will remain in a degraded state until the primary OSD has received an acknowledgement from the replica OSDs that Ceph created the replica objects successfully. The reason a placement group can be active+degraded is that an OSD may be active even though it doesn't hold all of the objects yet. If an OSD goes down , Ceph marks each placement group assigned to the OSD as degraded . The Ceph OSDs must peer again when the Ceph OSD comes back online. However, a client can still write a new object to a degraded placement group if it is active . If an OSD is down and the degraded condition persists, Ceph may mark the down OSD as out of the cluster and remap the data from the down OSD to another OSD. 
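To observe this behavior on a live cluster, you can list the placement groups that are currently degraded. The following is a sketch; degraded is a placement group state filter accepted by both commands, and dump_stuck is covered in more detail in Identifying stuck Placement Groups. Example

# List placement groups currently in the degraded state
ceph pg ls degraded

# Alternatively, show placement groups that are stuck in the degraded state
ceph pg dump_stuck degraded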
The time between being marked down and being marked out is controlled by mon_osd_down_out_interval , which is set to 600 seconds by default. A placement group can also be degraded because Ceph cannot find one or more objects that Ceph thinks should be in the placement group. While you cannot read or write to unfound objects, you can still access all of the other objects in the degraded placement group. For example, consider nine OSDs in a three-way replica pool. If OSD number 9 goes down, the PGs assigned to OSD 9 go into a degraded state. If OSD 9 does not recover, it goes out of the storage cluster and the storage cluster rebalances. In that scenario, the PGs are degraded and then recover to an active state. 3.2.9. Placement Group recovering state Ceph was designed for fault-tolerance at a scale where hardware and software problems are ongoing. When an OSD goes down , its contents may fall behind the current state of other replicas in the placement groups. When the OSD is back up , the contents of the placement groups must be updated to reflect the current state. During that time period, the OSD may reflect a recovering state. Recovery is not always trivial, because a hardware failure might cause a cascading failure of multiple Ceph OSDs. For example, a network switch for a rack or cabinet may fail, which can cause the OSDs of a number of host machines to fall behind the current state of the storage cluster. Each one of the OSDs must recover once the fault is resolved. Ceph provides a number of settings to balance the resource contention between new service requests and the need to recover data objects and restore the placement groups to the current state. The osd recovery delay start setting allows an OSD to restart, re-peer and even process some replay requests before starting the recovery process. The osd recovery threads setting limits the number of threads for the recovery process, by default one thread. The osd recovery thread timeout sets a thread timeout, because multiple Ceph OSDs can fail, restart and re-peer at staggered rates. The osd recovery max active setting limits the number of recovery requests a Ceph OSD works on simultaneously to prevent the Ceph OSD from failing to serve requests. The osd recovery max chunk setting limits the size of the recovered data chunks to prevent network congestion. 3.2.10. Back fill state When a new Ceph OSD joins the storage cluster, CRUSH will reassign placement groups from OSDs in the cluster to the newly added Ceph OSD. Forcing the new OSD to accept the reassigned placement groups immediately can put excessive load on the new Ceph OSD. Backfilling the OSD with the placement groups allows this process to begin in the background. Once backfilling is complete, the new OSD will begin serving requests when it is ready. During the backfill operations, you might see one of several states: backfill_wait indicates that a backfill operation is pending, but is not underway yet. backfill indicates that a backfill operation is underway. backfill_too_full indicates that a backfill operation was requested, but could not be completed due to insufficient storage capacity. When a placement group cannot be backfilled, it can be considered incomplete . Ceph provides a number of settings to manage the load spike associated with reassigning placement groups to a Ceph OSD, especially a new Ceph OSD. By default, osd_max_backfills sets the maximum number of concurrent backfills to or from a Ceph OSD to 10. 
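For illustration, this setting can be inspected and adjusted centrally with the ceph config commands; the value 1 below is only an example of throttling backfill during maintenance, not a recommended default. Example

# Inspect the current default for concurrent backfills
ceph config get osd osd_max_backfills

# Temporarily throttle backfill activity (example value only)
ceph config set osd osd_max_backfills 1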
The osd backfill full ratio enables a Ceph OSD to refuse a backfill request if the OSD is approaching its full ratio, by default 85%. If an OSD refuses a backfill request, the osd backfill retry interval enables an OSD to retry the request, by default after 10 seconds. OSDs can also set osd backfill scan min and osd backfill scan max to manage scan intervals, by default 64 and 512. For some workloads, it is beneficial to avoid regular recovery entirely and use backfill instead. Since backfilling occurs in the background, this allows I/O to proceed on the objects in the OSD. You can force a backfill rather than a recovery by setting the osd_min_pg_log_entries option to 1 , and setting the osd_max_pg_log_entries option to 2 . Contact your Red Hat Support account team for details on when this situation is appropriate for your workload. 3.2.11. Placement Group remapped state When the Acting Set that services a placement group changes, the data migrates from the old acting set to the new acting set. It may take some time for a new primary OSD to service requests. So it may ask the old primary to continue to service requests until the placement group migration is complete. Once data migration completes, the mapping uses the primary OSD of the new acting set. 3.2.12. Placement Group stale state While Ceph uses heartbeats to ensure that hosts and daemons are running, the ceph-osd daemons may also get into a stuck state where they aren't reporting statistics in a timely manner. For example, a temporary network fault. By default, OSD daemons report their placement group, up thru, boot and failure statistics every half second, that is, 0.5 , which is more frequent than the heartbeat thresholds. If the Primary OSD of a placement group's acting set fails to report to the monitor or if other OSDs have reported the primary OSD down , the monitors will mark the placement group stale . When you start the storage cluster, it is common to see the stale state until the peering process completes. After the storage cluster has been running for awhile, seeing placement groups in the stale state indicates that the primary OSD for those placement groups is down or not reporting placement group statistics to the monitor. 3.2.13. Placement Group misplaced state There are some temporary backfilling scenarios where a PG gets mapped temporarily to an OSD. When that temporary situation should no longer be the case, the PGs might still reside in the temporary location and not in the proper location. In which case, they are said to be misplaced . That's because the correct number of extra copies actually exist, but one or more copies is in the wrong place. For example, there are 3 OSDs: 0,1,2 and all PGs map to some permutation of those three. If you add another OSD (OSD 3), some PGs will now map to OSD 3 instead of one of the others. However, until OSD 3 is backfilled, the PG will have a temporary mapping allowing it to continue to serve I/O from the old mapping. During that time, the PG is misplaced , because it has a temporary mapping, but not degraded , since there are 3 copies. Example [0,1,2] is a temporary mapping, so the up set is not equal to the acting set and the PG is misplaced but not degraded since [0,1,2] is still three copies. Example OSD 3 is now backfilled and the temporary mapping is removed, not degraded and not misplaced. 3.2.14. 
Placement Group incomplete state A PG goes into an incomplete state when there is incomplete content and peering fails, that is, when there are no complete OSDs which are current enough to perform recovery. Let's say OSD 1, 2, and 3 are the acting OSD set and it switches to OSD 1, 4, and 3, then osd.1 will request a temporary acting set of OSD 1, 2, and 3 while backfilling 4. During this time, if OSD 1, 2, and 3 all go down, osd.4 will be the only one left, and it might not have fully backfilled all the data. At this time, the PG will go incomplete indicating that there are no complete OSDs which are current enough to perform recovery. Alternately, if osd.4 is not involved and the acting set is simply OSD 1, 2, and 3 when OSD 1, 2, and 3 go down, the PG would likely go stale indicating that the mons have not heard anything on that PG since the acting set changed. This is because there are no OSDs left to notify the new OSDs. 3.2.15. Identifying stuck Placement Groups A placement group is not necessarily problematic just because it is not in an active+clean state. Generally, Ceph's ability to self-repair might not be working when placement groups get stuck. The stuck states include: Unclean : Placement groups contain objects that are not replicated the desired number of times. They should be recovering. Inactive : Placement groups cannot process reads or writes because they are waiting for an OSD with the most up-to-date data to come back up . Stale : Placement groups are in an unknown state, because the OSDs that host them have not reported to the monitor cluster in a while; the reporting interval can be configured with the mon osd report timeout setting. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure To identify stuck placement groups, execute the following: Syntax Example 3.2.16. Finding an object's location The Ceph client retrieves the latest cluster map and the CRUSH algorithm calculates how to map the object to a placement group, and then calculates how to assign the placement group to an OSD dynamically. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure To find the object location, all you need is the object name and the pool name: Syntax Example
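As an end-to-end illustration of this lookup, you can write a small test object and then ask Ceph where it maps. The following sketch reuses the pool and object names from the example in this section and assumes the pool already exists. Example

# Store a small test object in the pool
rados -p mypool put myobject /etc/hosts

# Show which placement group and OSDs the object maps to
ceph osd map mypool myobject

# Remove the test object when you are done
rados -p mypool rm myobject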
[ "root@host01 ~]# cephadm shell", "ceph health HEALTH_OK", "ceph status", "root@host01 ~]# cephadm shell", "ceph -w cluster: id: 8c9b0072-67ca-11eb-af06-001a4a0002a0 health: HEALTH_OK services: mon: 2 daemons, quorum Ceph5-2,Ceph5-adm (age 3d) mgr: Ceph5-1.nqikfh(active, since 3w), standbys: Ceph5-adm.meckej osd: 5 osds: 5 up (since 2d), 5 in (since 8w) rgw: 2 daemons active (test_realm.test_zone.Ceph5-2.bfdwcn, test_realm.test_zone.Ceph5-adm.acndrh) data: pools: 11 pools, 273 pgs objects: 459 objects, 32 KiB usage: 2.6 GiB used, 72 GiB / 75 GiB avail pgs: 273 active+clean io: client: 170 B/s rd, 730 KiB/s wr, 0 op/s rd, 729 op/s wr 2021-06-02 15:45:21.655871 osd.0 [INF] 17.71 deep-scrub ok 2021-06-02 15:45:47.880608 osd.1 [INF] 1.0 scrub ok 2021-06-02 15:45:48.865375 osd.1 [INF] 1.3 scrub ok 2021-06-02 15:45:50.866479 osd.1 [INF] 1.4 scrub ok 2021-06-02 15:45:01.345821 mon.0 [INF] pgmap v41339: 952 pgs: 952 active+clean; 17130 MB data, 115 GB used, 167 GB / 297 GB avail 2021-06-02 15:45:05.718640 mon.0 [INF] pgmap v41340: 952 pgs: 1 active+clean+scrubbing+deep, 951 active+clean; 17130 MB data, 115 GB used, 167 GB / 297 GB avail 2021-06-02 15:45:53.997726 osd.1 [INF] 1.5 scrub ok 2021-06-02 15:45:06.734270 mon.0 [INF] pgmap v41341: 952 pgs: 1 active+clean+scrubbing+deep, 951 active+clean; 17130 MB data, 115 GB used, 167 GB / 297 GB avail 2021-06-02 15:45:15.722456 mon.0 [INF] pgmap v41342: 952 pgs: 952 active+clean; 17130 MB data, 115 GB used, 167 GB / 297 GB avail 2021-06-02 15:46:06.836430 osd.0 [INF] 17.75 deep-scrub ok 2021-06-02 15:45:55.720929 mon.0 [INF] pgmap v41343: 952 pgs: 1 active+clean+scrubbing+deep, 951 active+clean; 17130 MB data, 115 GB used, 167 GB / 297 GB avail", "ceph df --- RAW STORAGE --- CLASS SIZE AVAIL USED RAW USED %RAW USED hdd 5 TiB 2.9 TiB 2.1 TiB 2.1 TiB 42.98 TOTAL 5 TiB 2.9 TiB 2.1 TiB 2.1 TiB 42.98 --- POOLS --- POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL .mgr 1 1 5.3 MiB 3 16 MiB 0 629 GiB .rgw.root 2 32 1.3 KiB 4 48 KiB 0 629 GiB default.rgw.log 3 32 3.6 KiB 209 408 KiB 0 629 GiB default.rgw.control 4 32 0 B 8 0 B 0 629 GiB default.rgw.meta 5 32 1.7 KiB 10 96 KiB 0 629 GiB default.rgw.buckets.index 7 32 5.5 MiB 22 17 MiB 0 629 GiB default.rgw.buckets.data 8 32 807 KiB 3 2.4 MiB 0 629 GiB default.rgw.buckets.non-ec 9 32 1.0 MiB 1 3.1 MiB 0 629 GiB source-ecpool-86 11 32 1.2 TiB 391.13k 2.1 TiB 53.49 1.1 TiB", "ceph osd df ID CLASS WEIGHT REWEIGHT SIZE USE DATA OMAP META AVAIL %USE VAR PGS 3 hdd 0.90959 1.00000 931GiB 70.1GiB 69.1GiB 0B 1GiB 861GiB 7.53 2.93 66 4 hdd 0.90959 1.00000 931GiB 1.30GiB 308MiB 0B 1GiB 930GiB 0.14 0.05 59 0 hdd 0.90959 1.00000 931GiB 18.1GiB 17.1GiB 0B 1GiB 913GiB 1.94 0.76 57 MIN/MAX VAR: 0.02/2.98 STDDEV: 2.91", "cephadm shell", "ceph status", "ceph -s", "ceph ceph> status cluster: id: 499829b4-832f-11eb-8d6d-001a4a000635 health: HEALTH_WARN 1 stray daemon(s) not managed by cephadm 1/3 mons down, quorum host03,host02 too many PGs per OSD (261 > max 250) services: mon: 3 daemons, quorum host03,host02 (age 3d), out of quorum: host01 mgr: host01.hdhzwn(active, since 9d), standbys: host05.eobuuv, host06.wquwpj osd: 12 osds: 11 up (since 2w), 11 in (since 5w) rgw: 2 daemons active (test_realm.test_zone.host04.hgbvnq, test_realm.test_zone.host05.yqqilm) rgw-nfs: 1 daemon active (nfs.foo.host06-rgw) data: pools: 8 pools, 960 pgs objects: 414 objects, 1.0 MiB usage: 5.7 GiB used, 214 GiB / 220 GiB avail pgs: 960 active+clean io: client: 41 KiB/s rd, 0 B/s wr, 41 op/s rd, 27 op/s wr ceph> health HEALTH_WARN 1 stray daemon(s) not 
managed by cephadm; 1/3 mons down, quorum host03,host02; too many PGs per OSD (261 > max 250) ceph> mon stat e3: 3 mons at {host01=[v2:10.74.255.0:3300/0,v1:10.74.255.0:6789/0],host02=[v2:10.74.249.253:3300/0,v1:10.74.249.253:6789/0],host03=[v2:10.74.251.164:3300/0,v1:10.74.251.164:6789/0]}, election epoch 6688, leader 1 host03, quorum 1,2 host03,host02", "cephadm shell", "ceph mon stat", "ceph mon dump", "ceph quorum_status -f json-pretty", "{ \"election_epoch\": 6686, \"quorum\": [ 0, 1, 2 ], \"quorum_names\": [ \"host01\", \"host03\", \"host02\" ], \"quorum_leader_name\": \"host01\", \"quorum_age\": 424884, \"features\": { \"quorum_con\": \"4540138297136906239\", \"quorum_mon\": [ \"kraken\", \"luminous\", \"mimic\", \"osdmap-prune\", \"nautilus\", \"octopus\", \"pacific\", \"elector-pinging\" ] }, \"monmap\": { \"epoch\": 3, \"fsid\": \"499829b4-832f-11eb-8d6d-001a4a000635\", \"modified\": \"2021-03-15T04:51:38.621737Z\", \"created\": \"2021-03-12T12:35:16.911339Z\", \"min_mon_release\": 16, \"min_mon_release_name\": \"pacific\", \"election_strategy\": 1, \"disallowed_leaders: \": \"\", \"stretch_mode\": false, \"features\": { \"persistent\": [ \"kraken\", \"luminous\", \"mimic\", \"osdmap-prune\", \"nautilus\", \"octopus\", \"pacific\", \"elector-pinging\" ], \"optional\": [] }, \"mons\": [ { \"rank\": 0, \"name\": \"host01\", \"public_addrs\": { \"addrvec\": [ { \"type\": \"v2\", \"addr\": \"10.74.255.0:3300\", \"nonce\": 0 }, { \"type\": \"v1\", \"addr\": \"10.74.255.0:6789\", \"nonce\": 0 } ] }, \"addr\": \"10.74.255.0:6789/0\", \"public_addr\": \"10.74.255.0:6789/0\", \"priority\": 0, \"weight\": 0, \"crush_location\": \"{}\" }, { \"rank\": 1, \"name\": \"host03\", \"public_addrs\": { \"addrvec\": [ { \"type\": \"v2\", \"addr\": \"10.74.251.164:3300\", \"nonce\": 0 }, { \"type\": \"v1\", \"addr\": \"10.74.251.164:6789\", \"nonce\": 0 } ] }, \"addr\": \"10.74.251.164:6789/0\", \"public_addr\": \"10.74.251.164:6789/0\", \"priority\": 0, \"weight\": 0, \"crush_location\": \"{}\" }, { \"rank\": 2, \"name\": \"host02\", \"public_addrs\": { \"addrvec\": [ { \"type\": \"v2\", \"addr\": \"10.74.249.253:3300\", \"nonce\": 0 }, { \"type\": \"v1\", \"addr\": \"10.74.249.253:6789\", \"nonce\": 0 } ] }, \"addr\": \"10.74.249.253:6789/0\", \"public_addr\": \"10.74.249.253:6789/0\", \"priority\": 0, \"weight\": 0, \"crush_location\": \"{}\" } ] } }", "Error 111: Connection Refused", "cephadm shell", "ceph daemon MONITOR_ID COMMAND", "ceph daemon mon.host01 help { \"add_bootstrap_peer_hint\": \"add peer address as potential bootstrap peer for cluster bringup\", \"add_bootstrap_peer_hintv\": \"add peer address vector as potential bootstrap peer for cluster bringup\", \"compact\": \"cause compaction of monitor's leveldb/rocksdb storage\", \"config diff\": \"dump diff of current config and default config\", \"config diff get\": \"dump diff get <field>: dump diff of current and default config setting <field>\", \"config get\": \"config get <field>: get the config value\", \"config help\": \"get config setting schema and descriptions\", \"config set\": \"config set <field> <val> [<val> ...]: set a config variable\", \"config show\": \"dump current config settings\", \"config unset\": \"config unset <field>: unset a config variable\", \"connection scores dump\": \"show the scores used in connectivity-based elections\", \"connection scores reset\": \"reset the scores used in connectivity-based elections\", \"counter dump\": \"dump all labeled and non-labeled counters and their values\", \"counter 
schema\": \"dump all labeled and non-labeled counters schemas\", \"dump_historic_ops\": \"show recent ops\", \"dump_historic_slow_ops\": \"show recent slow ops\", \"dump_mempools\": \"get mempool stats\", \"get_command_descriptions\": \"list available commands\", \"git_version\": \"get git sha1\", \"heap\": \"show heap usage info (available only if compiled with tcmalloc)\", \"help\": \"list available commands\", \"injectargs\": \"inject configuration arguments into running daemon\", \"log dump\": \"dump recent log entries to log file\", \"log flush\": \"flush log entries to log file\", \"log reopen\": \"reopen log file\", \"mon_status\": \"report status of monitors\", \"ops\": \"show the ops currently in flight\", \"perf dump\": \"dump non-labeled counters and their values\", \"perf histogram dump\": \"dump perf histogram values\", \"perf histogram schema\": \"dump perf histogram schema\", \"perf reset\": \"perf reset <name>: perf reset all or one perfcounter name\", \"perf schema\": \"dump non-labeled counters schemas\", \"quorum enter\": \"force monitor back into quorum\", \"quorum exit\": \"force monitor out of the quorum\", \"sessions\": \"list existing sessions\", \"smart\": \"Query health metrics for underlying device\", \"sync_force\": \"force sync of and clear monitor store\", \"version\": \"get ceph version\" }", "ceph daemon mon.host01 mon_status { \"name\": \"host01\", \"rank\": 0, \"state\": \"leader\", \"election_epoch\": 120, \"quorum\": [ 0, 1, 2 ], \"quorum_age\": 206358, \"features\": { \"required_con\": \"2449958747317026820\", \"required_mon\": [ \"kraken\", \"luminous\", \"mimic\", \"osdmap-prune\", \"nautilus\", \"octopus\", \"pacific\", \"elector-pinging\" ], \"quorum_con\": \"4540138297136906239\", \"quorum_mon\": [ \"kraken\", \"luminous\", \"mimic\", \"osdmap-prune\", \"nautilus\", \"octopus\", \"pacific\", \"elector-pinging\" ] }, \"outside_quorum\": [], \"extra_probe_peers\": [], \"sync_provider\": [], \"monmap\": { \"epoch\": 3, \"fsid\": \"81a4597a-b711-11eb-8cb8-001a4a000740\", \"modified\": \"2021-05-18T05:50:17.782128Z\", \"created\": \"2021-05-17T13:13:13.383313Z\", \"min_mon_release\": 16, \"min_mon_release_name\": \"pacific\", \"election_strategy\": 1, \"disallowed_leaders: \": \"\", \"stretch_mode\": false, \"features\": { \"persistent\": [ \"kraken\", \"luminous\", \"mimic\", \"osdmap-prune\", \"nautilus\", \"octopus\", \"pacific\", \"elector-pinging\" ], \"optional\": [] }, \"mons\": [ { \"rank\": 0, \"name\": \"host01\", \"public_addrs\": { \"addrvec\": [ { \"type\": \"v2\", \"addr\": \"10.74.249.41:3300\", \"nonce\": 0 }, { \"type\": \"v1\", \"addr\": \"10.74.249.41:6789\", \"nonce\": 0 } ] }, \"addr\": \"10.74.249.41:6789/0\", \"public_addr\": \"10.74.249.41:6789/0\", \"priority\": 0, \"weight\": 0, \"crush_location\": \"{}\" }, { \"rank\": 1, \"name\": \"host02\", \"public_addrs\": { \"addrvec\": [ { \"type\": \"v2\", \"addr\": \"10.74.249.55:3300\", \"nonce\": 0 }, { \"type\": \"v1\", \"addr\": \"10.74.249.55:6789\", \"nonce\": 0 } ] }, \"addr\": \"10.74.249.55:6789/0\", \"public_addr\": \"10.74.249.55:6789/0\", \"priority\": 0, \"weight\": 0, \"crush_location\": \"{}\" }, { \"rank\": 2, \"name\": \"host03\", \"public_addrs\": { \"addrvec\": [ { \"type\": \"v2\", \"addr\": \"10.74.249.49:3300\", \"nonce\": 0 }, { \"type\": \"v1\", \"addr\": \"10.74.249.49:6789\", \"nonce\": 0 } ] }, \"addr\": \"10.74.249.49:6789/0\", \"public_addr\": \"10.74.249.49:6789/0\", \"priority\": 0, \"weight\": 0, \"crush_location\": \"{}\" } ] }, \"feature_map\": { 
\"mon\": [ { \"features\": \"0x3f01cfb9fffdffff\", \"release\": \"luminous\", \"num\": 1 } ], \"osd\": [ { \"features\": \"0x3f01cfb9fffdffff\", \"release\": \"luminous\", \"num\": 3 } ] }, \"stretch_mode\": false }", "ceph daemon /var/run/ceph/ SOCKET_FILE COMMAND", "ceph daemon /var/run/ceph/ceph-osd.0.asok status { \"cluster_fsid\": \"9029b252-1668-11ee-9399-001a4a000429\", \"osd_fsid\": \"1de9b064-b7a5-4c54-9395-02ccda637d21\", \"whoami\": 0, \"state\": \"active\", \"oldest_map\": 1, \"newest_map\": 58, \"num_pgs\": 33 }", "ls /var/run/ceph", "ceph osd stat", "ceph osd dump", "eNNNN: x osds: y up, z in", "ceph osd tree id weight type name up/down reweight -1 3 pool default -3 3 rack mainrack -2 3 host osd-host 0 1 osd.0 up 1 1 1 osd.1 up 1 2 1 osd.2 up 1", "systemctl start CEPH_OSD_SERVICE_ID", "systemctl start [email protected]", "cephadm shell", "ceph pg dump", "ceph pg map PG_NUM", "ceph pg map 128", "ceph pg stat", "vNNNNNN: x pgs: y active+clean; z bytes data, aa MB used, bb GB / cc GB avail", "244 active+clean+snaptrim_wait 32 active+clean+snaptrim", "POOL_NUM . PG_ID", "0.1f", "ceph pg dump", "ceph pg dump -o FILE_NAME --format=json", "ceph pg dump -o test --format=json", "ceph pg POOL_NUM . PG_ID query", "ceph pg 5.fe query { \"snap_trimq\": \"[]\", \"snap_trimq_len\": 0, \"state\": \"active+clean\", \"epoch\": 2449, \"up\": [ 3, 8, 10 ], \"acting\": [ 3, 8, 10 ], \"acting_recovery_backfill\": [ \"3\", \"8\", \"10\" ], \"info\": { \"pgid\": \"5.ff\", \"last_update\": \"0'0\", \"last_complete\": \"0'0\", \"log_tail\": \"0'0\", \"last_user_version\": 0, \"last_backfill\": \"MAX\", \"purged_snaps\": [], \"history\": { \"epoch_created\": 114, \"epoch_pool_created\": 82, \"last_epoch_started\": 2402, \"last_interval_started\": 2401, \"last_epoch_clean\": 2402, \"last_interval_clean\": 2401, \"last_epoch_split\": 114, \"last_epoch_marked_full\": 0, \"same_up_since\": 2401, \"same_interval_since\": 2401, \"same_primary_since\": 2086, \"last_scrub\": \"0'0\", \"last_scrub_stamp\": \"2021-06-17T01:32:03.763988+0000\", \"last_deep_scrub\": \"0'0\", \"last_deep_scrub_stamp\": \"2021-06-17T01:32:03.763988+0000\", \"last_clean_scrub_stamp\": \"2021-06-17T01:32:03.763988+0000\", \"prior_readable_until_ub\": 0 }, \"stats\": { \"version\": \"0'0\", \"reported_seq\": \"2989\", \"reported_epoch\": \"2449\", \"state\": \"active+clean\", \"last_fresh\": \"2021-06-18T05:16:59.401080+0000\", \"last_change\": \"2021-06-17T01:32:03.764162+0000\", \"last_active\": \"2021-06-18T05:16:59.401080+0000\", .", "pg 1.5: up=acting: [0,1,2] ADD_OSD_3 pg 1.5: up: [0,3,1] acting: [0,1,2]", "pg 1.5: up=acting: [0,3,1]", "ceph pg dump_stuck {inactive|unclean|stale|undersized|degraded [inactive|unclean|stale|undersized|degraded...]} {<int>}", "ceph pg dump_stuck stale OK", "ceph osd map POOL_NAME OBJECT_NAME", "ceph osd map mypool myobject" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/administration_guide/monitoring-a-ceph-storage-cluster
Chapter 2. Installing the Red Hat Quay Operator from the OperatorHub
Chapter 2. Installing the Red Hat Quay Operator from the OperatorHub Use the following procedure to install the Red Hat Quay Operator from the OpenShift Container Platform OperatorHub. Procedure Using the OpenShift Container Platform console, select Operators OperatorHub . In the search box, type Red Hat Quay and select the official Red Hat Quay Operator provided by Red Hat. This directs you to the Installation page, which outlines the features, prerequisites, and deployment information. Select Install . This directs you to the Operator Installation page. The following choices are available for customizing the installation: Update Channel: Choose the update channel, for example, stable-3.10 for the latest release. Installation Mode: Choose All namespaces on the cluster if you want the Red Hat Quay Operator to be available cluster-wide. It is recommended that you install the Red Hat Quay Operator cluster-wide. If you choose a single namespace, the monitoring component will not be available by default. Choose A specific namespace on the cluster if you want it deployed only within a single namespace. Approval Strategy: Choose to approve either automatic or manual updates. Automatic update strategy is recommended. Select Install .
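If you prefer to drive the same installation from the command line, the equivalent is to create a Subscription resource for the Operator. The following is only a sketch: the package name quay-operator, the redhat-operators catalog source, and the openshift-operators namespace reflect a typical cluster-wide installation and might differ in your environment, so verify them against the OperatorHub entry before applying. Example

oc apply -f - <<'EOF'
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: quay-operator
  namespace: openshift-operators
spec:
  channel: stable-3.10
  name: quay-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic
EOF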
null
https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/deploying_the_red_hat_quay_operator_on_openshift_container_platform/operator-install
Chapter 2. Dashboard cards
Chapter 2. Dashboard cards The Cryostat Dashboard displays information about target Java Virtual Machines (JVMs) in the form of cards on the user interface. Each card displays a different set of information or metrics about the selected target JVM. For example, heap usage, thread statistics, or JVM vendor information. The following dashboard cards are available: Target JVM Details Automated Analysis MBean Metrics Chart Target JVM Details The Target JVM Details card provides high-level information that relates to the selected target JVM. Figure 2.1. Example Target JVM Details dashboard card On the Details tab, you can view information such as the connection URL, labels, JVM ID, and annotations for the selected target JVM. You can also view the JVM start time, version, vendor, operating system architecture, and the number of available processors. You can perform additional actions directly from the card. By clicking Actions , you can view recordings, start new recordings, or create automated rules for the selected target JVM. On the Resources tab, you can view details about the resources related to the target JVM, such as the number of active recordings or the number of automated rules. Automated Analysis Automated analysis is a JDK Mission Control (JMC) tool with which you can diagnose issues with your target JVMs by analyzing JDK Flight Recording (JFR) data for potential errors. Cryostat integrates the JMC automated analysis reports and produces a report that shows any errors associated with the data. The Automated Analysis card provides an alternative way of displaying this report information. Figure 2.2. Example Automated Analysis dashboard card On the Automated Analysis card, you can create a JFR recording that Cryostat uses to periodically evaluate any configuration or performance issues with the selected target JVM. After you click the corresponding label for each result, the card displays the following information: Results of the analysis categorized according to a severity score. Severity scores range from 0 , which means no error, to 100 , which means a potentially critical error. You might also receive a severity score marked as N/A , which indicates that the severity score is not applicable to the recording. A description of the results that includes a summary, an explanation of the error, and a potential solution, if applicable. You can choose to display the card information in a list format by selecting List view . Figure 2.3. Example Automated Analysis dashboard card displayed as a list view MBean Metrics Chart The MBean Metrics Chart card displays performance metrics about the target JVM through remote access to supported MXBeans interfaces of the JVM, including Thread, Runtime, OperatingSystem, and Memory MXBeans. Cryostat gathers a range of data from these MXBeans interfaces and displays them in the MBean Metrics Chart cards. From the Performance Metric field, you can select the metric you want to view, for example, Process CPU Load , Physical Memory , or Heap Memory Usage , then configure the card details. Once configured, cards showing each metric are displayed on your dashboard. Figure 2.4. Example MBean Metrics Chart card
null
https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/2/html/using_the_cryostat_dashboard/dashboard-cards_con_overview-cryostat-dashboard
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/scaling_storage/making-open-source-more-inclusive
Chapter 4. Configuring the Narayana transaction manager
Chapter 4. Configuring the Narayana transaction manager In Fuse, the built-in, global transaction manager is JBoss Narayana Transaction Manager , which is the same transaction manager that is used by Enterprise Application Platform (EAP) 7. In the OSGi runtime, as in Fuse for Karaf, the additional integration layer is provided by the PAX TRANSX project. The following topics discuss Narayana configuration: Section 4.1, "About Narayana installation" Section 4.2, "Transaction protocols supported" Section 4.3, "About Narayana configuration" Section 4.4, "Configuring log storage" 4.1. About Narayana installation The Narayana transaction manager is exposed for use in OSGi bundles under the following interfaces, as well as a few additional support interfaces: javax.transaction.TransactionManager javax.transaction.UserTransaction org.springframework.transaction.PlatformTransactionManager org.ops4j.pax.transx.tm.TransactionManager The 7.13.0.fuse-7_13_0-00012-redhat-00001 distribution makes these interfaces available from the start. The pax-transx-tm-narayana feature contains an overridden bundle that embeds Narayana: The services provided by the fuse-pax-transx-tm-narayana bundle are: Because this bundle registers org.osgi.service.cm.ManagedService , it tracks and reacts to the changes in CM configurations: The default org.ops4j.pax.transx.tm.narayana PID is: In summary: Fuse for Karaf includes the fully-featured, global, Narayana transaction manager. The transaction manager is correctly exposed under various client interfaces (JTA, Spring-tx, PAX JMS). You can configure Narayana by using the standard OSGi method, Configuration Admin, which is available in org.ops4j.pax.transx.tm.narayana . The default configuration is provided in $FUSE_HOME/etc/org.ops4j.pax.transx.tm.narayana.cfg . 4.2. Transaction protocols supported The Narayana transaction manager is the JBoss/Red Hat product that is used in EAP. Narayana is a transactions toolkit that provides support for applications that are developed using a broad range of standards-based transaction protocols: JTA JTS Web-Service Transactions REST Transactions STM XATMI/TX 4.3. About Narayana configuration The pax-transx-tm-narayana bundle includes the jbossts-properties.xml file, which provides the default configuration of different aspects of the transaction manager. All of these properties may be overridden in the $FUSE_HOME/etc/org.ops4j.pax.transx.tm.narayana.cfg file directly or by using the Configuration Admin API. The basic configuration of Narayana is done through various EnvironmentBean objects. Every such bean may be configured by using properties with different prefixes. The following table provides a summary of configuration objects and prefixes used: Configuration Bean Property Prefix com.arjuna.ats.arjuna.common.CoordinatorEnvironmentBean com.arjuna.ats.arjuna.coordinator com.arjuna.ats.arjuna.common.CoreEnvironmentBean com.arjuna.ats.arjuna com.arjuna.ats.internal.arjuna.objectstore.hornetq.HornetqJournalEnvironmentBean com.arjuna.ats.arjuna.hornetqjournal com.arjuna.ats.arjuna.common.ObjectStoreEnvironmentBean com.arjuna.ats.arjuna.objectstore com.arjuna.ats.arjuna.common.RecoveryEnvironmentBean com.arjuna.ats.arjuna.recovery com.arjuna.ats.jdbc.common.JDBCEnvironmentBean com.arjuna.ats.jdbc com.arjuna.ats.jta.common.JTAEnvironmentBean com.arjuna.ats.jta com.arjuna.ats.txoj.common.TxojEnvironmentBean com.arjuna.ats.txoj.lockstore The prefix can simplify the configuration. 
However, you should typically use either of the following formats: NameEnvironmentBean.propertyName (the preferred format), or fully-qualified-class-name.field-name. For example, consider the com.arjuna.ats.arjuna.common.CoordinatorEnvironmentBean.commitOnePhase field. It may be configured by using the com.arjuna.ats.arjuna.common.CoordinatorEnvironmentBean.commitOnePhase property or it can be configured by using the simpler (preferred) form CoordinatorEnvironmentBean.commitOnePhase . Full details of how to set properties and which beans can be configured can be found in the Narayana Product Documentation . Some beans, such as the ObjectStoreEnvironmentBean , may be configured multiple times, with each named instance providing configuration for a different purpose. In this case, the name of the instance is used between the prefix (any of the above) and field-name . For example, the type of object store for an ObjectStoreEnvironmentBean instance that is named communicationStore may be configured by using properties that are named: com.arjuna.ats.arjuna.common.ObjectStoreEnvironmentBean.communicationStore.objectStoreType ObjectStoreEnvironmentBean.communicationStore.objectStoreType 4.4. Configuring log storage The most important configuration is the type and location of object log storage. There are three commonly used implementations of the com.arjuna.ats.arjuna.objectstore.ObjectStoreAPI interface: com.arjuna.ats.internal.arjuna.objectstore.hornetq.HornetqObjectStoreAdaptor Uses org.apache.activemq.artemis.core.journal.Journal storage from AMQ 7 internally. com.arjuna.ats.internal.arjuna.objectstore.jdbc.JDBCStore Uses JDBC to keep TX log files. com.arjuna.ats.internal.arjuna.objectstore.FileSystemStore (and specialized implementations) Uses custom file-based log storage. By default, Fuse uses com.arjuna.ats.internal.arjuna.objectstore.ShadowNoFileLockStore , which is a specialized implementation of FileSystemStore . Narayana uses three stores in which transaction/object logs are kept: defaultStore communicationStore stateStore See State management in the Narayana documentation for more details. The default configuration of these three stores is: ShadowNoFileLockStore is configured with the base directory ( objectStoreDir ) and the particular store's directory ( localOSRoot ). The full set of configuration options is described in the Narayana documentation guide . However, the Narayana documentation states that the canonical reference for configuration options is the Javadoc for the various EnvironmentBean classes.
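As an illustration of the naming rules described above, the following sketch shows how a handful of overrides might look in the $FUSE_HOME/etc/org.ops4j.pax.transx.tm.narayana.cfg file. The property names follow the documented forms; the specific values are illustrative assumptions rather than recommended settings.
# Preferred short form: BeanName.propertyName
CoordinatorEnvironmentBean.commitOnePhase = true
RecoveryEnvironmentBean.recoveryBackoffPeriod = 10
# Equivalent fully-qualified form: fully.qualified.ClassName.fieldName
com.arjuna.ats.arjuna.common.CoordinatorEnvironmentBean.commitOnePhase = true
# Named instance: the instance name (communicationStore) sits between the prefix and the field name
ObjectStoreEnvironmentBean.communicationStore.objectStoreType = com.arjuna.ats.internal.arjuna.objectstore.ShadowNoFileLockStore
ObjectStoreEnvironmentBean.communicationStore.objectStoreDir = ${karaf.data}/narayana
ObjectStoreEnvironmentBean.communicationStore.localOSRoot = communicationStore
The same properties can also be changed at run time through the Configuration Admin service against the org.ops4j.pax.transx.tm.narayana PID.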
[ "karaf@root()> feature:info pax-transx-tm-narayana Feature pax-transx-tm-narayana 0.3.0 Feature has no configuration Feature has no configuration files Feature depends on: pax-transx-tm-api 0.0.0 Feature contains followed bundles: mvn:org.jboss.fuse.modules/fuse-pax-transx-tm-narayana/7.0.0.fuse-000191-redhat-1 (overriden from mvn:org.ops4j.pax.transx/pax-transx-tm-narayana/0.3.0) Feature has no conditionals.", "karaf@root()> bundle:services fuse-pax-transx-tm-narayana Red Hat Fuse :: Fuse Modules :: Transaction (21) provides: ---------------------------------------------------------- [org.osgi.service.cm.ManagedService] [javax.transaction.TransactionManager] [javax.transaction.TransactionSynchronizationRegistry] [javax.transaction.UserTransaction] [org.jboss.narayana.osgi.jta.ObjStoreBrowserService] [org.ops4j.pax.transx.tm.TransactionManager] [org.springframework.transaction.PlatformTransactionManager]", "karaf@root()> bundle:services -p fuse-pax-transx-tm-narayana Red Hat Fuse :: Fuse Modules :: Transaction (21) provides: ---------------------------------------------------------- objectClass = [org.osgi.service.cm.ManagedService] service.bundleid = 21 service.id = 232 service.pid = org.ops4j.pax.transx.tm.narayana service.scope = singleton", "karaf@root()> config:list '(service.pid=org.ops4j.pax.transx.tm.narayana)' ---------------------------------------------------------------- Pid: org.ops4j.pax.transx.tm.narayana BundleLocation: ? Properties: com.arjuna.ats.arjuna.common.ObjectStoreEnvironmentBean.communicationStore.localOSRoot = communicationStore com.arjuna.ats.arjuna.common.ObjectStoreEnvironmentBean.communicationStore.objectStoreDir = /data/servers/7.13.0.fuse-7_13_0-00012-redhat-00001/data/narayana com.arjuna.ats.arjuna.common.ObjectStoreEnvironmentBean.communicationStore.objectStoreType = com.arjuna.ats.internal.arjuna.objectstore.ShadowNoFileLockStore com.arjuna.ats.arjuna.common.ObjectStoreEnvironmentBean.localOSRoot = defaultStore com.arjuna.ats.arjuna.common.ObjectStoreEnvironmentBean.objectStoreDir = /data/servers/7.13.0.fuse-7_13_0-00012-redhat-00001/data/narayana com.arjuna.ats.arjuna.common.ObjectStoreEnvironmentBean.objectStoreType = com.arjuna.ats.internal.arjuna.objectstore.ShadowNoFileLockStore com.arjuna.ats.arjuna.common.ObjectStoreEnvironmentBean.stateStore.localOSRoot = stateStore com.arjuna.ats.arjuna.common.ObjectStoreEnvironmentBean.stateStore.objectStoreDir = /data/servers/7.13.0.fuse-7_13_0-00012-redhat-00001/data/narayana com.arjuna.ats.arjuna.common.ObjectStoreEnvironmentBean.stateStore.objectStoreType = com.arjuna.ats.internal.arjuna.objectstore.ShadowNoFileLockStore com.arjuna.ats.arjuna.common.RecoveryEnvironmentBean.recoveryBackoffPeriod = 10 felix.fileinstall.filename = file:/data/servers/7.13.0.fuse-7_13_0-00012-redhat-00001/etc/org.ops4j.pax.transx.tm.narayana.cfg service.pid = org.ops4j.pax.transx.tm.narayana", "default store com.arjuna.ats.arjuna.common.ObjectStoreEnvironmentBean.objectStoreType = com.arjuna.ats.internal.arjuna.objectstore.ShadowNoFileLockStore com.arjuna.ats.arjuna.common.ObjectStoreEnvironmentBean.objectStoreDir = USD{karaf.data}/narayana com.arjuna.ats.arjuna.common.ObjectStoreEnvironmentBean.localOSRoot = defaultStore communication store com.arjuna.ats.arjuna.common.ObjectStoreEnvironmentBean.communicationStore.objectStoreType = com.arjuna.ats.internal.arjuna.objectstore.ShadowNoFileLockStore com.arjuna.ats.arjuna.common.ObjectStoreEnvironmentBean.communicationStore.objectStoreDir = USD{karaf.data}/narayana 
com.arjuna.ats.arjuna.common.ObjectStoreEnvironmentBean.communicationStore.localOSRoot = communicationStore state store com.arjuna.ats.arjuna.common.ObjectStoreEnvironmentBean.stateStore.objectStoreType = com.arjuna.ats.internal.arjuna.objectstore.ShadowNoFileLockStore com.arjuna.ats.arjuna.common.ObjectStoreEnvironmentBean.stateStore.objectStoreDir = USD{karaf.data}/narayana com.arjuna.ats.arjuna.common.ObjectStoreEnvironmentBean.stateStore.localOSRoot = stateStore" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_karaf_transaction_guide/configuring-narayana
Chapter 4. Deploying Red Hat Quay on infrastructure nodes
Chapter 4. Deploying Red Hat Quay on infrastructure nodes By default, Quay-related pods are placed on arbitrary worker nodes when using the Red Hat Quay Operator to deploy the registry. For more information about how to use machine sets to configure nodes to only host infrastructure components, see Creating infrastructure machine sets . If you are not using OpenShift Container Platform machine set resources to deploy infra nodes, this section shows you how to manually label and taint nodes for infrastructure purposes. After you have configured your infrastructure nodes, either manually or by using machine sets, you can control the placement of Quay pods on these nodes using node selectors and tolerations. 4.1. Labeling and tainting nodes for infrastructure use Use the following procedure to label and taint nodes for infrastructure use. Enter the following command to reveal the master and worker nodes. In this example, there are three master nodes and six worker nodes. $ oc get nodes Example output NAME STATUS ROLES AGE VERSION user1-jcnp6-master-0.c.quay-devel.internal Ready master 3h30m v1.20.0+ba45583 user1-jcnp6-master-1.c.quay-devel.internal Ready master 3h30m v1.20.0+ba45583 user1-jcnp6-master-2.c.quay-devel.internal Ready master 3h30m v1.20.0+ba45583 user1-jcnp6-worker-b-65plj.c.quay-devel.internal Ready worker 3h21m v1.20.0+ba45583 user1-jcnp6-worker-b-jr7hc.c.quay-devel.internal Ready worker 3h21m v1.20.0+ba45583 user1-jcnp6-worker-c-jrq4v.c.quay-devel.internal Ready worker 3h21m v1.20.0+ba45583 user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal Ready worker 3h21m v1.20.0+ba45583 user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal Ready worker 3h22m v1.20.0+ba45583 user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal Ready worker 3h21m v1.20.0+ba45583 Enter the following commands to label the three worker nodes for infrastructure use: $ oc label node --overwrite user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal node-role.kubernetes.io/infra= $ oc label node --overwrite user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal node-role.kubernetes.io/infra= $ oc label node --overwrite user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal node-role.kubernetes.io/infra= Now, when listing the nodes in the cluster, the last three worker nodes have the infra role. For example: $ oc get nodes Example output NAME STATUS ROLES AGE VERSION user1-jcnp6-master-0.c.quay-devel.internal Ready master 4h14m v1.20.0+ba45583 user1-jcnp6-master-1.c.quay-devel.internal Ready master 4h15m v1.20.0+ba45583 user1-jcnp6-master-2.c.quay-devel.internal Ready master 4h14m v1.20.0+ba45583 user1-jcnp6-worker-b-65plj.c.quay-devel.internal Ready worker 4h6m v1.20.0+ba45583 user1-jcnp6-worker-b-jr7hc.c.quay-devel.internal Ready worker 4h5m v1.20.0+ba45583 user1-jcnp6-worker-c-jrq4v.c.quay-devel.internal Ready worker 4h5m v1.20.0+ba45583 user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal Ready infra,worker 4h6m v1.20.0+ba45583 user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal Ready infra,worker 4h6m v1.20.0+ba45583 user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal Ready infra,worker 4h6m v1.20.0+ba4558 When a worker node is assigned the infra role, there is a chance that user workloads could get inadvertently assigned to an infra node. To avoid this, you can apply a taint to the infra node, and then add tolerations for the pods that you want to control.
For example: $ oc adm taint nodes user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal node-role.kubernetes.io/infra:NoSchedule $ oc adm taint nodes user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal node-role.kubernetes.io/infra:NoSchedule $ oc adm taint nodes user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal node-role.kubernetes.io/infra:NoSchedule 4.2. Creating a project with node selector and tolerations Use the following procedure to create a project with node selector and tolerations. Note The following procedure can also be completed by removing the installed Red Hat Quay Operator and the namespace, or namespaces, used when creating the deployment. Users can then create a new resource with the following annotation. Procedure Enter the following command to annotate the namespace where Red Hat Quay is deployed with the node selector annotation: $ oc annotate namespace <namespace> openshift.io/node-selector='node-role.kubernetes.io/infra=' Example output namespace/<namespace> annotated Obtain a list of available pods by entering the following command: $ oc get pods -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES example-registry-clair-app-5744dd64c9-9d5jt 1/1 Running 0 173m 10.130.4.13 stevsmit-quay-ocp-tes-5gwws-worker-c-6xkn7 <none> <none> example-registry-clair-app-5744dd64c9-fg86n 1/1 Running 6 (3h21m ago) 3h24m 10.131.0.91 stevsmit-quay-ocp-tes-5gwws-worker-c-dnhdp <none> <none> example-registry-clair-postgres-845b47cd88-vdchz 1/1 Running 0 3h21m 10.130.4.10 stevsmit-quay-ocp-tes-5gwws-worker-c-6xkn7 <none> <none> example-registry-quay-app-64cbc5bcf-8zvgc 1/1 Running 1 (3h24m ago) 3h24m 10.130.2.12 stevsmit-quay-ocp-tes-5gwws-worker-a-tk8dx <none> <none> example-registry-quay-app-64cbc5bcf-pvlz6 1/1 Running 0 3h24m 10.129.4.10 stevsmit-quay-ocp-tes-5gwws-worker-b-fjhz4 <none> <none> example-registry-quay-app-upgrade-8gspn 0/1 Completed 0 3h24m 10.130.2.10 stevsmit-quay-ocp-tes-5gwws-worker-a-tk8dx <none> <none> example-registry-quay-database-784d78b6f8-2vkml 1/1 Running 0 3h24m 10.131.4.10 stevsmit-quay-ocp-tes-5gwws-worker-c-2frtg <none> <none> example-registry-quay-mirror-d5874d8dc-fmknp 1/1 Running 0 3h24m 10.129.4.9 stevsmit-quay-ocp-tes-5gwws-worker-b-fjhz4 <none> <none> example-registry-quay-mirror-d5874d8dc-t4mff 1/1 Running 0 3h24m 10.129.2.19 stevsmit-quay-ocp-tes-5gwws-worker-a-k7w86 <none> <none> example-registry-quay-redis-79848898cb-6qf5x 1/1 Running 0 3h24m 10.130.2.11 stevsmit-quay-ocp-tes-5gwws-worker-a-tk8dx <none> <none> Enter the following command to delete those pods: $ oc delete pods --selector quay-operator/quayregistry=example-registry -n quay-enterprise Example output pod "example-registry-clair-app-5744dd64c9-9d5jt" deleted pod "example-registry-clair-app-5744dd64c9-fg86n" deleted pod "example-registry-clair-postgres-845b47cd88-vdchz" deleted pod "example-registry-quay-app-64cbc5bcf-8zvgc" deleted pod "example-registry-quay-app-64cbc5bcf-pvlz6" deleted pod "example-registry-quay-app-upgrade-8gspn" deleted pod "example-registry-quay-database-784d78b6f8-2vkml" deleted pod "example-registry-quay-mirror-d5874d8dc-fmknp" deleted pod "example-registry-quay-mirror-d5874d8dc-t4mff" deleted pod "example-registry-quay-redis-79848898cb-6qf5x" deleted After the pods have been deleted, they automatically cycle back up and should be scheduled on the dedicated infrastructure nodes.
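Because the infra nodes now carry the node-role.kubernetes.io/infra:NoSchedule taint, any pod that should run on them must also carry a matching toleration. As a generic sketch only (this is not a Quay-specific manifest, and how the toleration is applied to the Quay pods depends on your cluster configuration), a toleration for that taint has the following shape in a pod specification:
tolerations:
- key: node-role.kubernetes.io/infra
  operator: Exists
  effect: NoSchedule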
4.3. Installing Red Hat Quay on OpenShift Container Platform in a specific namespace Use the following procedure to install Red Hat Quay on OpenShift Container Platform in a specific namespace. To install the Red Hat Quay Operator in a specific namespace, you must explicitly specify the appropriate project namespace, as in the following command. In the following example, the quay-registry namespace is used. This results in the quay-operator pod landing on one of the three infrastructure nodes. For example: $ oc get pods -n quay-registry -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE quay-operator.v3.4.1-6f6597d8d8-bd4dp 1/1 Running 0 30s 10.131.0.16 user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal 4.4. Creating the Red Hat Quay registry Use the following procedure to create the Red Hat Quay registry. Enter the following command to create the Red Hat Quay registry. Then, wait for the deployment to be marked as ready . In the following example, you should see that the pods have only been scheduled on the three nodes that you have labeled for infrastructure purposes. $ oc get pods -n quay-registry -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE example-registry-clair-app-789d6d984d-gpbwd 1/1 Running 1 5m57s 10.130.2.80 user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal example-registry-clair-postgres-7c8697f5-zkzht 1/1 Running 0 4m53s 10.129.2.19 user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal example-registry-quay-app-56dd755b6d-glbf7 1/1 Running 1 5m57s 10.129.2.17 user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal example-registry-quay-database-8dc7cfd69-dr2cc 1/1 Running 0 5m43s 10.129.2.18 user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal example-registry-quay-mirror-78df886bcc-v75p9 1/1 Running 0 5m16s 10.131.0.24 user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal example-registry-quay-postgres-init-8s8g9 0/1 Completed 0 5m54s 10.130.2.79 user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal example-registry-quay-redis-5688ddcdb6-ndp4t 1/1 Running 0 5m56s 10.130.2.78 user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal quay-operator.v3.4.1-6f6597d8d8-bd4dp 1/1 Running 0 22m 10.131.0.16 user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal 4.5. Resizing Managed Storage When deploying Red Hat Quay on OpenShift Container Platform, three distinct persistent volume claims (PVCs) are deployed: One for the PostgreSQL 13 registry. One for the Clair PostgreSQL 13 registry. One that uses NooBaa as backend storage. Note The connection between Red Hat Quay and NooBaa is done through the S3 API and ObjectBucketClaim API in OpenShift Container Platform. Red Hat Quay leverages that API group to create a bucket in NooBaa, obtain access keys, and automatically set everything up. On the backend, or NooBaa, side, that bucket is created inside of the backing store. As a result, NooBaa PVCs are not mounted or connected to Red Hat Quay pods. The default size for the PostgreSQL 13 and Clair PostgreSQL 13 PVCs is set to 50 GiB. You can expand storage for these PVCs on the OpenShift Container Platform console by using the following procedure. Note The following procedure shares commonality with Expanding Persistent Volume Claims on Red Hat OpenShift Data Foundation. 4.5.1. Resizing PostgreSQL 13 PVCs on Red Hat Quay Use the following procedure to resize the PostgreSQL 13 and Clair PostgreSQL 13 PVCs. Prerequisites You have cluster admin privileges on OpenShift Container Platform. Procedure Log into the OpenShift Container Platform console and select Storage → Persistent Volume Claims .
Select the desired PersistentVolumeClaim for either PostgreSQL 13 or Clair PostgreSQL 13, for example, example-registry-quay-postgres-13 . From the Action menu, select Expand PVC . Enter the new size of the Persistent Volume Claim and select Expand . After a few minutes, the expanded size should be reflected in the PVC's Capacity field. 4.6. Customizing Default Operator Images Note Currently, customizing default Operator images is not supported on IBM Power and IBM Z. In certain circumstances, it might be useful to override the default images used by the Red Hat Quay Operator. This can be done by setting one or more environment variables in the Red Hat Quay Operator ClusterServiceVersion . Important Using this mechanism is not supported for production Red Hat Quay environments and should be used only for development or testing purposes. There is no guarantee your deployment will work correctly when using non-default images with the Red Hat Quay Operator. 4.6.1. Environment Variables The following environment variables are used in the Red Hat Quay Operator to override component images: Environment Variable Component RELATED_IMAGE_COMPONENT_QUAY base RELATED_IMAGE_COMPONENT_CLAIR clair RELATED_IMAGE_COMPONENT_POSTGRES postgres and clair databases RELATED_IMAGE_COMPONENT_REDIS redis Note Overridden images must be referenced by manifest (@sha256:) and not by tag (:latest). 4.6.2. Applying overrides to a running Operator When the Red Hat Quay Operator is installed in a cluster through the Operator Lifecycle Manager (OLM) , the managed component container images can be easily overridden by modifying the ClusterServiceVersion object. Use the following procedure to apply overrides to a running Red Hat Quay Operator. Procedure The ClusterServiceVersion object is Operator Lifecycle Manager's representation of a running Operator in the cluster. Find the Red Hat Quay Operator's ClusterServiceVersion by using a Kubernetes UI or the kubectl / oc CLI tool. For example: $ oc get clusterserviceversions -n <your-namespace> Using the UI, oc edit , or another method, modify the Red Hat Quay ClusterServiceVersion to include the environment variables outlined above to point to the override images: JSONPath : spec.install.spec.deployments[0].spec.template.spec.containers[0].env - name: RELATED_IMAGE_COMPONENT_QUAY value: quay.io/projectquay/quay@sha256:c35f5af964431673f4ff5c9e90bdf45f19e38b8742b5903d41c10cc7f6339a6d - name: RELATED_IMAGE_COMPONENT_CLAIR value: quay.io/projectquay/clair@sha256:70c99feceb4c0973540d22e740659cd8d616775d3ad1c1698ddf71d0221f3ce6 - name: RELATED_IMAGE_COMPONENT_POSTGRES value: centos/postgresql-10-centos7@sha256:de1560cb35e5ec643e7b3a772ebaac8e3a7a2a8e8271d9e91ff023539b4dfb33 - name: RELATED_IMAGE_COMPONENT_REDIS value: centos/redis-32-centos7@sha256:06dbb609484330ec6be6090109f1fa16e936afcf975d1cbc5fff3e6c7cae7542 Note This is done at the Operator level, so every QuayRegistry will be deployed using these same overrides. 4.7. AWS S3 CloudFront Note Currently, using AWS S3 CloudFront is not supported on IBM Power and IBM Z. Use the following procedure if you are using AWS S3 CloudFront for your backend registry storage. Procedure Enter the following command to specify the registry key: $ oc create secret generic --from-file config.yaml=./config_awss3cloudfront.yaml --from-file default-cloudfront-signing-key.pem=./default-cloudfront-signing-key.pem test-config-bundle
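The config_awss3cloudfront.yaml file referenced in the command above is your Red Hat Quay configuration bundle with a CloudFront-fronted S3 storage section. The following stanza is only a hedged sketch: the field names and placeholder values shown here are assumptions for illustration, so confirm the exact schema in the Red Hat Quay storage configuration reference before using it.
DISTRIBUTED_STORAGE_CONFIG:
  default:
    - CloudFrontedS3Storage
    - s3_bucket: <your_s3_bucket>
      storage_path: /registry
      s3_access_key: <your_access_key>
      s3_secret_key: <your_secret_key>
      s3_region: <your_region>
      cloudfront_distribution_domain: <distribution>.cloudfront.net
      cloudfront_key_id: <cloudfront_key_id>
      cloudfront_privatekey_filename: default-cloudfront-signing-key.pem
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
  - default
The default-cloudfront-signing-key.pem file passed to the same oc create secret command is the CloudFront signing private key that corresponds to the key ID above.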
[ "oc get nodes", "NAME STATUS ROLES AGE VERSION user1-jcnp6-master-0.c.quay-devel.internal Ready master 3h30m v1.20.0+ba45583 user1-jcnp6-master-1.c.quay-devel.internal Ready master 3h30m v1.20.0+ba45583 user1-jcnp6-master-2.c.quay-devel.internal Ready master 3h30m v1.20.0+ba45583 user1-jcnp6-worker-b-65plj.c.quay-devel.internal Ready worker 3h21m v1.20.0+ba45583 user1-jcnp6-worker-b-jr7hc.c.quay-devel.internal Ready worker 3h21m v1.20.0+ba45583 user1-jcnp6-worker-c-jrq4v.c.quay-devel.internal Ready worker 3h21m v1.20.0+ba45583 user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal Ready worker 3h21m v1.20.0+ba45583 user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal Ready worker 3h22m v1.20.0+ba45583 user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal Ready worker 3h21m v1.20.0+ba45583", "oc label node --overwrite user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal node-role.kubernetes.io/infra=", "oc label node --overwrite user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal node-role.kubernetes.io/infra=", "oc label node --overwrite user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal node-role.kubernetes.io/infra=", "oc get nodes", "NAME STATUS ROLES AGE VERSION user1-jcnp6-master-0.c.quay-devel.internal Ready master 4h14m v1.20.0+ba45583 user1-jcnp6-master-1.c.quay-devel.internal Ready master 4h15m v1.20.0+ba45583 user1-jcnp6-master-2.c.quay-devel.internal Ready master 4h14m v1.20.0+ba45583 user1-jcnp6-worker-b-65plj.c.quay-devel.internal Ready worker 4h6m v1.20.0+ba45583 user1-jcnp6-worker-b-jr7hc.c.quay-devel.internal Ready worker 4h5m v1.20.0+ba45583 user1-jcnp6-worker-c-jrq4v.c.quay-devel.internal Ready worker 4h5m v1.20.0+ba45583 user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal Ready infra,worker 4h6m v1.20.0+ba45583 user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal Ready infra,worker 4h6m v1.20.0+ba45583 user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal Ready infra,worker 4h6m v1.20.0+ba4558", "oc adm taint nodes user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal node-role.kubernetes.io/infra:NoSchedule", "oc adm taint nodes user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal node-role.kubernetes.io/infra:NoSchedule", "oc adm taint nodes user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal node-role.kubernetes.io/infra:NoSchedule", "oc annotate namespace <namespace> openshift.io/node-selector='node-role.kubernetes.io/infra='", "namespace/<namespace> annotated", "oc get pods -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES example-registry-clair-app-5744dd64c9-9d5jt 1/1 Running 0 173m 10.130.4.13 stevsmit-quay-ocp-tes-5gwws-worker-c-6xkn7 <none> <none> example-registry-clair-app-5744dd64c9-fg86n 1/1 Running 6 (3h21m ago) 3h24m 10.131.0.91 stevsmit-quay-ocp-tes-5gwws-worker-c-dnhdp <none> <none> example-registry-clair-postgres-845b47cd88-vdchz 1/1 Running 0 3h21m 10.130.4.10 stevsmit-quay-ocp-tes-5gwws-worker-c-6xkn7 <none> <none> example-registry-quay-app-64cbc5bcf-8zvgc 1/1 Running 1 (3h24m ago) 3h24m 10.130.2.12 stevsmit-quay-ocp-tes-5gwws-worker-a-tk8dx <none> <none> example-registry-quay-app-64cbc5bcf-pvlz6 1/1 Running 0 3h24m 10.129.4.10 stevsmit-quay-ocp-tes-5gwws-worker-b-fjhz4 <none> <none> example-registry-quay-app-upgrade-8gspn 0/1 Completed 0 3h24m 10.130.2.10 stevsmit-quay-ocp-tes-5gwws-worker-a-tk8dx <none> <none> example-registry-quay-database-784d78b6f8-2vkml 1/1 Running 0 3h24m 10.131.4.10 stevsmit-quay-ocp-tes-5gwws-worker-c-2frtg <none> <none> example-registry-quay-mirror-d5874d8dc-fmknp 1/1 Running 0 3h24m 10.129.4.9 stevsmit-quay-ocp-tes-5gwws-worker-b-fjhz4 
<none> <none> example-registry-quay-mirror-d5874d8dc-t4mff 1/1 Running 0 3h24m 10.129.2.19 stevsmit-quay-ocp-tes-5gwws-worker-a-k7w86 <none> <none> example-registry-quay-redis-79848898cb-6qf5x 1/1 Running 0 3h24m 10.130.2.11 stevsmit-quay-ocp-tes-5gwws-worker-a-tk8dx <none> <none>", "oc delete pods --selector quay-operator/quayregistry=example-registry -n quay-enterprise", "pod \"example-registry-clair-app-5744dd64c9-9d5jt\" deleted pod \"example-registry-clair-app-5744dd64c9-fg86n\" deleted pod \"example-registry-clair-postgres-845b47cd88-vdchz\" deleted pod \"example-registry-quay-app-64cbc5bcf-8zvgc\" deleted pod \"example-registry-quay-app-64cbc5bcf-pvlz6\" deleted pod \"example-registry-quay-app-upgrade-8gspn\" deleted pod \"example-registry-quay-database-784d78b6f8-2vkml\" deleted pod \"example-registry-quay-mirror-d5874d8dc-fmknp\" deleted pod \"example-registry-quay-mirror-d5874d8dc-t4mff\" deleted pod \"example-registry-quay-redis-79848898cb-6qf5x\" deleted", "oc get pods -n quay-registry -o wide", "NAME READY STATUS RESTARTS AGE IP NODE quay-operator.v3.4.1-6f6597d8d8-bd4dp 1/1 Running 0 30s 10.131.0.16 user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal", "oc get pods -n quay-registry -o wide", "NAME READY STATUS RESTARTS AGE IP NODE example-registry-clair-app-789d6d984d-gpbwd 1/1 Running 1 5m57s 10.130.2.80 user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal example-registry-clair-postgres-7c8697f5-zkzht 1/1 Running 0 4m53s 10.129.2.19 user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal example-registry-quay-app-56dd755b6d-glbf7 1/1 Running 1 5m57s 10.129.2.17 user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal example-registry-quay-database-8dc7cfd69-dr2cc 1/1 Running 0 5m43s 10.129.2.18 user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal example-registry-quay-mirror-78df886bcc-v75p9 1/1 Running 0 5m16s 10.131.0.24 user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal example-registry-quay-postgres-init-8s8g9 0/1 Completed 0 5m54s 10.130.2.79 user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal example-registry-quay-redis-5688ddcdb6-ndp4t 1/1 Running 0 5m56s 10.130.2.78 user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal quay-operator.v3.4.1-6f6597d8d8-bd4dp 1/1 Running 0 22m 10.131.0.16 user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal", "oc get clusterserviceversions -n <your-namespace>", "- name: RELATED_IMAGE_COMPONENT_QUAY value: quay.io/projectquay/quay@sha256:c35f5af964431673f4ff5c9e90bdf45f19e38b8742b5903d41c10cc7f6339a6d - name: RELATED_IMAGE_COMPONENT_CLAIR value: quay.io/projectquay/clair@sha256:70c99feceb4c0973540d22e740659cd8d616775d3ad1c1698ddf71d0221f3ce6 - name: RELATED_IMAGE_COMPONENT_POSTGRES value: centos/postgresql-10-centos7@sha256:de1560cb35e5ec643e7b3a772ebaac8e3a7a2a8e8271d9e91ff023539b4dfb33 - name: RELATED_IMAGE_COMPONENT_REDIS value: centos/redis-32-centos7@sha256:06dbb609484330ec6be6090109f1fa16e936afcf975d1cbc5fff3e6c7cae7542", "oc create secret generic --from-file config.yaml=./config_awss3cloudfront.yaml --from-file default-cloudfront-signing-key.pem=./default-cloudfront-signing-key.pem test-config-bundle" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/red_hat_quay_operator_features/operator-deploy-infrastructure
5.4. Resource Meta Options
5.4. Resource Meta Options In addition to the resource-specific parameters, you can configure additional resource options for any resource. These options are used by the cluster to decide how your resource should behave. Table 5.3, "Resource Meta Options" describes these options. Table 5.3. Resource Meta Options Field Default Description priority 0 If not all resources can be active, the cluster will stop lower priority resources in order to keep higher priority ones active. target-role Started What state should the cluster attempt to keep this resource in? Allowed values: * Stopped - Force the resource to be stopped * Started - Allow the resource to be started (In the case of multistate resources, they will not be promoted to master) * Master - Allow the resource to be started and, if appropriate, promoted is-managed true Is the cluster allowed to start and stop the resource? Allowed values: true , false resource-stickiness 0 Value to indicate how much the resource prefers to stay where it is. requires Calculated Indicates under what conditions the resource can be started. Defaults to fencing except under the conditions noted below. Possible values: * nothing - The cluster can always start the resource. * quorum - The cluster can only start this resource if a majority of the configured nodes are active. This is the default value if stonith-enabled is false or the resource's standard is stonith . * fencing - The cluster can only start this resource if a majority of the configured nodes are active and any failed or unknown nodes have been powered off. * unfencing - The cluster can only start this resource if a majority of the configured nodes are active and any failed or unknown nodes have been powered off and only on nodes that have been unfenced . This is the default value if the provides=unfencing stonith meta option has been set for a fencing device. For information on the provides=unfencing stonith meta option, see Section 4.5, "Configuring Storage-Based Fence Devices with unfencing" . migration-threshold INFINITY (disabled) How many failures may occur for this resource on a node before this node is marked ineligible to host this resource. For information on configuring the migration-threshold option, refer to Section 7.2, "Moving Resources Due to Failure" . failure-timeout 0 (disabled) Used in conjunction with the migration-threshold option, indicates how many seconds to wait before acting as if the failure had not occurred, and potentially allowing the resource back to the node on which it failed. For information on configuring the failure-timeout option, refer to Section 7.2, "Moving Resources Due to Failure" . multiple-active stop_start What should the cluster do if it ever finds the resource active on more than one node? Allowed values: * block - mark the resource as unmanaged * stop_only - stop all active instances and leave them that way * stop_start - stop all active instances and start the resource in one location only To change the default value of a resource option, use the following command. For example, the following command resets the default value of resource-stickiness to 100. Omitting the options parameter from the pcs resource defaults command displays a list of currently configured default values for resource options. The following example shows the output of this command after you have reset the default value of resource-stickiness to 100.
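As a sketch of that sequence (the exact output formatting may vary between pcs versions), you would run:
pcs resource defaults resource-stickiness=100
pcs resource defaults
and the second command should then report resource-stickiness: 100.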
Whether you have reset the default value of a resource meta option or not, you can set a resource option for a particular resource to a value other than the default when you create the resource. The following shows the format of the pcs resource create command you use when specifying a value for a resource meta option. For example, the following command creates a resource with a resource-stickiness value of 50. You can also set the value of a resource meta option for an existing resource, group, cloned resource, or master resource with the following command. In the following example, there is an existing resource named dummy_resource . This command sets the failure-timeout meta option to 20 seconds, so that the resource can attempt to restart on the same node in 20 seconds. After executing this command, you can display the values for the resource to verify that failure-timeout=20s is set. For information on resource clone meta options, see Section 8.1, "Resource Clones" . For information on resource master meta options, see Section 8.2, "Multi-State Resources: Resources That Have Multiple Modes" .
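As a further illustration, you can combine the migration-threshold and failure-timeout meta options on the same resource so that it is moved away from a node after repeated failures but becomes eligible to return once the failures expire. The following is a minimal sketch that reuses the dummy_resource example from above; the values shown are arbitrary.
# Allow at most three failures on a node, and forget a failure after 60 seconds
pcs resource meta dummy_resource migration-threshold=3 failure-timeout=60s
# Display the resource to verify the meta attributes
pcs resource show dummy_resource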
[ "pcs resource defaults options", "pcs resource defaults resource-stickiness=100", "pcs resource defaults resource-stickiness:100", "pcs resource create resource_id standard:provider:type | type [ resource options ] [meta meta_options ...]", "pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.120 cidr_netmask=24 meta resource-stickiness=50", "pcs resource meta resource_id | group_id | clone_id | master_id meta_options", "pcs resource meta dummy_resource failure-timeout=20s", "pcs resource show dummy_resource Resource: dummy_resource (class=ocf provider=heartbeat type=Dummy) Meta Attrs: failure-timeout=20s Operations: start interval=0s timeout=20 (dummy_resource-start-timeout-20) stop interval=0s timeout=20 (dummy_resource-stop-timeout-20) monitor interval=10 timeout=20 (dummy_resource-monitor-interval-10)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/s1-resourceopts-HAAR
Chapter 11. DNSRecord [ingress.operator.openshift.io/v1]
Chapter 11. DNSRecord [ingress.operator.openshift.io/v1] Description DNSRecord is a DNS record managed in the zones defined by dns.config.openshift.io/cluster .spec.publicZone and .spec.privateZone. Cluster admin manipulation of this resource is not supported. This resource is only for internal communication of OpenShift operators. If DNSManagementPolicy is "Unmanaged", the operator will not be responsible for managing the DNS records on the cloud provider. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 11.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec is the specification of the desired behavior of the dnsRecord. status object status is the most recently observed status of the dnsRecord. 11.1.1. .spec Description spec is the specification of the desired behavior of the dnsRecord. Type object Required dnsManagementPolicy dnsName recordTTL recordType targets Property Type Description dnsManagementPolicy string dnsManagementPolicy denotes the current policy applied on the DNS record. Records that have policy set as "Unmanaged" are ignored by the ingress operator. This means that the DNS record on the cloud provider is not managed by the operator, and the "Published" status condition will be updated to "Unknown" status, since it is externally managed. Any existing record on the cloud provider can be deleted at the discretion of the cluster admin. This field defaults to Managed. Valid values are "Managed" and "Unmanaged". dnsName string dnsName is the hostname of the DNS record recordTTL integer recordTTL is the record TTL in seconds. If zero, the default is 30. RecordTTL will not be used in AWS regions Alias targets, but will be used in CNAME targets, per AWS API contract. recordType string recordType is the DNS record type. For example, "A" or "CNAME". targets array (string) targets are record targets. 11.1.2. .status Description status is the most recently observed status of the dnsRecord. Type object Property Type Description observedGeneration integer observedGeneration is the most recently observed generation of the DNSRecord. When the DNSRecord is updated, the controller updates the corresponding record in each managed zone. If an update for a particular zone fails, that failure is recorded in the status condition for the zone so that the controller can determine that it needs to retry the update for that specific zone. zones array zones are the status of the record in each zone. zones[] object DNSZoneStatus is the status of a record within a specific zone. 11.1.3. .status.zones Description zones are the status of the record in each zone. Type array 11.1.4. 
.status.zones[] Description DNSZoneStatus is the status of a record within a specific zone. Type object Property Type Description conditions array conditions are any conditions associated with the record in the zone. If publishing the record succeeds, the "Published" condition will be set with status "True" and upon failure it will be set to "False" along with the reason and message describing the cause of the failure. conditions[] object DNSZoneCondition is just the standard condition fields. dnsZone object dnsZone is the zone where the record is published. 11.1.5. .status.zones[].conditions Description conditions are any conditions associated with the record in the zone. If publishing the record succeeds, the "Published" condition will be set with status "True" and upon failure it will be set to "False" along with the reason and message describing the cause of the failure. Type array 11.1.6. .status.zones[].conditions[] Description DNSZoneCondition is just the standard condition fields. Type object Required status type Property Type Description lastTransitionTime string message string reason string status string type string 11.1.7. .status.zones[].dnsZone Description dnsZone is the zone where the record is published. Type object Property Type Description id string id is the identifier that can be used to find the DNS hosted zone. on AWS zone can be fetched using ID as id in [1] on Azure zone can be fetched using ID as a pre-determined name in [2], on GCP zone can be fetched using ID as a pre-determined name in [3]. [1]: https://docs.aws.amazon.com/cli/latest/reference/route53/get-hosted-zone.html#options [2]: https://docs.microsoft.com/en-us/cli/azure/network/dns/zone?view=azure-cli-latest#az-network-dns-zone-show [3]: https://cloud.google.com/dns/docs/reference/v1/managedZones/get tags object (string) tags can be used to query the DNS hosted zone. on AWS, resourcegroupstaggingapi [1] can be used to fetch a zone using Tags as tag-filters, [1]: https://docs.aws.amazon.com/cli/latest/reference/resourcegroupstaggingapi/get-resources.html#options 11.2. API endpoints The following API endpoints are available: /apis/ingress.operator.openshift.io/v1/dnsrecords GET : list objects of kind DNSRecord /apis/ingress.operator.openshift.io/v1/namespaces/{namespace}/dnsrecords DELETE : delete collection of DNSRecord GET : list objects of kind DNSRecord POST : create a DNSRecord /apis/ingress.operator.openshift.io/v1/namespaces/{namespace}/dnsrecords/{name} DELETE : delete a DNSRecord GET : read the specified DNSRecord PATCH : partially update the specified DNSRecord PUT : replace the specified DNSRecord /apis/ingress.operator.openshift.io/v1/namespaces/{namespace}/dnsrecords/{name}/status GET : read status of the specified DNSRecord PATCH : partially update status of the specified DNSRecord PUT : replace status of the specified DNSRecord 11.2.1. /apis/ingress.operator.openshift.io/v1/dnsrecords Table 11.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. 
watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind DNSRecord Table 11.2. HTTP responses HTTP code Reponse body 200 - OK DNSRecordList schema 401 - Unauthorized Empty 11.2.2. /apis/ingress.operator.openshift.io/v1/namespaces/{namespace}/dnsrecords Table 11.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 11.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of DNSRecord Table 11.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 11.6. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind DNSRecord Table 11.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. 
limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 11.8. HTTP responses HTTP code Reponse body 200 - OK DNSRecordList schema 401 - Unauthorized Empty HTTP method POST Description create a DNSRecord Table 11.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.10. Body parameters Parameter Type Description body DNSRecord schema Table 11.11. HTTP responses HTTP code Reponse body 200 - OK DNSRecord schema 201 - Created DNSRecord schema 202 - Accepted DNSRecord schema 401 - Unauthorized Empty 11.2.3. /apis/ingress.operator.openshift.io/v1/namespaces/{namespace}/dnsrecords/{name} Table 11.12. Global path parameters Parameter Type Description name string name of the DNSRecord namespace string object name and auth scope, such as for teams and projects Table 11.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a DNSRecord Table 11.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 11.15. Body parameters Parameter Type Description body DeleteOptions schema Table 11.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified DNSRecord Table 11.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 11.18. HTTP responses HTTP code Reponse body 200 - OK DNSRecord schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified DNSRecord Table 11.19. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.20. Body parameters Parameter Type Description body Patch schema Table 11.21. HTTP responses HTTP code Reponse body 200 - OK DNSRecord schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified DNSRecord Table 11.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. 
- Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.23. Body parameters Parameter Type Description body DNSRecord schema Table 11.24. HTTP responses HTTP code Reponse body 200 - OK DNSRecord schema 201 - Created DNSRecord schema 401 - Unauthorized Empty 11.2.4. /apis/ingress.operator.openshift.io/v1/namespaces/{namespace}/dnsrecords/{name}/status Table 11.25. Global path parameters Parameter Type Description name string name of the DNSRecord namespace string object name and auth scope, such as for teams and projects Table 11.26. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified DNSRecord Table 11.27. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 11.28. HTTP responses HTTP code Reponse body 200 - OK DNSRecord schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified DNSRecord Table 11.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.30. Body parameters Parameter Type Description body Patch schema Table 11.31. HTTP responses HTTP code Reponse body 200 - OK DNSRecord schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified DNSRecord Table 11.32. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.33. Body parameters Parameter Type Description body DNSRecord schema Table 11.34. HTTP responses HTTP code Reponse body 200 - OK DNSRecord schema 201 - Created DNSRecord schema 401 - Unauthorized Empty
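As a quick illustration of how these endpoints are typically exercised, the following commands sketch listing and patching a DNSRecord with the oc client. This is only a sketch: the openshift-ingress-operator namespace and the default-wildcard record name are assumed example values, not part of this reference.
oc get dnsrecords.ingress.operator.openshift.io -n openshift-ingress-operator
# Raw GET against the list endpoint shown above, limited to 10 items
oc get --raw "/apis/ingress.operator.openshift.io/v1/namespaces/openshift-ingress-operator/dnsrecords?limit=10"
# Merge patch against the PATCH endpoint for a single record (example annotation only)
oc patch dnsrecords.ingress.operator.openshift.io default-wildcard \
  -n openshift-ingress-operator --type=merge \
  -p '{"metadata":{"annotations":{"example.com/reviewed":"true"}}}'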
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/operator_apis/dnsrecord-ingress-operator-openshift-io-v1
Part I. Packaging and deploying a Red Hat Process Automation Manager project
Part I. Packaging and deploying a Red Hat Process Automation Manager project As a business rules developer, you must build and deploy a developed Red Hat Process Automation Manager project to a KIE Server in order to begin using the services you have created in Red Hat Process Automation Manager. You can develop and deploy a project from Business Central, from an independent Maven project, from a Java application, or using a combination of various platforms. For example, you can develop a project in Business Central and deploy it using the KIE Server REST API, or develop a project in Maven configured with Business Central and deploy it using Business Central. Prerequisites The project to be deployed has been developed and tested. For projects in Business Central, consider using test scenarios to test the assets in your project. For example, see Testing a decision service using test scenarios .
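For example, a project that has been built and pushed to a Maven repository that the KIE Server can resolve can be deployed by creating a KIE container through the KIE Server REST API. The following curl sketch assumes example credentials, an example host, and example Maven coordinates; replace them with the values for your environment.
# Create (deploy) a KIE container for the built project on the KIE Server
curl -u 'kieserver:password1!' -X PUT \
  -H "Content-Type: application/json" \
  -d '{"container-id":"mortgages","release-id":{"group-id":"com.example","artifact-id":"mortgages","version":"1.0.0"}}' \
  http://localhost:8080/kie-server/services/rest/server/containers/mortgages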
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/deploying_and_managing_red_hat_process_automation_manager_services/assembly-packaging-deploying
Chapter 1. New features
Chapter 1. New features This section highlights new features in Red Hat Developer Hub 1.3. 1.1. Customizing the deployment by using the custom resource With this update, when deploying Red Hat Developer Hub by using the operator, you can configure the Developer Hub Deployment resource. The Developer Hub Operator Custom Resource Definition (CRD) API Version has been updated to rhdh.redhat.com/v1alpha2 . This CRD exposes a generic spec.deployment.patch field, which allows you to patch the Developer Hub Deployment resource. 1.2. Using nested conditions in RBAC conditional policies With this update, as a Developer Hub administrator, you can create and edit nested conditions in RBAC conditional policies by using the Developer Hub web UI. 1.3. Persisting the audit log With this update, you can persist the audit log: You can send Red Hat Developer Hub audit logs to a rotating file. You can send logs to a locked-down file with append-only rights. When using the Helm chart, Developer Hub writes logs to persistent volumes. 1.4. Allow Dynamic Configuration of Keycloak User/Group Transformers With this update, you can provide transformer functions for users and groups to mutate entity parameters from Keycloak before their ingestion into the catalog. This can be done by creating a new backend module and using the added keycloakTransformerExtensionPoint. 1.5. Expose extension points for the keycloak-backend plugin With this update, you can provide transformer functions for users and groups to mutate the entities from Keycloak before their ingestion into the catalog with the new Backstage backend. Procedure Create a backend module. Provide the custom transformers to the keycloakTransformerExtensionPoint extension point exported by the package. All public endpoints in core and plugins have OpenAPI specs With this update, OpenAPI specs are available for all components, including the rbac-backend plugin. 1.6. RBAC Backend plugin module support With this update, Developer Hub can load roles and permissions into the RBAC Backend plugin through the use of extension points with the help of a plugin module. 1.7. Force catalog ingestion for production users By default, the user entity is now required to exist in the software catalog to allow sign-in. This is required for production-ready deployments, because identities need to exist and originate from a trusted source (that is, the identity provider) in order for security controls such as RBAC and audit logging to be effective. To bypass this, enable the dangerouslySignInWithoutUserInCatalog configuration, which allows sign-in without the user being in the catalog. Enabling this option is dangerous because it might allow unauthorized users to gain access. 1.8. RBAC UI enhancements With this update, the RBAC UI has been improved: The Create role form and the Role overview page display the total number of conditional rules configured. The Role list page displays accessible plugins. 1.9. Updated Backstage version With this update, Backstage was updated to version 1.29.2. Additional resources: Backstage 1.27 release notes Backstage 1.27 changelog Backstage 1.28 release notes Backstage 1.28 changelog Backstage 1.29 release notes Backstage 1.29 changelog RHIDP-2794 RHIDP-2847 RHIDP-2796 Authenticating with Microsoft Azure The Microsoft Azure authentication provider is now enterprise ready. To enable this, enhancements and bug fixes were made to improve the authentication and entity ingestion process. Note that the existence of the user entity in the catalog is now enforced. 1.10. 
Deploying on OpenShift Dedicated on Google Cloud Provider (GCP) Before this update, there was no automated process to deploy Developer Hub on OpenShift Dedicated (OSD) on Google Cloud Platform (GCP). With this update, you can install Red Hat Developer Hub on OpenShift Dedicated (OSD) on Google Cloud Platform (GCP) by using either Red Hat Developer Hub Operator or Red Hat Developer Hub Helm Chart. 1.11. Visualize Virtual Machine nodes on the Topology plugin With this update, you can visualize the Virtual Machine nodes deployed on the cluster through the Topology plugin. 1.12. Customizing the Home page With this update, you can customize the Home page in Red Hat Developer Hub by passing the data into the app-config.yaml file as a proxy. It is now possible to add, reorganize, and remove cards, including the search bar, quick access, headline, markdown, placeholder, catalog starred entities and featured docs that appear based on the plugins you have installed and enabled. 1.13. Customizing the main navigation sidebar This update introduces a configurable and customizable main navigation sidebar in Developer Hub, offering administrators greater control over the navigation structure. Previously, the sidebar was hard-coded with limited flexibility, and dynamic plugins could only contribute menu items without control over their order or structure. With this feature, administrators can now configure the order of navigation items, create nested sub-navigation, and provide users with a more organized and intuitive interface. This enhancement improves user experience and efficiency by allowing a more tailored navigation setup. Backward compatibility is maintained, ensuring existing dynamic plugin menu item contributions remain functional. A default configuration is provided, along with example configurations, including one with an external dynamic plugin. Documentation has been updated to guide developers on customizing the navigation. 1.14. Surfacing Catalog Processing Errors to Users With this update, the @backstage/plugin-catalog-backend-module-logs plugin has been made available as a dynamic plugin to help surface catalog errors into the logs. This dynamic plugin is disabled by default. 1.15. Configuring conditional policies by using external files With this release, you can configure conditional policies in Developer Hub using external files. Additionally, Developer Hub supports conditional policy aliases, which are dynamically substituted with the appropriate values during policy evaluation. For more information, see Configuring conditional policies . 1.16. Restarting Red Hat Developer Hub faster Before this update, it took a long time for Developer Hub to restart because Developer Hub bootstraps all dynamic plugins from zero with every restart. With this update, Developer Hub is using persisted volumes for the dynamic plugins. Therefore, Developer Hub restarts faster. 1.17. Monitoring active users on Developer Hub With this update, you can monitor active users on Developer Hub using the licensed-users-info-backend plugin. This plugin provides statistical data on logged-in users through the Web UI or REST API endpoints. For more information, see Authorization . 1.18. Loading a custom Backstage theme from a dynamic plugin With this update, you can load a custom Backstage theme from a dynamic plugin. 
Procedure Export a theme provider function in the dynamic plugin, such as: import { lightTheme } from './lightTheme'; // some custom theme import { UnifiedThemeProvider } from '@backstage/theme'; export const lightThemeProvider = ({ children }: { children: ReactNode }) => ( <UnifiedThemeProvider theme={lightTheme} children={children} /> ); Configure Developer Hub to load the theme in the UI by using the new themes configuration field: dynamicPlugins: frontend: example.my-custom-theme-plugin: themes: - id: light # <1> title: Light variant: light icon: someIconReference importName: lightThemeProvider <1> Set your theme id. Optionally, override the default Developer Hub themes by specifying the following id values: light overrides the default light theme and dark overrides the default dark theme. Verification The theme is available in the "Settings" page. This update also introduced the ability to override core API service factories from a dynamic plugin, which can be helpful for more specialized use cases such as providing a custom ScmAuth configuration for the Developer Hub frontend. 1.19. Manage concurrent writes when installing dynamic plugins Previously, running multi-replica RHDH with a Persistent Volume for the Dynamic Plugins cache was not possible due to potential write conflicts. This update mitigates that risk
[ "import { lightTheme } from './lightTheme'; // some custom theme import { UnifiedThemeProvider } from '@backstage/theme'; export const lightThemeProvider = ({ children }: { children: ReactNode }) => ( <UnifiedThemeProvider theme={lightTheme} children={children} /> );", "dynamicPlugins: frontend: example.my-custom-theme-plugin: themes: - id: light # <1> title: Light variant: light icon: someIconReference importName: lightThemeProvider" ]
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html/release_notes/new-features
7.108. libreoffice
7.108. libreoffice 7.108.1. RHSA-2015:1458 - Moderate: libreoffice security, bug fix, and enhancement update Updated libreoffice packages that fix one security issue, several bugs, and add various enhancements are now available for Red Hat Enterprise Linux 6. Red Hat Product Security has rated this update as having Moderate security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available from the CVE link in the References section. LibreOffice is an open source, community-developed office productivity suite. It includes key desktop applications, such as a word processor, a spreadsheet, a presentation manager, a formula editor, and a drawing program. LibreOffice replaces OpenOffice and provides a similar but enhanced and extended office suite. Security Fix CVE-2015-1774 A flaw was found in the way the LibreOffice HWP (Hangul Word Processor) file filter processed certain HWP documents. An attacker able to trick a user into opening a specially crafted HWP document could possibly use this flaw to execute arbitrary code with the privileges of the user opening that document. The libreoffice packages have been upgraded to upstream version 4.2.8.2, which provides a number of bug fixes and enhancements over the previous version. (BZ#1150048) Bug Fix BZ# 1150048 OpenXML interoperability has been improved. * This update adds additional statistics functions to the Calc application, thus improving interoperability with Microsoft Excel and its "Analysis ToolPak" add-in. * Various performance improvements have been implemented in Calc. * This update adds new import filters for importing files from the Apple Keynote and AbiWord applications. * The export filter for the MathML markup language has been improved. * This update adds a new start screen that includes thumbnails of recently opened documents. * A visual cue is now displayed in the Slide Sorter window for slides with transitions or animations. * This update improves trend lines in charts. * LibreOffice now supports BCP 47 language tags. For a complete list of bug fixes and enhancements provided by this rebase, see the libreoffice change log linked from the References section. Users of libreoffice are advised to upgrade to these updated packages, which correct these issues and add these enhancements.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-libreoffice
Part I. Overview of Red Hat Identity Management
Part I. Overview of Red Hat Identity Management This part explains the purpose of Red Hat Identity Management . It also provides basic information about the Identity Management domain, including the client and server machines that are part of the domain.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/p.overview
Chapter 4. Configuring multi-architecture compute machines on an OpenShift cluster
Chapter 4. Configuring multi-architecture compute machines on an OpenShift cluster 4.1. About clusters with multi-architecture compute machines An OpenShift Container Platform cluster with multi-architecture compute machines is a cluster that supports compute machines with different architectures. Clusters with multi-architecture compute machines are available only on Amazon Web Services (AWS) or Microsoft Azure installer-provisioned infrastructures and bare metal, IBM Power(R), and IBM Z(R) user-provisioned infrastructures with x86_64 control plane machines. Note When there are nodes with multiple architectures in your cluster, the architecture of your image must be consistent with the architecture of the node. You need to ensure that the pod is assigned to the node with the appropriate architecture and that it matches the image architecture. For more information on assigning pods to nodes, see Assigning pods to nodes . Important The Cluster Samples Operator is not supported on clusters with multi-architecture compute machines. Your cluster can be created without this capability. For more information, see Enabling cluster capabilities . For information on migrating your single-architecture cluster to a cluster that supports multi-architecture compute machines, see Migrating to a cluster with multi-architecture compute machines . 4.1.1. Configuring your cluster with multi-architecture compute machines To create a cluster with multi-architecture compute machines for various platforms, you can use the documentation in the following sections: Creating a cluster with multi-architecture compute machines on Azure Creating a cluster with multi-architecture compute machines on AWS Creating a cluster with multi-architecture compute machines on GCP Creating a cluster with multi-architecture compute machines on bare metal Creating a cluster with multi-architecture compute machines on IBM Z(R) and IBM(R) LinuxONE with z/VM Creating a cluster with multi-architecture compute machines on IBM Z(R) and IBM(R) LinuxONE with RHEL KVM Creating a cluster with multi-architecture compute machines on IBM Power(R) Important Autoscaling from zero is currently not supported on Google Cloud Platform (GCP). 4.2. Creating a cluster with multi-architecture compute machines on Azure To deploy an Azure cluster with multi-architecture compute machines, you must first create a single-architecture Azure installer-provisioned cluster that uses the multi-architecture installer binary. For more information on Azure installations, see Installing a cluster on Azure with customizations . You can then add an ARM64 compute machine set to your cluster to create a cluster with multi-architecture compute machines. The following procedures explain how to generate an ARM64 boot image and create an Azure compute machine set that uses the ARM64 boot image. This adds ARM64 compute nodes to your cluster and deploys the number of ARM64 virtual machines (VMs) that you need. 4.2.1. Verifying cluster compatibility Before you can start adding compute nodes of different architectures to your cluster, you must verify that your cluster is multi-architecture compatible. 
Prerequisites You installed the OpenShift CLI ( oc ) Procedure You can check that your cluster uses the architecture payload by running the following command: USD oc adm release info -o jsonpath="{ .metadata.metadata}" Verification If you see the following output, then your cluster is using the multi-architecture payload: { "release.openshift.io/architecture": "multi", "url": "https://access.redhat.com/errata/<errata_version>" } You can then begin adding multi-arch compute nodes to your cluster. If you see the following output, then your cluster is not using the multi-architecture payload: { "url": "https://access.redhat.com/errata/<errata_version>" } Important To migrate your cluster so the cluster supports multi-architecture compute machines, follow the procedure in Migrating to a cluster with multi-architecture compute machines . 4.2.2. Creating an ARM64 boot image using the Azure image gallery The following procedure describes how to manually generate an ARM64 boot image. Prerequisites You installed the Azure CLI ( az ). You created a single-architecture Azure installer-provisioned cluster with the multi-architecture installer binary. Procedure Log in to your Azure account: USD az login Create a storage account and upload the arm64 virtual hard disk (VHD) to your storage account. The OpenShift Container Platform installation program creates a resource group, however, the boot image can also be uploaded to a custom named resource group: USD az storage account create -n USD{STORAGE_ACCOUNT_NAME} -g USD{RESOURCE_GROUP} -l westus --sku Standard_LRS 1 1 The westus object is an example region. Create a storage container using the storage account you generated: USD az storage container create -n USD{CONTAINER_NAME} --account-name USD{STORAGE_ACCOUNT_NAME} You must use the OpenShift Container Platform installation program JSON file to extract the URL and aarch64 VHD name: Extract the URL field and set it to RHCOS_VHD_ORIGIN_URL as the file name by running the following command: USD RHCOS_VHD_ORIGIN_URL=USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.aarch64."rhel-coreos-extensions"."azure-disk".url') Extract the aarch64 VHD name and set it to BLOB_NAME as the file name by running the following command: USD BLOB_NAME=rhcos-USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.aarch64."rhel-coreos-extensions"."azure-disk".release')-azure.aarch64.vhd Generate a shared access signature (SAS) token. 
Use this token to upload the RHCOS VHD to your storage container with the following commands: USD end=`date -u -d "30 minutes" '+%Y-%m-%dT%H:%MZ'` USD sas=`az storage container generate-sas -n USD{CONTAINER_NAME} --account-name USD{STORAGE_ACCOUNT_NAME} --https-only --permissions dlrw --expiry USDend -o tsv` Copy the RHCOS VHD into the storage container: USD az storage blob copy start --account-name USD{STORAGE_ACCOUNT_NAME} --sas-token "USDsas" \ --source-uri "USD{RHCOS_VHD_ORIGIN_URL}" \ --destination-blob "USD{BLOB_NAME}" --destination-container USD{CONTAINER_NAME} You can check the status of the copying process with the following command: USD az storage blob show -c USD{CONTAINER_NAME} -n USD{BLOB_NAME} --account-name USD{STORAGE_ACCOUNT_NAME} | jq .properties.copy Example output { "completionTime": null, "destinationSnapshot": null, "id": "1fd97630-03ca-489a-8c4e-cfe839c9627d", "incrementalCopy": null, "progress": "17179869696/17179869696", "source": "https://rhcos.blob.core.windows.net/imagebucket/rhcos-411.86.202207130959-0-azure.aarch64.vhd", "status": "success", 1 "statusDescription": null } 1 If the status parameter displays the success object, the copying process is complete. Create an image gallery using the following command: USD az sig create --resource-group USD{RESOURCE_GROUP} --gallery-name USD{GALLERY_NAME} Use the image gallery to create an image definition. In the following example command, rhcos-arm64 is the name of the image definition. USD az sig image-definition create --resource-group USD{RESOURCE_GROUP} --gallery-name USD{GALLERY_NAME} --gallery-image-definition rhcos-arm64 --publisher RedHat --offer arm --sku arm64 --os-type linux --architecture Arm64 --hyper-v-generation V2 To get the URL of the VHD and set it to RHCOS_VHD_URL as the file name, run the following command: USD RHCOS_VHD_URL=USD(az storage blob url --account-name USD{STORAGE_ACCOUNT_NAME} -c USD{CONTAINER_NAME} -n "USD{BLOB_NAME}" -o tsv) Use the RHCOS_VHD_URL file, your storage account, resource group, and image gallery to create an image version. In the following example, 1.0.0 is the image version. USD az sig image-version create --resource-group USD{RESOURCE_GROUP} --gallery-name USD{GALLERY_NAME} --gallery-image-definition rhcos-arm64 --gallery-image-version 1.0.0 --os-vhd-storage-account USD{STORAGE_ACCOUNT_NAME} --os-vhd-uri USD{RHCOS_VHD_URL} Your arm64 boot image is now generated. You can access the ID of your image with the following command: USD az sig image-version show -r USDGALLERY_NAME -g USDRESOURCE_GROUP -i rhcos-arm64 -e 1.0.0 The following example image ID is used in the resourceID parameter of the compute machine set: Example resourceID /resourceGroups/USD{RESOURCE_GROUP}/providers/Microsoft.Compute/galleries/USD{GALLERY_NAME}/images/rhcos-arm64/versions/1.0.0 4.2.3. Adding a multi-architecture compute machine set to your cluster To add ARM64 compute nodes to your cluster, you must create an Azure compute machine set that uses the ARM64 boot image. To create your own custom compute machine set on Azure, see "Creating a compute machine set on Azure". Prerequisites You installed the OpenShift CLI ( oc ). Procedure Create a compute machine set and modify the resourceID and vmSize parameters with the following command. 
This compute machine set will control the arm64 worker nodes in your cluster: USD oc create -f arm64-machine-set-0.yaml Sample YAML compute machine set with arm64 boot image apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker name: <infrastructure_id>-arm64-machine-set-0 namespace: openshift-machine-api spec: replicas: 2 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-arm64-machine-set-0 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <infrastructure_id>-arm64-machine-set-0 spec: lifecycleHooks: {} metadata: {} providerSpec: value: acceleratedNetworking: true apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: offer: "" publisher: "" resourceID: /resourceGroups/USD{RESOURCE_GROUP}/providers/Microsoft.Compute/galleries/USD{GALLERY_NAME}/images/rhcos-arm64/versions/1.0.0 1 sku: "" version: "" kind: AzureMachineProviderSpec location: <region> managedIdentity: <infrastructure_id>-identity networkResourceGroup: <infrastructure_id>-rg osDisk: diskSettings: {} diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: <infrastructure_id> resourceGroup: <infrastructure_id>-rg subnet: <infrastructure_id>-worker-subnet userDataSecret: name: worker-user-data vmSize: Standard_D4ps_v5 2 vnet: <infrastructure_id>-vnet zone: "<zone>" 1 Set the resourceID parameter to the arm64 boot image. 2 Set the vmSize parameter to the instance type used in your installation. Some example instance types are Standard_D4ps_v5 or D8ps . Verification Verify that the new ARM64 machines are running by entering the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE <infrastructure_id>-arm64-machine-set-0 2 2 2 2 10m You can check that the nodes are ready and scheduable with the following command: USD oc get nodes Additional resources Creating a compute machine set on Azure 4.3. Creating a cluster with multi-architecture compute machines on AWS To create an AWS cluster with multi-architecture compute machines, you must first create a single-architecture AWS installer-provisioned cluster with the multi-architecture installer binary. For more information on AWS installations, refer to Installing a cluster on AWS with customizations . You can then add a ARM64 compute machine set to your AWS cluster. 4.3.1. Verifying cluster compatibility Before you can start adding compute nodes of different architectures to your cluster, you must verify that your cluster is multi-architecture compatible. 
Prerequisites You installed the OpenShift CLI ( oc ) Procedure You can check that your cluster uses the architecture payload by running the following command: USD oc adm release info -o jsonpath="{ .metadata.metadata}" Verification If you see the following output, then your cluster is using the multi-architecture payload: { "release.openshift.io/architecture": "multi", "url": "https://access.redhat.com/errata/<errata_version>" } You can then begin adding multi-arch compute nodes to your cluster. If you see the following output, then your cluster is not using the multi-architecture payload: { "url": "https://access.redhat.com/errata/<errata_version>" } Important To migrate your cluster so the cluster supports multi-architecture compute machines, follow the procedure in Migrating to a cluster with multi-architecture compute machines . 4.3.2. Adding an ARM64 compute machine set to your cluster To configure a cluster with multi-architecture compute machines, you must create a AWS ARM64 compute machine set. This adds ARM64 compute nodes to your cluster so that your cluster has multi-architecture compute machines. Prerequisites You installed the OpenShift CLI ( oc ). You used the installation program to create an AMD64 single-architecture AWS cluster with the multi-architecture installer binary. Procedure Create and modify a compute machine set, this will control the ARM64 compute nodes in your cluster. USD oc create -f aws-arm64-machine-set-0.yaml Sample YAML compute machine set to deploy an ARM64 compute node apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-aws-arm64-machine-set-0 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> 5 machine.openshift.io/cluster-api-machine-type: <role> 6 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> 7 spec: metadata: labels: node-role.kubernetes.io/<role>: "" providerSpec: value: ami: id: ami-02a574449d4f4d280 8 apiVersion: awsproviderconfig.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile 9 instanceType: m6g.xlarge 10 kind: AWSMachineProviderConfig placement: availabilityZone: us-east-1a 11 region: <region> 12 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-worker-sg 13 subnet: filters: - name: tag:Name values: - <infrastructure_id>-private-<zone> tags: - name: kubernetes.io/cluster/<infrastructure_id> 14 value: owned - name: <custom_tag_name> value: <custom_tag_value> userDataSecret: name: worker-user-data 1 2 3 9 13 14 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 4 7 Specify the infrastructure ID, role node label, and zone. 5 6 Specify the role node label to add. 
8 Specify an ARM64 supported Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) for your AWS zone for your OpenShift Container Platform nodes. USD oc get configmap/coreos-bootimages \ -n openshift-machine-config-operator \ -o jsonpath='{.data.stream}' | jq \ -r '.architectures.<arch>.images.aws.regions."<region>".image' 10 Specify an ARM64 supported machine type. For more information, refer to "Tested instance types for AWS 64-bit ARM". 11 Specify the zone, for example us-east-1a . Ensure that the zone you select offers 64-bit ARM machines. 12 Specify the region, for example, us-east-1 . Ensure that the region you select offers 64-bit ARM machines. Verification View the list of compute machine sets by entering the following command: USD oc get machineset -n openshift-machine-api You can then see your created ARM64 machine set. Example output NAME DESIRED CURRENT READY AVAILABLE AGE <infrastructure_id>-aws-arm64-machine-set-0 2 2 2 2 10m You can check that the nodes are ready and schedulable with the following command: USD oc get nodes Additional resources Tested instance types for AWS 64-bit ARM 4.4. Creating a cluster with multi-architecture compute machines on GCP To create a Google Cloud Platform (GCP) cluster with multi-architecture compute machines, you must first create a single-architecture GCP installer-provisioned cluster with the multi-architecture installer binary. For more information on GCP installations, refer to Installing a cluster on GCP with customizations . You can then add ARM64 compute machine sets to your GCP cluster. Note Secure booting is currently not supported on ARM64 machines for GCP. 4.4.1. Verifying cluster compatibility Before you can start adding compute nodes of different architectures to your cluster, you must verify that your cluster is multi-architecture compatible. Prerequisites You installed the OpenShift CLI ( oc ) Procedure You can check that your cluster uses the architecture payload by running the following command: USD oc adm release info -o jsonpath="{ .metadata.metadata}" Verification If you see the following output, then your cluster is using the multi-architecture payload: { "release.openshift.io/architecture": "multi", "url": "https://access.redhat.com/errata/<errata_version>" } You can then begin adding multi-arch compute nodes to your cluster. If you see the following output, then your cluster is not using the multi-architecture payload: { "url": "https://access.redhat.com/errata/<errata_version>" } Important To migrate your cluster so the cluster supports multi-architecture compute machines, follow the procedure in Migrating to a cluster with multi-architecture compute machines . 4.4.2. Adding an ARM64 compute machine set to your GCP cluster To configure a cluster with multi-architecture compute machines, you must create a GCP ARM64 compute machine set. This adds ARM64 compute nodes to your cluster. Prerequisites You installed the OpenShift CLI ( oc ). You used the installation program to create an AMD64 single-architecture GCP cluster with the multi-architecture installer binary. 
Procedure Create and modify a compute machine set, this controls the ARM64 compute nodes in your cluster: USD oc create -f gcp-arm64-machine-set-0.yaml Sample GCP YAML compute machine set to deploy an ARM64 compute node apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-w-a namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a spec: metadata: labels: node-role.kubernetes.io/<role>: "" providerSpec: value: apiVersion: gcpprovider.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 3 labels: null sizeGb: 128 type: pd-ssd gcpMetadata: 4 - key: <custom_metadata_key> value: <custom_metadata_value> kind: GCPMachineProviderSpec machineType: n1-standard-4 5 metadata: creationTimestamp: null networkInterfaces: - network: <infrastructure_id>-network subnetwork: <infrastructure_id>-worker-subnet projectID: <project_name> 6 region: us-central1 7 serviceAccounts: - email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform tags: - <infrastructure_id>-worker userDataSecret: name: worker-user-data zone: us-central1-a 1 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. You can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 Specify the role node label to add. 3 Specify the path to the image that is used in current compute machine sets. You need the project and image name for your path to image. To access the project and image name, run the following command: USD oc get configmap/coreos-bootimages \ -n openshift-machine-config-operator \ -o jsonpath='{.data.stream}' | jq \ -r '.architectures.aarch64.images.gcp' Example output "gcp": { "release": "415.92.202309142014-0", "project": "rhcos-cloud", "name": "rhcos-415-92-202309142014-0-gcp-aarch64" } Use the project and name parameters from the output to create the path to image field in your machine set. The path to the image should follow the following format: USD projects/<project>/global/images/<image_name> 4 Optional: Specify custom metadata in the form of a key:value pair. For example use cases, see the GCP documentation for setting custom metadata . 5 Specify an ARM64 supported machine type. For more information, refer to Tested instance types for GCP on 64-bit ARM infrastructures in "Additional resources". 6 Specify the name of the GCP project that you use for your cluster. 7 Specify the region, for example, us-central1 . Ensure that the zone you select offers 64-bit ARM machines. Verification View the list of compute machine sets by entering the following command: USD oc get machineset -n openshift-machine-api You can then see your created ARM64 machine set. 
Example output NAME DESIRED CURRENT READY AVAILABLE AGE <infrastructure_id>-gcp-arm64-machine-set-0 2 2 2 2 10m You can check that the nodes are ready and scheduable with the following command: USD oc get nodes Additional resources Tested instance types for GCP on 64-bit ARM infrastructures 4.5. Creating a cluster with multi-architecture compute machine on bare metal To create a cluster with multi-architecture compute machines on bare metal, you must have an existing single-architecture bare metal cluster. For more information on bare metal installations, see Installing a user provisioned cluster on bare metal . You can then add 64-bit ARM compute machines to your OpenShift Container Platform cluster on bare metal. Before you can add 64-bit ARM nodes to your bare metal cluster, you must upgrade your cluster to one that uses the multi-architecture payload. For more information on migrating to the multi-architecture payload, see Migrating to a cluster with multi-architecture compute machines . The following procedures explain how to create a RHCOS compute machine using an ISO image or network PXE booting. This will allow you to add ARM64 nodes to your bare metal cluster and deploy a cluster with multi-architecture compute machines. 4.5.1. Verifying cluster compatibility Before you can start adding compute nodes of different architectures to your cluster, you must verify that your cluster is multi-architecture compatible. Prerequisites You installed the OpenShift CLI ( oc ) Procedure You can check that your cluster uses the architecture payload by running the following command: USD oc adm release info -o jsonpath="{ .metadata.metadata}" Verification If you see the following output, then your cluster is using the multi-architecture payload: { "release.openshift.io/architecture": "multi", "url": "https://access.redhat.com/errata/<errata_version>" } You can then begin adding multi-arch compute nodes to your cluster. If you see the following output, then your cluster is not using the multi-architecture payload: { "url": "https://access.redhat.com/errata/<errata_version>" } Important To migrate your cluster so the cluster supports multi-architecture compute machines, follow the procedure in Migrating to a cluster with multi-architecture compute machines . 4.5.2. Creating RHCOS machines using an ISO image You can create more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines for your bare metal cluster by using an ISO image to create the machines. Prerequisites Obtain the URL of the Ignition config file for the compute machines for your cluster. You uploaded this file to your HTTP server during installation. You must have the OpenShift CLI ( oc ) installed. Procedure Extract the Ignition config file from the cluster by running the following command: USD oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign Upload the worker.ign Ignition config file you exported from your cluster to your HTTP server. Note the URLs of these files. You can validate that the ignition files are available on the URLs. 
The following example gets the Ignition config files for the compute node: USD curl -k http://<HTTP_server>/worker.ign You can access the ISO image for booting your new machine by running to following command: RHCOS_VHD_ORIGIN_URL=USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.<architecture>.artifacts.metal.formats.iso.disk.location') Use the ISO file to install RHCOS on more compute machines. Use the same method that you used when you created machines before you installed the cluster: Burn the ISO image to a disk and boot it directly. Use ISO redirection with a LOM interface. Boot the RHCOS ISO image without specifying any options, or interrupting the live boot sequence. Wait for the installer to boot into a shell prompt in the RHCOS live environment. Note You can interrupt the RHCOS installation boot process to add kernel arguments. However, for this ISO procedure you must use the coreos-installer command as outlined in the following steps, instead of adding kernel arguments. Run the coreos-installer command and specify the options that meet your installation requirements. At a minimum, you must specify the URL that points to the Ignition config file for the node type, and the device that you are installing to: USD sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2 1 You must run the coreos-installer command by using sudo , because the core user does not have the required root privileges to perform the installation. 2 The --ignition-hash option is required when the Ignition config file is obtained through an HTTP URL to validate the authenticity of the Ignition config file on the cluster node. <digest> is the Ignition config file SHA512 digest obtained in a preceding step. Note If you want to provide your Ignition config files through an HTTPS server that uses TLS, you can add the internal certificate authority (CA) to the system trust store before running coreos-installer . The following example initializes a bootstrap node installation to the /dev/sda device. The Ignition config file for the bootstrap node is obtained from an HTTP web server with the IP address 192.168.1.2: USD sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b Monitor the progress of the RHCOS installation on the console of the machine. Important Ensure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. Continue to create more compute machines for your cluster. 4.5.3. Creating RHCOS machines by PXE or iPXE booting You can create more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines for your bare metal cluster by using PXE or iPXE booting. Prerequisites Obtain the URL of the Ignition config file for the compute machines for your cluster. You uploaded this file to your HTTP server during installation. Obtain the URLs of the RHCOS ISO image, compressed metal BIOS, kernel , and initramfs files that you uploaded to your HTTP server during cluster installation. 
You have access to the PXE booting infrastructure that you used to create the machines for your OpenShift Container Platform cluster during installation. The machines must boot from their local disks after RHCOS is installed on them. If you use UEFI, you have access to the grub.conf file that you modified during OpenShift Container Platform installation. Procedure Confirm that your PXE or iPXE installation for the RHCOS images is correct. For PXE: 1 Specify the location of the live kernel file that you uploaded to your HTTP server. 2 Specify locations of the RHCOS files that you uploaded to your HTTP server. The initrd parameter value is the location of the live initramfs file, the coreos.inst.ignition_url parameter value is the location of the worker Ignition config file, and the coreos.live.rootfs_url parameter value is the location of the live rootfs file. The coreos.inst.ignition_url and coreos.live.rootfs_url parameters only support HTTP and HTTPS. Note This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the APPEND line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? . For iPXE ( x86_64 + aarch64 ): 1 Specify the locations of the RHCOS files that you uploaded to your HTTP server. The kernel parameter value is the location of the kernel file, the initrd=main argument is needed for booting on UEFI systems, the coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the worker Ignition config file. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the location of the initramfs file that you uploaded to your HTTP server. Note This configuration does not enable serial console access on machines with a graphical console To configure a different console, add one or more console= arguments to the kernel line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? and "Enabling the serial console for PXE and ISO installation" in the "Advanced RHCOS installation configuration" section. Note To network boot the CoreOS kernel on aarch64 architecture, you need to use a version of iPXE build with the IMAGE_GZIP option enabled. See IMAGE_GZIP option in iPXE . For PXE (with UEFI and GRUB as second stage) on aarch64 : 1 Specify the locations of the RHCOS files that you uploaded to your HTTP/TFTP server. The kernel parameter value is the location of the kernel file on your TFTP server. The coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the worker Ignition config file on your HTTP Server. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the location of the initramfs file that you uploaded to your TFTP server. 
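To make the preceding options concrete, the following sketch shows roughly what a GRUB menu entry for network booting an aarch64 compute node might look like when written to the grub.cfg file on the TFTP server. The file path, artifact file names, and the <HTTP_server> address are placeholder assumptions; adapt them to your own boot infrastructure.
cat <<'EOF' >> /var/lib/tftpboot/boot/grub2/grub.cfg
menuentry 'Install RHCOS worker' {
    # Kernel and initramfs are fetched from the TFTP server; rootfs and Ignition config from the HTTP server
    linux rhcos-live-kernel-aarch64 coreos.live.rootfs_url=http://<HTTP_server>/rhcos-live-rootfs.aarch64.img coreos.inst.ignition_url=http://<HTTP_server>/worker.ign ip=dhcp
    initrd rhcos-live-initramfs.aarch64.img
}
EOF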
Use the PXE or iPXE infrastructure to create the required compute machines for your cluster. 4.5.4. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.27.3 master-1 Ready master 63m v1.27.3 master-2 Ready master 64m v1.27.3 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. 
Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.27.3 master-1 Ready master 73m v1.27.3 master-2 Ready master 74m v1.27.3 worker-0 Ready worker 11m v1.27.3 worker-1 Ready worker 11m v1.27.3 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 4.6. Creating a cluster with multi-architecture compute machines on IBM Z and IBM LinuxONE with z/VM To create a cluster with multi-architecture compute machines on IBM Z(R) and IBM(R) LinuxONE ( s390x ) with z/VM, you must have an existing single-architecture x86_64 cluster. You can then add s390x compute machines to your OpenShift Container Platform cluster. Before you can add s390x nodes to your cluster, you must upgrade your cluster to one that uses the multi-architecture payload. For more information on migrating to the multi-architecture payload, see Migrating to a cluster with multi-architecture compute machines . The following procedures explain how to create a RHCOS compute machine using a z/VM instance. This will allow you to add s390x nodes to your cluster and deploy a cluster with multi-architecture compute machines. 4.6.1. Verifying cluster compatibility Before you can start adding compute nodes of different architectures to your cluster, you must verify that your cluster is multi-architecture compatible. Prerequisites You installed the OpenShift CLI ( oc ) Procedure You can check that your cluster uses the architecture payload by running the following command: USD oc adm release info -o jsonpath="{ .metadata.metadata}" Verification If you see the following output, then your cluster is using the multi-architecture payload: { "release.openshift.io/architecture": "multi", "url": "https://access.redhat.com/errata/<errata_version>" } You can then begin adding multi-arch compute nodes to your cluster. If you see the following output, then your cluster is not using the multi-architecture payload: { "url": "https://access.redhat.com/errata/<errata_version>" } Important To migrate your cluster so the cluster supports multi-architecture compute machines, follow the procedure in Migrating to a cluster with multi-architecture compute machines . 4.6.2. Creating RHCOS machines on IBM Z with z/VM You can create more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines running on IBM Z(R) with z/VM and attach them to your existing cluster. 
Prerequisites You have a domain name server (DNS) that can perform hostname and reverse lookup for the nodes. You have an HTTP or HTTPS server running on your provisioning machine that is accessible to the machines you create. Procedure Disable UDP aggregation. Currently, UDP aggregation is not supported on IBM Z(R) and is not automatically deactivated on multi-architecture compute clusters with an x86_64 control plane and additional s390x compute machines. To ensure that the additional compute nodes are added to the cluster correctly, you must manually disable UDP aggregation. Create a YAML file udp-aggregation-config.yaml with the following content: apiVersion: v1 kind: ConfigMap data: disable-udp-aggregation: "true" metadata: name: udp-aggregation-config namespace: openshift-network-operator Create the ConfigMap resource by running the following command: USD oc create -f udp-aggregation-config.yaml Extract the Ignition config file from the cluster by running the following command: USD oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign Upload the worker.ign Ignition config file you exported from your cluster to your HTTP server. Note the URL of this file. You can validate that the Ignition file is available on the URL. The following example gets the Ignition config file for the compute node: USD curl -k http://<HTTP_server>/worker.ign Download the RHEL live kernel , initramfs , and rootfs files by running the following commands: USD curl -LO USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' \ | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.kernel.location') USD curl -LO USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' \ | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.initramfs.location') USD curl -LO USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' \ | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.rootfs.location') Move the downloaded RHEL live kernel , initramfs , and rootfs files to an HTTP or HTTPS server that is accessible from the z/VM guest you want to add. Create a parameter file for the z/VM guest. The following parameters are specific for the virtual machine: Optional: To specify a static IP address, add an ip= parameter with the following entries, each separated by a colon: The IP address for the machine. An empty string. The gateway. The netmask. The machine host and domain name in the form hostname.domainname . Omit this value to let RHCOS decide. The network interface name. Omit this value to let RHCOS decide. The value none . For coreos.inst.ignition_url= , specify the URL to the worker.ign file. Only HTTP and HTTPS protocols are supported. For coreos.live.rootfs_url= , specify the matching rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported. For installations on DASD-type disks, complete the following tasks: For coreos.inst.install_dev= , specify /dev/dasda . Use rd.dasd= to specify the DASD where RHCOS is to be installed. Leave all other parameters unchanged.
The following is an example parameter file, additional-worker-dasd.parm : rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=/dev/dasda \ coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img \ coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/worker.ign \ ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 \ rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \ zfcp.allow_lun_scan=0 \ rd.dasd=0.0.3490 Write all options in the parameter file as a single line and make sure that you have no newline characters. For installations on FCP-type disks, complete the following tasks: Use rd.zfcp=<adapter>,<wwpn>,<lun> to specify the FCP disk where RHCOS is to be installed. For multipathing, repeat this step for each additional path. Note When you install with multiple paths, you must enable multipathing directly after the installation, not at a later point in time, as this can cause problems. Set the install device as: coreos.inst.install_dev=/dev/sda . Note If additional LUNs are configured with NPIV, FCP requires zfcp.allow_lun_scan=0 . If you must enable zfcp.allow_lun_scan=1 because you use a CSI driver, for example, you must configure your NPIV so that each node cannot access the boot partition of another node. Leave all other parameters unchanged. Important Additional postinstallation steps are required to fully enable multipathing. For more information, see "Enabling multipathing with kernel arguments on RHCOS" in Postinstallation machine configuration tasks . The following is an example parameter file, additional-worker-fcp.parm for a worker node with multipathing: rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=/dev/sda \ coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img \ coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/worker.ign \ ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 \ rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \ zfcp.allow_lun_scan=0 \ rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000 Write all options in the parameter file as a single line and make sure that you have no newline characters. Transfer the initramfs , kernel , parameter files, and RHCOS images to z/VM, for example, by using FTP. For details about how to transfer the files with FTP and boot from the virtual reader, see Installing under Z/VM . Punch the files to the virtual reader of the z/VM guest virtual machine. See PUNCH in IBM(R) Documentation. Tip You can use the CP PUNCH command or, if you use Linux, the vmur command to transfer files between two z/VM guest virtual machines. Log in to CMS on the bootstrap machine. IPL the bootstrap machine from the reader by running the following command: See IPL in IBM(R) Documentation. 4.6.3. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. 
Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.27.3 master-1 Ready master 63m v1.27.3 master-2 Ready master 64m v1.27.3 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. 
To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.27.3 master-1 Ready master 73m v1.27.3 master-2 Ready master 74m v1.27.3 worker-0 Ready worker 11m v1.27.3 worker-1 Ready worker 11m v1.27.3 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 4.7. Creating a cluster with multi-architecture compute machines on IBM Z and IBM LinuxONE with RHEL KVM To create a cluster with multi-architecture compute machines on IBM Z(R) and IBM(R) LinuxONE ( s390x ) with RHEL KVM, you must have an existing single-architecture x86_64 cluster. You can then add s390x compute machines to your OpenShift Container Platform cluster. Before you can add s390x nodes to your cluster, you must upgrade your cluster to one that uses the multi-architecture payload. For more information on migrating to the multi-architecture payload, see Migrating to a cluster with multi-architecture compute machines . The following procedures explain how to create a RHCOS compute machine using a RHEL KVM instance. This will allow you to add s390x nodes to your cluster and deploy a cluster with multi-architecture compute machines. 4.7.1. Verifying cluster compatibility Before you can start adding compute nodes of different architectures to your cluster, you must verify that your cluster is multi-architecture compatible. Prerequisites You installed the OpenShift CLI ( oc ) Procedure You can check that your cluster uses the architecture payload by running the following command: USD oc adm release info -o jsonpath="{ .metadata.metadata}" Verification If you see the following output, then your cluster is using the multi-architecture payload: { "release.openshift.io/architecture": "multi", "url": "https://access.redhat.com/errata/<errata_version>" } You can then begin adding multi-arch compute nodes to your cluster. If you see the following output, then your cluster is not using the multi-architecture payload: { "url": "https://access.redhat.com/errata/<errata_version>" } Important To migrate your cluster so the cluster supports multi-architecture compute machines, follow the procedure in Migrating to a cluster with multi-architecture compute machines . 4.7.2. Creating RHCOS machines using virt-install You can create more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines for your cluster by using virt-install . Prerequisites You have at least one LPAR running on RHEL 8.7 or later with KVM, referred to as RHEL KVM host in this procedure. The KVM/QEMU hypervisor is installed on the RHEL KVM host. You have a domain name server (DNS) that can perform hostname and reverse lookup for the nodes. An HTTP or HTTPS server is set up. Procedure Disable UDP aggregation. Currently, UDP aggregation is not supported on IBM Z(R) and is not automatically deactivated on multi-architecture compute clusters with an x86_64 control plane and additional s390x compute machines. To ensure that the additional compute nodes are added to the cluster correctly, you must manually disable UDP aggregation.
Create a YAML file udp-aggregation-config.yaml with the following content: apiVersion: v1 kind: ConfigMap data: disable-udp-aggregation: "true" metadata: name: udp-aggregation-config namespace: openshift-network-operator Create the ConfigMap resource by running the following command: USD oc create -f udp-aggregation-config.yaml Extract the Ignition config file from the cluster by running the following command: USD oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign Upload the worker.ign Ignition config file you exported from your cluster to your HTTP server. Note the URL of this file. You can validate that the Ignition file is available on the URL. The following example gets the Ignition config file for the compute node: USD curl -k http://<HTTP_server>/worker.ign Download the RHEL live kernel , initramfs , and rootfs files by running the following commands: USD curl -LO USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' \ | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.kernel.location') USD curl -LO USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' \ | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.initramfs.location') USD curl -LO USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' \ | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.rootfs.location') Move the downloaded RHEL live kernel , initramfs and rootfs files to an HTTP or HTTPS server before you launch virt-install . Create the new KVM guest nodes using the RHEL kernel , initramfs , and Ignition files; the new disk image; and adjusted parm line arguments. USD virt-install \ --connect qemu:///system \ --name <vm_name> \ --autostart \ --os-variant rhel9.2 \ 1 --cpu host \ --vcpus <vcpus> \ --memory <memory_mb> \ --disk <vm_name>.qcow2,size=<image_size> \ --network network=<virt_network_parm> \ --location <media_location>,kernel=<rhcos_kernel>,initrd=<rhcos_initrd> \ 2 --extra-args "rd.neednet=1" \ --extra-args "coreos.inst.install_dev=/dev/vda" \ --extra-args "coreos.inst.ignition_url=<worker_ign>" \ 3 --extra-args "coreos.live.rootfs_url=<rhcos_rootfs>" \ 4 --extra-args "ip=<ip>::<default_gateway>:<subnet_mask_length>:<hostname>::none:<MTU>" \ 5 --extra-args "nameserver=<dns>" \ --extra-args "console=ttysclp0" \ --noautoconsole \ --wait 1 For os-variant , specify the RHEL version for the RHCOS compute machine. rhel9.2 is the recommended version. To query the supported RHEL version of your operating system, run the following command: USD osinfo-query os -f short-id Note The os-variant is case sensitive. 2 For --location , specify the location of the kernel/initrd on the HTTP or HTTPS server. 3 For coreos.inst.ignition_url= , specify the worker.ign Ignition file for the machine role. Only HTTP and HTTPS protocols are supported. 4 For coreos.live.rootfs_url= , specify the matching rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported. 5 Optional: For hostname , specify the fully qualified hostname of the client machine. Note If you are using HAProxy as a load balancer, update your HAProxy rules for ingress-router-443 and ingress-router-80 in the /etc/haproxy/haproxy.cfg configuration file. Continue to create more compute machines for your cluster. 4.7.3. 
Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.27.3 master-1 Ready master 63m v1.27.3 master-2 Ready master 64m v1.27.3 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. 
Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.27.3 master-1 Ready master 73m v1.27.3 master-2 Ready master 74m v1.27.3 worker-0 Ready worker 11m v1.27.3 worker-1 Ready worker 11m v1.27.3 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 4.8. Creating a cluster with multi-architecture compute machines on IBM Power To create a cluster with multi-architecture compute machines on IBM Power(R) ( ppc64le ), you must have an existing single-architecture ( x86_64 ) cluster. You can then add ppc64le compute machines to your OpenShift Container Platform cluster. Important Before you can add ppc64le nodes to your cluster, you must upgrade your cluster to one that uses the multi-architecture payload. For more information on migrating to the multi-architecture payload, see Migrating to a cluster with multi-architecture compute machines . The following procedures explain how to create a RHCOS compute machine using an ISO image or network PXE booting. This will allow you to add ppc64le nodes to your cluster and deploy a cluster with multi-architecture compute machines. 4.8.1. Verifying cluster compatibility Before you can start adding compute nodes of different architectures to your cluster, you must verify that your cluster is multi-architecture compatible. Prerequisites You installed the OpenShift CLI ( oc ) Note When using multiple architectures, hosts for OpenShift Container Platform nodes must share the same storage layer. If they do not have the same storage layer, use a storage provider such as nfs-provisioner . Note You should limit the number of network hops between the compute and control plane as much as possible. Procedure You can check that your cluster uses the architecture payload by running the following command: USD oc adm release info -o jsonpath="{ .metadata.metadata}" Verification If you see the following output, then your cluster is using the multi-architecture payload: { "release.openshift.io/architecture": "multi", "url": "https://access.redhat.com/errata/<errata_version>" } You can then begin adding multi-arch compute nodes to your cluster. 
If you see the following output, then your cluster is not using the multi-architecture payload: { "url": "https://access.redhat.com/errata/<errata_version>" } Important To migrate your cluster so the cluster supports multi-architecture compute machines, follow the procedure in Migrating to a cluster with multi-architecture compute machines . 4.8.2. Creating RHCOS machines using an ISO image You can create more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines for your cluster by using an ISO image to create the machines. Prerequisites Obtain the URL of the Ignition config file for the compute machines for your cluster. You uploaded this file to your HTTP server during installation. You must have the OpenShift CLI ( oc ) installed. Procedure Extract the Ignition config file from the cluster by running the following command: USD oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign Upload the worker.ign Ignition config file you exported from your cluster to your HTTP server. Note the URL of this file. You can validate that the Ignition file is available on the URL. The following example gets the Ignition config file for the compute node: USD curl -k http://<HTTP_server>/worker.ign You can access the ISO image for booting your new machine by running the following command: RHCOS_VHD_ORIGIN_URL=USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.<architecture>.artifacts.metal.formats.iso.disk.location') Use the ISO file to install RHCOS on more compute machines. Use the same method that you used when you created machines before you installed the cluster: Burn the ISO image to a disk and boot it directly. Use ISO redirection with a LOM interface. Boot the RHCOS ISO image without specifying any options, or interrupting the live boot sequence. Wait for the installer to boot into a shell prompt in the RHCOS live environment. Note You can interrupt the RHCOS installation boot process to add kernel arguments. However, for this ISO procedure you must use the coreos-installer command as outlined in the following steps, instead of adding kernel arguments. Run the coreos-installer command and specify the options that meet your installation requirements. At a minimum, you must specify the URL that points to the Ignition config file for the node type, and the device that you are installing to: USD sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2 1 You must run the coreos-installer command by using sudo , because the core user does not have the required root privileges to perform the installation. 2 The --ignition-hash option is required when the Ignition config file is obtained through an HTTP URL to validate the authenticity of the Ignition config file on the cluster node. <digest> is the Ignition config file SHA512 digest obtained in a preceding step. Note If you want to provide your Ignition config files through an HTTPS server that uses TLS, you can add the internal certificate authority (CA) to the system trust store before running coreos-installer . The following example initializes a bootstrap node installation to the /dev/sda device.
The Ignition config file for the bootstrap node is obtained from an HTTP web server with the IP address 192.168.1.2: USD sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b Monitor the progress of the RHCOS installation on the console of the machine. Important Ensure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. Continue to create more compute machines for your cluster. 4.8.3. Creating RHCOS machines by PXE or iPXE booting You can create more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines for your bare metal cluster by using PXE or iPXE booting. Prerequisites Obtain the URL of the Ignition config file for the compute machines for your cluster. You uploaded this file to your HTTP server during installation. Obtain the URLs of the RHCOS ISO image, compressed metal BIOS, kernel , and initramfs files that you uploaded to your HTTP server during cluster installation. You have access to the PXE booting infrastructure that you used to create the machines for your OpenShift Container Platform cluster during installation. The machines must boot from their local disks after RHCOS is installed on them. If you use UEFI, you have access to the grub.conf file that you modified during OpenShift Container Platform installation. Procedure Confirm that your PXE or iPXE installation for the RHCOS images is correct. For PXE: 1 Specify the location of the live kernel file that you uploaded to your HTTP server. 2 Specify locations of the RHCOS files that you uploaded to your HTTP server. The initrd parameter value is the location of the live initramfs file, the coreos.inst.ignition_url parameter value is the location of the worker Ignition config file, and the coreos.live.rootfs_url parameter value is the location of the live rootfs file. The coreos.inst.ignition_url and coreos.live.rootfs_url parameters only support HTTP and HTTPS. Note This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the APPEND line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? . For iPXE ( x86_64 + ppc64le ): 1 Specify the locations of the RHCOS files that you uploaded to your HTTP server. The kernel parameter value is the location of the kernel file, the initrd=main argument is needed for booting on UEFI systems, the coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the worker Ignition config file. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the location of the initramfs file that you uploaded to your HTTP server. Note This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the kernel line.
For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? and "Enabling the serial console for PXE and ISO installation" in the "Advanced RHCOS installation configuration" section. Note To network boot the CoreOS kernel on ppc64le architecture, you need to use a version of iPXE built with the IMAGE_GZIP option enabled. See IMAGE_GZIP option in iPXE . For PXE (with UEFI and GRUB as second stage) on ppc64le : 1 Specify the locations of the RHCOS files that you uploaded to your HTTP/TFTP server. The kernel parameter value is the location of the kernel file on your TFTP server. The coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the worker Ignition config file on your HTTP server. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the location of the initramfs file that you uploaded to your TFTP server. Use the PXE or iPXE infrastructure to create the required compute machines for your cluster. 4.8.4. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.27.3 master-1 Ready master 63m v1.27.3 master-2 Ready master 64m v1.27.3 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters.
Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. 
Verify this by running the following command: USD oc get nodes -o wide Example output NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME worker-0-ppc64le Ready worker 42d v1.28.2+e3ba6d9 192.168.200.21 <none> Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.ppc64le cri-o://1.28.1-3.rhaos4.15.gitb36169e.el9 worker-1-ppc64le Ready worker 42d v1.28.2+e3ba6d9 192.168.200.20 <none> Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.ppc64le cri-o://1.28.1-3.rhaos4.15.gitb36169e.el9 master-0-x86 Ready control-plane,master 75d v1.28.2+e3ba6d9 10.248.0.38 10.248.0.38 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.28.1-3.rhaos4.15.gitb36169e.el9 master-1-x86 Ready control-plane,master 75d v1.28.2+e3ba6d9 10.248.0.39 10.248.0.39 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.28.1-3.rhaos4.15.gitb36169e.el9 master-2-x86 Ready control-plane,master 75d v1.28.2+e3ba6d9 10.248.0.40 10.248.0.40 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.28.1-3.rhaos4.15.gitb36169e.el9 worker-0-x86 Ready worker 75d v1.28.2+e3ba6d9 10.248.0.43 10.248.0.43 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.28.1-3.rhaos4.15.gitb36169e.el9 worker-1-x86 Ready worker 75d v1.28.2+e3ba6d9 10.248.0.44 10.248.0.44 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.28.1-3.rhaos4.15.gitb36169e.el9 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 4.9. Managing your cluster with multi-architecture compute machines 4.9.1. Scheduling workloads on clusters with multi-architecture compute machines Deploying a workload on a cluster with compute nodes of different architectures requires attention and monitoring of your cluster. There might be further actions you need to take in order to successfully place pods on the nodes of your cluster. For more detailed information on node affinity, scheduling, taints and tolerations, see the following documentation: Controlling pod placement using node taints . Controlling pod placement on nodes using node affinity Controlling pod placement using the scheduler 4.9.1.1. Sample multi-architecture node workload deployments Before you schedule workloads on a cluster with compute nodes of different architectures, consider the following use cases: Using node affinity to schedule workloads on a node To allow a workload to be scheduled only on nodes with architectures that are supported by its images, set the spec.affinity.nodeAffinity field in your pod's template specification. Example deployment with the nodeAffinity set to certain architectures apiVersion: apps/v1 kind: Deployment metadata: # ... spec: # ... template: # ... spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/arch operator: In values: 1 - amd64 - arm64 1 Specify the supported architectures. Valid values include amd64 , arm64 , or both values. Tainting every node for a specific architecture You can taint a node to prevent workloads that are not compatible with its architecture from being scheduled on that node.
If your cluster uses a MachineSet object, you can add parameters to the .spec.template.spec.taints field to prevent workloads from being scheduled on nodes with unsupported architectures. Before you can taint a node, you must scale down the MachineSet object or remove available machines. You can scale down the machine set by using one of the following commands: USD oc scale --replicas=0 machineset <machineset> -n openshift-machine-api Or: USD oc edit machineset <machineset> -n openshift-machine-api For more information on scaling machine sets, see "Modifying a compute machine set". Example MachineSet with a taint set apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: # ... spec: # ... template: # ... spec: # ... taints: - effect: NoSchedule key: multi-arch.openshift.io/arch value: arm64 You can also set a taint on a specific node by running the following command: USD oc adm taint nodes <node-name> multi-arch.openshift.io/arch=arm64:NoSchedule Creating a default toleration You can annotate a namespace so all of the workloads get the same default toleration by running the following command: USD oc annotate namespace my-namespace \ 'scheduler.alpha.kubernetes.io/defaultTolerations'='[{"operator": "Exists", "effect": "NoSchedule", "key": "multi-arch.openshift.io/arch"}]' Tolerating architecture taints in workloads On a node with a defined taint, workloads will not be scheduled on that node. However, you can allow them to be scheduled by setting a toleration in the pod's specification. Example deployment with a toleration apiVersion: apps/v1 kind: Deployment metadata: # ... spec: # ... template: # ... spec: tolerations: - key: "multi-arch.openshift.io/arch" value: "arm64" operator: "Equal" effect: "NoSchedule" This toleration allows the example deployment to be scheduled on nodes that have the multi-arch.openshift.io/arch=arm64 taint. Using node affinity with taints and tolerations When a scheduler computes the set of nodes to schedule a pod, tolerations can broaden the set while node affinity restricts the set. If you set a taint on the nodes of a specific architecture, the following example toleration is required for scheduling pods. Example deployment with a node affinity and toleration set. apiVersion: apps/v1 kind: Deployment metadata: # ... spec: # ... template: # ... spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/arch operator: In values: - amd64 - arm64 tolerations: - key: "multi-arch.openshift.io/arch" value: "arm64" operator: "Equal" effect: "NoSchedule" Additional resources Modifying a compute machine set 4.9.2. Importing manifest lists in image streams on your multi-architecture compute machines On an OpenShift Container Platform 4.14 cluster with multi-architecture compute machines, the image streams in the cluster do not import manifest lists automatically. You must manually change the default importMode option to the PreserveOriginal option in order to import the manifest list. Prerequisites You installed the OpenShift Container Platform CLI ( oc ). Procedure The following example command shows how to patch the ImageStream cli-artifacts so that the cli-artifacts:latest image stream tag is imported as a manifest list. USD oc patch is/cli-artifacts -n openshift -p '{"spec":{"tags":[{"name":"latest","importPolicy":{"importMode":"PreserveOriginal"}}]}}' Verification You can check that the manifest lists imported properly by inspecting the image stream tag.
The following command will list the individual architecture manifests for a particular tag. USD oc get istag cli-artifacts:latest -n openshift -oyaml If the dockerImageManifests object is present, then the manifest list import was successful. Example output of the dockerImageManifests object dockerImageManifests: - architecture: amd64 digest: sha256:16d4c96c52923a9968fbfa69425ec703aff711f1db822e4e9788bf5d2bee5d77 manifestSize: 1252 mediaType: application/vnd.docker.distribution.manifest.v2+json os: linux - architecture: arm64 digest: sha256:6ec8ad0d897bcdf727531f7d0b716931728999492709d19d8b09f0d90d57f626 manifestSize: 1252 mediaType: application/vnd.docker.distribution.manifest.v2+json os: linux - architecture: ppc64le digest: sha256:65949e3a80349cdc42acd8c5b34cde6ebc3241eae8daaeea458498fedb359a6a manifestSize: 1252 mediaType: application/vnd.docker.distribution.manifest.v2+json os: linux - architecture: s390x digest: sha256:75f4fa21224b5d5d511bea8f92dfa8e1c00231e5c81ab95e83c3013d245d1719 manifestSize: 1252 mediaType: application/vnd.docker.distribution.manifest.v2+json os: linux
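If other image streams in the cluster must also carry manifest lists, the same patch can be repeated for each of them. The following is a small sketch only, reusing the oc patch command from this procedure: the image stream names and the latest tag are examples and must be replaced with the streams and tags you actually use, and the final command simply filters the architectures out of the image stream tag to confirm that more than one manifest was imported.
# Sketch: apply the PreserveOriginal import mode to several image stream tags (names are examples).
for is in cli-artifacts must-gather; do
  oc patch is/${is} -n openshift -p '{"spec":{"tags":[{"name":"latest","importPolicy":{"importMode":"PreserveOriginal"}}]}}'
done
# Spot-check the recorded architectures for one of the tags.
oc get istag cli-artifacts:latest -n openshift -oyaml | grep 'architecture:'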
[ "oc adm release info -o jsonpath=\"{ .metadata.metadata}\"", "{ \"release.openshift.io/architecture\": \"multi\", \"url\": \"https://access.redhat.com/errata/<errata_version>\" }", "{ \"url\": \"https://access.redhat.com/errata/<errata_version>\" }", "az login", "az storage account create -n USD{STORAGE_ACCOUNT_NAME} -g USD{RESOURCE_GROUP} -l westus --sku Standard_LRS 1", "az storage container create -n USD{CONTAINER_NAME} --account-name USD{STORAGE_ACCOUNT_NAME}", "RHCOS_VHD_ORIGIN_URL=USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.aarch64.\"rhel-coreos-extensions\".\"azure-disk\".url')", "BLOB_NAME=rhcos-USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.aarch64.\"rhel-coreos-extensions\".\"azure-disk\".release')-azure.aarch64.vhd", "end=`date -u -d \"30 minutes\" '+%Y-%m-%dT%H:%MZ'`", "sas=`az storage container generate-sas -n USD{CONTAINER_NAME} --account-name USD{STORAGE_ACCOUNT_NAME} --https-only --permissions dlrw --expiry USDend -o tsv`", "az storage blob copy start --account-name USD{STORAGE_ACCOUNT_NAME} --sas-token \"USDsas\" --source-uri \"USD{RHCOS_VHD_ORIGIN_URL}\" --destination-blob \"USD{BLOB_NAME}\" --destination-container USD{CONTAINER_NAME}", "az storage blob show -c USD{CONTAINER_NAME} -n USD{BLOB_NAME} --account-name USD{STORAGE_ACCOUNT_NAME} | jq .properties.copy", "{ \"completionTime\": null, \"destinationSnapshot\": null, \"id\": \"1fd97630-03ca-489a-8c4e-cfe839c9627d\", \"incrementalCopy\": null, \"progress\": \"17179869696/17179869696\", \"source\": \"https://rhcos.blob.core.windows.net/imagebucket/rhcos-411.86.202207130959-0-azure.aarch64.vhd\", \"status\": \"success\", 1 \"statusDescription\": null }", "az sig create --resource-group USD{RESOURCE_GROUP} --gallery-name USD{GALLERY_NAME}", "az sig image-definition create --resource-group USD{RESOURCE_GROUP} --gallery-name USD{GALLERY_NAME} --gallery-image-definition rhcos-arm64 --publisher RedHat --offer arm --sku arm64 --os-type linux --architecture Arm64 --hyper-v-generation V2", "RHCOS_VHD_URL=USD(az storage blob url --account-name USD{STORAGE_ACCOUNT_NAME} -c USD{CONTAINER_NAME} -n \"USD{BLOB_NAME}\" -o tsv)", "az sig image-version create --resource-group USD{RESOURCE_GROUP} --gallery-name USD{GALLERY_NAME} --gallery-image-definition rhcos-arm64 --gallery-image-version 1.0.0 --os-vhd-storage-account USD{STORAGE_ACCOUNT_NAME} --os-vhd-uri USD{RHCOS_VHD_URL}", "az sig image-version show -r USDGALLERY_NAME -g USDRESOURCE_GROUP -i rhcos-arm64 -e 1.0.0", "/resourceGroups/USD{RESOURCE_GROUP}/providers/Microsoft.Compute/galleries/USD{GALLERY_NAME}/images/rhcos-arm64/versions/1.0.0", "oc create -f arm64-machine-set-0.yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker name: <infrastructure_id>-arm64-machine-set-0 namespace: openshift-machine-api spec: replicas: 2 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-arm64-machine-set-0 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker 
machine.openshift.io/cluster-api-machineset: <infrastructure_id>-arm64-machine-set-0 spec: lifecycleHooks: {} metadata: {} providerSpec: value: acceleratedNetworking: true apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: offer: \"\" publisher: \"\" resourceID: /resourceGroups/USD{RESOURCE_GROUP}/providers/Microsoft.Compute/galleries/USD{GALLERY_NAME}/images/rhcos-arm64/versions/1.0.0 1 sku: \"\" version: \"\" kind: AzureMachineProviderSpec location: <region> managedIdentity: <infrastructure_id>-identity networkResourceGroup: <infrastructure_id>-rg osDisk: diskSettings: {} diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: <infrastructure_id> resourceGroup: <infrastructure_id>-rg subnet: <infrastructure_id>-worker-subnet userDataSecret: name: worker-user-data vmSize: Standard_D4ps_v5 2 vnet: <infrastructure_id>-vnet zone: \"<zone>\"", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE <infrastructure_id>-arm64-machine-set-0 2 2 2 2 10m", "oc get nodes", "oc adm release info -o jsonpath=\"{ .metadata.metadata}\"", "{ \"release.openshift.io/architecture\": \"multi\", \"url\": \"https://access.redhat.com/errata/<errata_version>\" }", "{ \"url\": \"https://access.redhat.com/errata/<errata_version>\" }", "oc create -f aws-arm64-machine-set-0.yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-aws-arm64-machine-set-0 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> 5 machine.openshift.io/cluster-api-machine-type: <role> 6 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> 7 spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" providerSpec: value: ami: id: ami-02a574449d4f4d280 8 apiVersion: awsproviderconfig.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile 9 instanceType: m6g.xlarge 10 kind: AWSMachineProviderConfig placement: availabilityZone: us-east-1a 11 region: <region> 12 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-worker-sg 13 subnet: filters: - name: tag:Name values: - <infrastructure_id>-private-<zone> tags: - name: kubernetes.io/cluster/<infrastructure_id> 14 value: owned - name: <custom_tag_name> value: <custom_tag_value> userDataSecret: name: worker-user-data", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get configmap/coreos-bootimages -n openshift-machine-config-operator -o jsonpath='{.data.stream}' | jq -r '.architectures.<arch>.images.aws.regions.\"<region>\".image'", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE <infrastructure_id>-aws-arm64-machine-set-0 2 2 2 2 10m", "oc get nodes", "oc adm release info -o jsonpath=\"{ .metadata.metadata}\"", "{ \"release.openshift.io/architecture\": \"multi\", \"url\": \"https://access.redhat.com/errata/<errata_version>\" }", "{ \"url\": 
\"https://access.redhat.com/errata/<errata_version>\" }", "oc create -f gcp-arm64-machine-set-0.yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-w-a namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" providerSpec: value: apiVersion: gcpprovider.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 3 labels: null sizeGb: 128 type: pd-ssd gcpMetadata: 4 - key: <custom_metadata_key> value: <custom_metadata_value> kind: GCPMachineProviderSpec machineType: n1-standard-4 5 metadata: creationTimestamp: null networkInterfaces: - network: <infrastructure_id>-network subnetwork: <infrastructure_id>-worker-subnet projectID: <project_name> 6 region: us-central1 7 serviceAccounts: - email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform tags: - <infrastructure_id>-worker userDataSecret: name: worker-user-data zone: us-central1-a", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get configmap/coreos-bootimages -n openshift-machine-config-operator -o jsonpath='{.data.stream}' | jq -r '.architectures.aarch64.images.gcp'", "\"gcp\": { \"release\": \"415.92.202309142014-0\", \"project\": \"rhcos-cloud\", \"name\": \"rhcos-415-92-202309142014-0-gcp-aarch64\" }", "projects/<project>/global/images/<image_name>", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE <infrastructure_id>-gcp-arm64-machine-set-0 2 2 2 2 10m", "oc get nodes", "oc adm release info -o jsonpath=\"{ .metadata.metadata}\"", "{ \"release.openshift.io/architecture\": \"multi\", \"url\": \"https://access.redhat.com/errata/<errata_version>\" }", "{ \"url\": \"https://access.redhat.com/errata/<errata_version>\" }", "oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign", "curl -k http://<HTTP_server>/worker.ign", "RHCOS_VHD_ORIGIN_URL=USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.<architecture>.artifacts.metal.formats.iso.disk.location')", "sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2", "sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b", "DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign 
coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img 2", "kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign 1 2 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 3 boot", "menuentry 'Install CoreOS' { linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign 1 2 initrd rhcos-<version>-live-initramfs.<architecture>.img 3 }", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.27.3 master-1 Ready master 63m v1.27.3 master-2 Ready master 64m v1.27.3", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.27.3 master-1 Ready master 73m v1.27.3 master-2 Ready master 74m v1.27.3 worker-0 Ready worker 11m v1.27.3 worker-1 Ready worker 11m v1.27.3", "oc adm release info -o jsonpath=\"{ .metadata.metadata}\"", "{ \"release.openshift.io/architecture\": \"multi\", \"url\": \"https://access.redhat.com/errata/<errata_version>\" }", "{ \"url\": \"https://access.redhat.com/errata/<errata_version>\" }", "apiVersion: v1 kind: ConfigMap data: disable-udp-aggregation: \"true\" metadata: name: udp-aggregation-config namespace: openshift-network-operator", "oc create -f udp-aggregation-config.yaml", "oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign", "curl -k http://<HTTP_server>/worker.ign", "curl -LO USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.kernel.location')", "curl -LO USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.initramfs.location')", "curl -LO USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.rootfs.location')", "rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/dasda coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/worker.ign ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 
zfcp.allow_lun_scan=0 rd.dasd=0.0.3490", "rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/sda coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/worker.ign ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 zfcp.allow_lun_scan=0 rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000", "ipl c", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.27.3 master-1 Ready master 63m v1.27.3 master-2 Ready master 64m v1.27.3", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.27.3 master-1 Ready master 73m v1.27.3 master-2 Ready master 74m v1.27.3 worker-0 Ready worker 11m v1.27.3 worker-1 Ready worker 11m v1.27.3", "oc adm release info -o jsonpath=\"{ .metadata.metadata}\"", "{ \"release.openshift.io/architecture\": \"multi\", \"url\": \"https://access.redhat.com/errata/<errata_version>\" }", "{ \"url\": \"https://access.redhat.com/errata/<errata_version>\" }", "apiVersion: v1 kind: ConfigMap data: disable-udp-aggregation: \"true\" metadata: name: udp-aggregation-config namespace: openshift-network-operator", "oc create -f udp-aggregation-config.yaml", "oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign", "curl -k http://<HTTP_server>/worker.ign", "curl -LO USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.kernel.location')", "curl -LO USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.initramfs.location')", "curl -LO USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.rootfs.location')", "virt-install --connect qemu:///system --name <vm_name> --autostart --os-variant rhel9.2 \\ 1 --cpu host --vcpus <vcpus> --memory <memory_mb> --disk <vm_name>.qcow2,size=<image_size> --network network=<virt_network_parm> --location <media_location>,kernel=<rhcos_kernel>,initrd=<rhcos_initrd> \\ 2 --extra-args \"rd.neednet=1\" --extra-args \"coreos.inst.install_dev=/dev/vda\" --extra-args \"coreos.inst.ignition_url=<worker_ign>\" \\ 3 --extra-args \"coreos.live.rootfs_url=<rhcos_rootfs>\" \\ 4 --extra-args 
\"ip=<ip>::<default_gateway>:<subnet_mask_length>:<hostname>::none:<MTU>\" \\ 5 --extra-args \"nameserver=<dns>\" --extra-args \"console=ttysclp0\" --noautoconsole --wait", "osinfo-query os -f short-id", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.27.3 master-1 Ready master 63m v1.27.3 master-2 Ready master 64m v1.27.3", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.27.3 master-1 Ready master 73m v1.27.3 master-2 Ready master 74m v1.27.3 worker-0 Ready worker 11m v1.27.3 worker-1 Ready worker 11m v1.27.3", "oc adm release info -o jsonpath=\"{ .metadata.metadata}\"", "{ \"release.openshift.io/architecture\": \"multi\", \"url\": \"https://access.redhat.com/errata/<errata_version>\" }", "{ \"url\": \"https://access.redhat.com/errata/<errata_version>\" }", "oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign", "curl -k http://<HTTP_server>/worker.ign", "RHCOS_VHD_ORIGIN_URL=USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.<architecture>.artifacts.metal.formats.iso.disk.location')", "sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2", "sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b", "DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img 2", "kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign 1 2 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 3 boot", "menuentry 'Install CoreOS' { linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign 1 2 initrd rhcos-<version>-live-initramfs.<architecture>.img 3 }", "oc get nodes", "NAME STATUS ROLES AGE VERSION 
master-0 Ready master 63m v1.27.3 master-1 Ready master 63m v1.27.3 master-2 Ready master 64m v1.27.3", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes -o wide", "NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME worker-0-ppc64le Ready worker 42d v1.28.2+e3ba6d9 192.168.200.21 <none> Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.ppc64le cri-o://1.28.1-3.rhaos4.15.gitb36169e.el9 worker-1-ppc64le Ready worker 42d v1.28.2+e3ba6d9 192.168.200.20 <none> Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.ppc64le cri-o://1.28.1-3.rhaos4.15.gitb36169e.el9 master-0-x86 Ready control-plane,master 75d v1.28.2+e3ba6d9 10.248.0.38 10.248.0.38 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.28.1-3.rhaos4.15.gitb36169e.el9 master-1-x86 Ready control-plane,master 75d v1.28.2+e3ba6d9 10.248.0.39 10.248.0.39 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.28.1-3.rhaos4.15.gitb36169e.el9 master-2-x86 Ready control-plane,master 75d v1.28.2+e3ba6d9 10.248.0.40 10.248.0.40 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.28.1-3.rhaos4.15.gitb36169e.el9 worker-0-x86 Ready worker 75d v1.28.2+e3ba6d9 10.248.0.43 10.248.0.43 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.28.1-3.rhaos4.15.gitb36169e.el9 worker-1-x86 Ready worker 75d v1.28.2+e3ba6d9 10.248.0.44 10.248.0.44 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.28.1-3.rhaos4.15.gitb36169e.el9", "apiVersion: apps/v1 kind: Deployment metadata: # spec: # template: # spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/arch operator: In values: 1 - amd64 - arm64", "oc scale --replicas=0 machineset <machineset> -n openshift-machine-api", "oc edit machineset <machineset> -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: # spec: # template: # spec: # taints: - effect: NoSchedule key: multi-arch.openshift.io/arch value: arm64", "oc adm taint nodes <node-name> multi-arch.openshift.io/arch=arm64:NoSchedule", "oc annotate namespace my-namespace 'scheduler.alpha.kubernetes.io/defaultTolerations'='[{\"operator\": \"Exists\", \"effect\": \"NoSchedule\", \"key\": \"multi-arch.openshift.io/arch\"}]'", "apiVersion: apps/v1 kind: Deployment metadata: # spec: # template: # spec: tolerations: - key: \"multi-arch.openshift.io/arch\" value: \"arm64\" operator: \"Equal\" effect: \"NoSchedule\"", 
"apiVersion: apps/v1 kind: Deployment metadata: # spec: # template: # spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/arch operator: In values: - amd64 - arm64 tolerations: - key: \"multi-arch.openshift.io/arch\" value: \"arm64\" operator: \"Equal\" effect: \"NoSchedule\"", "oc patch is/cli-artifacts -n openshift -p '{\"spec\":{\"tags\":[{\"name\":\"latest\",\"importPolicy\":{\"importMode\":\"PreserveOriginal\"}}]}}'", "oc get istag cli-artifacts:latest -n openshift -oyaml", "dockerImageManifests: - architecture: amd64 digest: sha256:16d4c96c52923a9968fbfa69425ec703aff711f1db822e4e9788bf5d2bee5d77 manifestSize: 1252 mediaType: application/vnd.docker.distribution.manifest.v2+json os: linux - architecture: arm64 digest: sha256:6ec8ad0d897bcdf727531f7d0b716931728999492709d19d8b09f0d90d57f626 manifestSize: 1252 mediaType: application/vnd.docker.distribution.manifest.v2+json os: linux - architecture: ppc64le digest: sha256:65949e3a80349cdc42acd8c5b34cde6ebc3241eae8daaeea458498fedb359a6a manifestSize: 1252 mediaType: application/vnd.docker.distribution.manifest.v2+json os: linux - architecture: s390x digest: sha256:75f4fa21224b5d5d511bea8f92dfa8e1c00231e5c81ab95e83c3013d245d1719 manifestSize: 1252 mediaType: application/vnd.docker.distribution.manifest.v2+json os: linux" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/postinstallation_configuration/configuring-multi-architecture-compute-machines-on-an-openshift-cluster
5.24. chkconfig
5.24.1. RHBA-2012:0873 - chkconfig bug fix update

Updated chkconfig packages that fix multiple bugs are now available for Red Hat Enterprise Linux 6. The basic system utility chkconfig updates and queries runlevel information for system services.

Bug Fixes

BZ# 696305
When installing multiple Linux Standard Base (LSB) services which only had LSB headers, the stop priority of the related LSB init scripts could have been miscalculated and set to "-1". With this update, the LSB init script ordering mechanism has been fixed, and the stop priority of the LSB init scripts is now set correctly.

BZ# 706854
When an LSB init script requiring the "$local_fs" facility was installed with the "install_initd" command, the installation of the script could fail under certain circumstances. With this update, the underlying code has been modified to ignore this requirement because the "$local_fs" facility is always implicitly provided. LSB init scripts with requirements on "$local_fs" are now installed correctly.

BZ# 771454
If an LSB init script contained "Required-Start" dependencies, but the LSB service installed was not configured to start in any runlevel, the dependencies could have been applied incorrectly. Consequently, the installation of the LSB service failed silently. With this update, chkconfig no longer strictly enforces "Required-Start" dependencies for installation if the service is not configured to start in any runlevel. LSB services are now installed as expected in this scenario.

BZ# 771741
Previously, chkconfig did not handle dependencies between LSB init scripts correctly. Therefore, if an LSB service was enabled, LSB services that were depending on it could have been set up incorrectly. With this update, chkconfig has been modified to determine dependencies properly, and dependent LSB services are now set up as expected in this scenario.

All users of chkconfig are advised to upgrade to these updated packages, which fix these bugs.
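For reference, the dependency information that chkconfig and the "install_initd" command evaluate comes from the LSB comment header at the top of an init script. The header below is a minimal illustrative sketch, not taken from the erratum: the service name "exampled", the "$network" facility, and the runlevel choices are assumptions added for the example, while "Provides", "Required-Start", and "$local_fs" are the fields and facility discussed in the bug fixes above.

#!/bin/sh
### BEGIN INIT INFO
# Provides:          exampled
# Required-Start:    $local_fs $network
# Required-Stop:     $local_fs $network
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Hypothetical LSB service used to illustrate dependency headers
### END INIT INFO
# The service's actual start/stop/status handling would follow here.

When such a script is installed, install_initd orders it against the "Provides" entries of the other installed services. With the fixes described above, a "Required-Start" on "$local_fs" no longer blocks installation (the facility is treated as implicitly provided), and "Required-Start" dependencies are not strictly enforced for services that are not enabled in any runlevel.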
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/chkconfig