title (string, 4-168 chars) | content (string, 7-1.74M chars) | commands (sequence, 1-5.62k items, nullable) | url (string, 79-342 chars) |
---|---|---|---|
Chapter 13. Configuring interface-level network sysctls | Chapter 13. Configuring interface-level network sysctls In Linux, sysctl allows an administrator to modify kernel parameters at runtime. You can modify interface-level network sysctls using the tuning Container Network Interface (CNI) meta plugin. The tuning CNI meta plugin operates in a chain with a main CNI plugin. The main CNI plugin assigns the interface and passes it to the tuning CNI meta plugin at runtime. You can change some sysctls and several interface attributes (promiscuous mode, all-multicast mode, MTU, and MAC address) in the network namespace by using the tuning CNI meta plugin. In the tuning CNI meta plugin configuration, the interface name is represented by the IFNAME token, and is replaced with the actual name of the interface at runtime. Note In OpenShift Container Platform, the tuning CNI meta plugin only supports changing interface-level network sysctls. 13.1. Configuring the tuning CNI The following procedure configures the tuning CNI to change the interface-level network net.ipv4.conf.IFNAME.accept_redirects sysctl. This example enables accepting ICMP redirect packets. Procedure Create a network attachment definition, such as tuning-example.yaml , with the following content: apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: <name> 1 namespace: default 2 spec: config: '{ "cniVersion": "0.4.0", 3 "name": "<name>", 4 "plugins": [{ "type": "<main_CNI_plugin>" 5 }, { "type": "tuning", 6 "sysctl": { "net.ipv4.conf.IFNAME.accept_redirects": "1" 7 } } ] }' 1 Specifies the name for the additional network attachment to create. The name must be unique within the specified namespace. 2 Specifies the namespace that the object is associated with. 3 Specifies the CNI specification version. 4 Specifies the name for the configuration. It is recommended to match the configuration name to the name value of the network attachment definition. 5 Specifies the name of the main CNI plugin to configure. 6 Specifies the name of the CNI meta plugin. 7 Specifies the sysctl to set. An example yaml file is shown here: apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: tuningnad namespace: default spec: config: '{ "cniVersion": "0.4.0", "name": "tuningnad", "plugins": [{ "type": "bridge" }, { "type": "tuning", "sysctl": { "net.ipv4.conf.IFNAME.accept_redirects": "1" } } ] }' Apply the yaml by running the following command: USD oc apply -f tuning-example.yaml Example output networkattachmentdefinition.k8s.cni.cncf.io/tuningnad created Create a pod, such as examplepod.yaml , with the network attachment definition, similar to the following: apiVersion: v1 kind: Pod metadata: name: tunepod namespace: default annotations: k8s.v1.cni.cncf.io/networks: tuningnad 1 spec: containers: - name: podexample image: centos command: ["/bin/bash", "-c", "sleep INF"] securityContext: runAsUser: 2000 2 runAsGroup: 3000 3 allowPrivilegeEscalation: false 4 capabilities: 5 drop: ["ALL"] securityContext: runAsNonRoot: true 6 seccompProfile: 7 type: RuntimeDefault 1 Specify the name of the configured NetworkAttachmentDefinition . 2 runAsUser controls which user ID the container is run with. 3 runAsGroup controls which primary group ID the container is run with. 4 allowPrivilegeEscalation determines if a pod can request to allow privilege escalation. If unspecified, it defaults to true. 
This boolean directly controls whether the no_new_privs flag gets set on the container process. 5 capabilities permit privileged actions without giving full root access. This policy ensures all capabilities are dropped from the pod. 6 runAsNonRoot: true requires that the container run as a user with a UID other than 0. 7 RuntimeDefault enables the default seccomp profile for a pod or container workload. Apply the yaml by running the following command: USD oc apply -f examplepod.yaml Verify that the pod is created by running the following command: USD oc get pod Example output NAME READY STATUS RESTARTS AGE tunepod 1/1 Running 0 47s Log in to the pod by running the following command: USD oc rsh tunepod Verify the values of the configured sysctl flags. For example, find the value of net.ipv4.conf.net1.accept_redirects by running the following command: sh-4.4# sysctl net.ipv4.conf.net1.accept_redirects Expected output net.ipv4.conf.net1.accept_redirects = 1 13.2. Additional resources Using sysctls in containers | [
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: <name> 1 namespace: default 2 spec: config: '{ \"cniVersion\": \"0.4.0\", 3 \"name\": \"<name>\", 4 \"plugins\": [{ \"type\": \"<main_CNI_plugin>\" 5 }, { \"type\": \"tuning\", 6 \"sysctl\": { \"net.ipv4.conf.IFNAME.accept_redirects\": \"1\" 7 } } ] }",
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: tuningnad namespace: default spec: config: '{ \"cniVersion\": \"0.4.0\", \"name\": \"tuningnad\", \"plugins\": [{ \"type\": \"bridge\" }, { \"type\": \"tuning\", \"sysctl\": { \"net.ipv4.conf.IFNAME.accept_redirects\": \"1\" } } ] }'",
"oc apply -f tuning-example.yaml",
"networkattachmentdefinition.k8.cni.cncf.io/tuningnad created",
"apiVersion: v1 kind: Pod metadata: name: tunepod namespace: default annotations: k8s.v1.cni.cncf.io/networks: tuningnad 1 spec: containers: - name: podexample image: centos command: [\"/bin/bash\", \"-c\", \"sleep INF\"] securityContext: runAsUser: 2000 2 runAsGroup: 3000 3 allowPrivilegeEscalation: false 4 capabilities: 5 drop: [\"ALL\"] securityContext: runAsNonRoot: true 6 seccompProfile: 7 type: RuntimeDefault",
"oc apply -f examplepod.yaml",
"oc get pod",
"NAME READY STATUS RESTARTS AGE tunepod 1/1 Running 0 47s",
"oc rsh tunepod",
"sh-4.4# sysctl net.ipv4.conf.net1.accept_redirects",
"net.ipv4.conf.net1.accept_redirects = 1"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/networking/nodes-setting-interface-level-network-sysctls |
Chapter 2. Multiple Hosts | Chapter 2. Multiple Hosts 2.1. Using Multiple Hosts JBoss Data Virtualization may be clustered over several servers, utilizing failover and load balancing. The easiest way to enable these features is for the client to specify multiple hostname and port number combinations in the URL connection string as a comma-separated list of host:port combinations: If you are connecting with the data source class, the setAlternateServers method can be used to specify the failover servers (see the sketch after this entry). The format is also a comma-separated list of host:port combinations. The client randomly selects one of the JBoss Data Virtualization servers from the list and establishes a session with that server. If a connection cannot be established, each of the remaining servers is tried in random order. This allows for both connection-time failover and random server selection load balancing. | [
"jdbc:teiid:<vdb-name>@mm://host1:31000,host1:31001,host2:31000;version=2"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_1_client_development/chap-multiple_hosts |
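The setAlternateServers approach mentioned above can be sketched in client code. The following is a minimal, hedged example rather than a definitive implementation: it assumes the Teiid JDBC client JAR ( org.teiid.jdbc ) is on the classpath, and the VDB name, host names, ports, and credentials are placeholders for your own deployment.

```java
import java.sql.Connection;
import java.sql.SQLException;

import org.teiid.jdbc.TeiidDataSource;

public class FailoverClient {
    public static void main(String[] args) throws SQLException {
        TeiidDataSource ds = new TeiidDataSource();
        ds.setDatabaseName("myVdb");      // VDB name (placeholder)
        ds.setServerName("host1");        // primary server
        ds.setPortNumber(31000);
        // Comma-separated host:port list used for failover and load balancing
        ds.setAlternateServers("host1:31001,host2:31000");
        ds.setUser("user");
        ds.setPassword("password");

        try (Connection conn = ds.getConnection()) {
            // The session is established with one randomly selected server
            System.out.println("Connected: " + !conn.isClosed());
        }
    }
}
```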
Chapter 5. Installing Red Hat build of OpenJDK with the MSI installer | Chapter 5. Installing Red Hat build of OpenJDK with the MSI installer This procedure describes how to install Red Hat build of OpenJDK 8 for Microsoft Windows using the MSI-based installer. Procedure Download the MSI-based installer of Red Hat build of OpenJDK 8 for Microsoft Windows. Run the installer for Red Hat build of OpenJDK 8 for Microsoft Windows. Click Next on the welcome screen. Check I accept the terms in license agreement , then click Next . Click Next . Accept the defaults or review the optional properties . Click Install . Click Yes on the Do you want to allow this app to make changes on your device? prompt. To verify that Red Hat build of OpenJDK 8 for Microsoft Windows is successfully installed, run the java -version command in the command prompt. You must get the following output (a compile-and-run smoke test is also sketched after this entry): | [
"java version \"1.8.0_181\" Java(TM) SE Runtime Environment (build 1.8.0_181-b13) Java HotSpot(TM) 64-Bit Server VM (build 25.181-b13, mixed mode)"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/installing_and_using_red_hat_build_of_openjdk_8_for_windows/openjdk8-windows-installing-msiinstaller |
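Beyond checking java -version , a quick way to confirm that the installed JDK can also compile and run code is a trivial smoke test. This is only an illustrative check, not part of the official procedure; the file and class names are arbitrary, and it assumes javac and java from the new installation are on your PATH.

```java
// Save as HelloOpenJDK.java, then run:
//   javac HelloOpenJDK.java
//   java HelloOpenJDK
public class HelloOpenJDK {
    public static void main(String[] args) {
        // Print the vendor and version reported by the running JVM
        System.out.println("Java vendor:  " + System.getProperty("java.vendor"));
        System.out.println("Java version: " + System.getProperty("java.version"));
    }
}
```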
Chapter 3. Developing and running Camel K integrations | Chapter 3. Developing and running Camel K integrations This chapter explains how to set up your development environment and how to develop and deploy simple Camel K integrations written in Java and YAML. It also shows how to use the kamel command line to manage Camel K integrations at runtime. For example, this includes running, describing, logging, and deleting integrations. Section 3.1, "Setting up your Camel K development environment" Section 3.2, "Developing Camel K integrations in Java" Section 3.3, "Developing Camel K integrations in YAML" Section 3.4, "Running Camel K integrations" Section 3.5, "Running Camel K integrations in development mode" Section 3.6, "Running Camel K integrations using modeline" Section 3.7, "Camel Runtimes (aka "sourceless" Integrations)" Section 3.8, "Importing existing Camel applications" Section 3.9, "Build" Section 3.10, "Promoting across environments" 3.1. Setting up your Camel K development environment You must set up your environment with the recommended development tooling before you can automatically deploy the Camel K quick start tutorials. This section explains how to install the recommended Visual Studio (VS) Code IDE and the extensions that it provides for Camel K. Note The Camel K VS Code extensions are community features. VS Code is recommended for ease of use and the best developer experience of Camel K. This includes automatic completion of Camel DSL code and Camel K traits. However, you can manually enter your code and tutorial commands using your chosen IDE instead of VS Code. Prerequisites You must have access to an OpenShift cluster on which the Camel K Operator and OpenShift Serverless Operator are installed: Installing Camel K Installing OpenShift Serverless from the OperatorHub Procedure Install VS Code on your development platform. For example, on Red Hat Enterprise Linux: Install the required key and repository: USD sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc USD sudo sh -c 'echo -e "[code]\nname=Visual Studio Code\nbaseurl=https://packages.microsoft.com/yumrepos/vscode\nenabled=1\ngpgcheck=1\ngpgkey=https://packages.microsoft.com/keys/microsoft.asc" > /etc/yum.repos.d/vscode.repo' Update the cache and install the VS Code package: USD yum check-update USD sudo yum install code For details on installing on other platforms, see the VS Code installation documentation . Enter the code command to launch the VS Code editor. For more details, see the VS Code command line documentation . Install the VS Code Camel Extension Pack, which includes the extensions required for Camel K. For example, in VS Code: In the left navigation bar, click Extensions . In the search box, enter Apache Camel . Select the Extension Pack for Apache Camel by Red Hat , and click Install . For more details, see the instructions for the Extension Pack for Apache Camel by Red Hat . Additional resources VS Code Getting Started documentation VS Code Tooling for Apache Camel K by Red Hat extension VS Code Language Support for Apache Camel by Red Hat extension Apache Camel K and VS Code tooling example To upgrade your Camel application from Camel 3.x to 3.y see, Camel 3.x Upgrade Guide . 3.2. Developing Camel K integrations in Java This section shows how to develop a simple Camel K integration in Java DSL. Writing an integration in Java to be deployed using Camel K is the same as defining your routing rules in Camel. 
However, you do not need to build and package the integration as a JAR when using Camel K. You can use any Camel component directly in your integration routes. Camel K automatically handles the dependency management and imports all the required libraries from the Camel catalog using code inspection. Prerequisites Setting up your Camel K development environment Procedure Enter the camel init command to generate a simple Java integration file. For example: USD camel init HelloCamelK.java Open the generated integration file in your IDE and edit as appropriate. For example, the HelloCamelK.java integration automatically includes the Camel timer and log components to help you get started: // camel-k: language=java import org.apache.camel.builder.RouteBuilder; public class HelloCamelK extends RouteBuilder { @Override public void configure() throws Exception { // Write your routes here, for example: from("timer:java?period=1s") .routeId("java") .setBody() .simple("Hello Camel K from USD{routeId}") .to("log:info"); } } steps Running Camel K integrations 3.3. Developing Camel K integrations in YAML This section explains how to develop a simple Camel K integration in YAML DSL. Writing an integration in YAML to be deployed using Camel K is the same as defining your routing rules in Camel. You can use any Camel component directly in your integration routes. Camel K automatically handles the dependency management and imports all the required libraries from the Camel catalog using code inspection. Prerequisites Setting up your Camel K development environment Procedure Enter the camel init command to generate a simple YAML integration file. For example: USD camel init hello.camelk.yaml Open the generated integration file in your IDE and edit as appropriate. For example, the hello.camelk.yaml integration automatically includes the Camel timer and log components to help you get started: # Write your routes here, for example: - from: uri: "timer:yaml" parameters: period: "1s" steps: - set-body: constant: "Hello Camel K from yaml" - to: "log:info" 3.4. Running Camel K integrations You can run Camel K integrations in the cloud on your OpenShift cluster from the command line using the kamel run command. Prerequisites Setting up your Camel K development environment . You must already have a Camel integration written in Java or YAML DSL. Procedure Log into your OpenShift cluster using the oc client tool, for example: USD oc login --token=my-token --server=https://my-cluster.example.com:6443 Ensure that the Camel K Operator is running, for example: USD oc get pod NAME READY STATUS RESTARTS AGE camel-k-operator-86b8d94b4-pk7d6 1/1 Running 0 6m28s Enter the kamel run command to run your integration in the cloud on OpenShift. For example: Java example USD kamel run HelloCamelK.java integration "hello-camel-k" created YAML example USD kamel run hello.camelk.yaml integration "hello" created Enter the kamel get command to check the status of the integration: USD kamel get NAME PHASE KIT hello Building Kit myproject/kit-bq666mjej725sk8sn12g When the integration runs for the first time, Camel K builds the integration kit for the container image, which downloads all the required Camel modules and adds them to the image classpath. 
Enter kamel get again to verify that the integration is running: USD kamel get NAME PHASE KIT hello Running myproject/kit-bq666mjej725sk8sn12g Enter the kamel log command to print the log to stdout : USD kamel log hello [1] 2021-08-11 17:58:40,573 INFO [org.apa.cam.k.Runtime] (main) Apache Camel K Runtime 1.7.1.fuse-800025-redhat-00001 [1] 2021-08-11 17:58:40,653 INFO [org.apa.cam.qua.cor.CamelBootstrapRecorder] (main) bootstrap runtime: org.apache.camel.quarkus.main.CamelMainRuntime [1] 2021-08-11 17:58:40,844 INFO [org.apa.cam.k.lis.SourcesConfigurer] (main) Loading routes from: SourceDefinition{name='camel-k-embedded-flow', language='yaml', location='file:/etc/camel/sources/camel-k-embedded-flow.yaml', } [1] 2021-08-11 17:58:41,216 INFO [org.apa.cam.imp.eng.AbstractCamelContext] (main) Routes startup summary (total:1 started:1) [1] 2021-08-11 17:58:41,217 INFO [org.apa.cam.imp.eng.AbstractCamelContext] (main) Started route1 (timer://yaml) [1] 2021-08-11 17:58:41,217 INFO [org.apa.cam.imp.eng.AbstractCamelContext] (main) Apache Camel 3.10.0.fuse-800010-redhat-00001 (camel-1) started in 136ms (build:0ms init:100ms start:36ms) [1] 2021-08-11 17:58:41,268 INFO [io.quarkus] (main) camel-k-integration 1.6.6 on JVM (powered by Quarkus 1.11.7.Final-redhat-00009) started in 2.064s. [1] 2021-08-11 17:58:41,269 INFO [io.quarkus] (main) Profile prod activated. [1] 2021-08-11 17:58:41,269 INFO [io.quarkus] (main) Installed features: [camel-bean, camel-core, camel-k-core, camel-k-runtime, camel-log, camel-support-common, camel-timer, camel-yaml-dsl, cdi] [1] 2021-08-11 17:58:42,423 INFO [info] (Camel (camel-1) thread #0 - timer://yaml) Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello Camel K from yaml] ... Press Ctrl-C to terminate logging in the terminal. Additional resources For more details on the kamel run command, enter kamel run --help For faster deployment turnaround times, see Running Camel K integrations in development mode For details of development tools to run integrations, see VS Code Tooling for Apache Camel K by Red Hat See also Managing Camel K integrations Running An Integration Without CLI You can run an integration without a CLI (Command Line Interface) and create an Integration Custom Resource with the configuration to run your application. For example, execute the following sample route. It returns the expected Integration Custom Resource. Save this custom resource in a yaml file, my-integration.yaml . Now, run the integration that contains the Integration Custom Resource using the oc command line, the UI, or the API to call the OpenShift cluster. In the following example, oc CLI is used from the command line. The operator runs the Integration. Note Kubernetes supports Structural Schemas for CustomResourceDefinitions. For more details about Camel K traits see, Camel K trait configuration reference . Schema changes on Custom Resources The strongly-typed Trait API imposes changes on the following CustomResourceDefinitions: integrations , integrationkits', and `integrationplatforms. Trait properties under spec.traits.<trait-id>.configuration are now defined directly under spec.traits.<trait-id>. vvv Backward compatibility is possible in this implementation. To achieve backward compatibility, the Configuration field with RawMessage type is provided for each trait type, so that the existing integrations and resources are read from the new Red Hat build of Apache Camel K version. 
When the old integrations and resources are read, the legacy configuration in each trait (if any) is migrated to the new Trait API fields. If the values are predefined on the new API fields, they precede the legacy ones. 3.5. Running Camel K integrations in development mode You can run Camel K integrations in development mode on your OpenShift cluster from the command line. Using development mode, you can iterate quickly on integrations in development and get fast feedback on your code. When you specify the kamel run command with the --dev option, this deploys the integration in the cloud immediately and shows the integration logs in the terminal. You can then change the code and see the changes automatically applied instantly to the remote integration Pod on OpenShift. The terminal automatically displays all redeployments of the remote integration in the cloud. Note The artifacts generated by Camel K in development mode are identical to those that you run in production. The purpose of development mode is faster development. Prerequisites Setting up your Camel K development environment . You must already have a Camel integration written in Java or YAML DSL. Procedure Log into your OpenShift cluster using the oc client tool, for example: USD oc login --token=my-token --server=https://my-cluster.example.com:6443 Ensure that the Camel K Operator is running, for example: USD oc get pod NAME READY STATUS RESTARTS AGE camel-k-operator-86b8d94b4-pk7d6 1/1 Running 0 6m28s Enter the kamel run command with --dev to run your integration in development mode on OpenShift in the cloud. The following shows a simple Java example: USD kamel run HelloCamelK.java --dev Condition "IntegrationPlatformAvailable" is "True" for Integration hello-camel-k: test/camel-k Integration hello-camel-k in phase "Initialization" Integration hello-camel-k in phase "Building Kit" Condition "IntegrationKitAvailable" is "True" for Integration hello-camel-k: kit-c49sqn4apkb4qgn55ak0 Integration hello-camel-k in phase "Deploying" Progress: integration "hello-camel-k" in phase Initialization Progress: integration "hello-camel-k" in phase Building Kit Progress: integration "hello-camel-k" in phase Deploying Integration hello-camel-k in phase "Running" Condition "DeploymentAvailable" is "True" for Integration hello-camel-k: deployment name is hello-camel-k Progress: integration "hello-camel-k" in phase Running Condition "CronJobAvailable" is "False" for Integration hello-camel-k: different controller strategy used (deployment) Condition "KnativeServiceAvailable" is "False" for Integration hello-camel-k: different controller strategy used (deployment) Condition "Ready" is "False" for Integration hello-camel-k Condition "Ready" is "True" for Integration hello-camel-k [1] Monitoring pod hello-camel-k-7f85df47b8-js7cb ... ... 
[1] 2021-08-11 18:34:44,069 INFO [org.apa.cam.k.Runtime] (main) Apache Camel K Runtime 1.7.1.fuse-800025-redhat-00001 [1] 2021-08-11 18:34:44,167 INFO [org.apa.cam.qua.cor.CamelBootstrapRecorder] (main) bootstrap runtime: org.apache.camel.quarkus.main.CamelMainRuntime [1] 2021-08-11 18:34:44,362 INFO [org.apa.cam.k.lis.SourcesConfigurer] (main) Loading routes from: SourceDefinition{name='HelloCamelK', language='java', location='file:/etc/camel/sources/HelloCamelK.java', } [1] 2021-08-11 18:34:46,180 INFO [org.apa.cam.imp.eng.AbstractCamelContext] (main) Routes startup summary (total:1 started:1) [1] 2021-08-11 18:34:46,180 INFO [org.apa.cam.imp.eng.AbstractCamelContext] (main) Started java (timer://java) [1] 2021-08-11 18:34:46,180 INFO [org.apa.cam.imp.eng.AbstractCamelContext] (main) Apache Camel 3.10.0.fuse-800010-redhat-00001 (camel-1) started in 243ms (build:0ms init:213ms start:30ms) [1] 2021-08-11 18:34:46,190 INFO [io.quarkus] (main) camel-k-integration 1.6.6 on JVM (powered by Quarkus 1.11.7.Final-redhat-00009) started in 3.457s. [1] 2021-08-11 18:34:46,190 INFO [io.quarkus] (main) Profile prod activated. [1] 2021-08-11 18:34:46,191 INFO [io.quarkus] (main) Installed features: [camel-bean, camel-core, camel-java-joor-dsl, camel-k-core, camel-k-runtime, camel-log, camel-support-common, camel-timer, cdi] [1] 2021-08-11 18:34:47,200 INFO [info] (Camel (camel-1) thread #0 - timer://java) Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello Camel K from java] [1] 2021-08-11 18:34:48,180 INFO [info] (Camel (camel-1) thread #0 - timer://java) Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello Camel K from java] [1] 2021-08-11 18:34:49,180 INFO [info] (Camel (camel-1) thread #0 - timer://java) Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello Camel K from java] ... Edit the content of your integration DSL file, save your changes, and see the changes displayed instantly in the terminal. For example: ... integration "hello-camel-k" updated ... [2] 2021-08-11 18:40:54,173 INFO [org.apa.cam.k.Runtime] (main) Apache Camel K Runtime 1.7.1.fuse-800025-redhat-00001 [2] 2021-08-11 18:40:54,209 INFO [org.apa.cam.qua.cor.CamelBootstrapRecorder] (main) bootstrap runtime: org.apache.camel.quarkus.main.CamelMainRuntime [2] 2021-08-11 18:40:54,301 INFO [org.apa.cam.k.lis.SourcesConfigurer] (main) Loading routes from: SourceDefinition{name='HelloCamelK', language='java', location='file:/etc/camel/sources/HelloCamelK.java', } [2] 2021-08-11 18:40:55,796 INFO [org.apa.cam.imp.eng.AbstractCamelContext] (main) Routes startup summary (total:1 started:1) [2] 2021-08-11 18:40:55,796 INFO [org.apa.cam.imp.eng.AbstractCamelContext] (main) Started java (timer://java) [2] 2021-08-11 18:40:55,797 INFO [org.apa.cam.imp.eng.AbstractCamelContext] (main) Apache Camel 3.10.0.fuse-800010-redhat-00001 (camel-1) started in 174ms (build:0ms init:147ms start:27ms) [2] 2021-08-11 18:40:55,803 INFO [io.quarkus] (main) camel-k-integration 1.6.6 on JVM (powered by Quarkus 1.11.7.Final-redhat-00009) started in 3.025s. [2] 2021-08-11 18:40:55,808 INFO [io.quarkus] (main) Profile prod activated. 
[2] 2021-08-11 18:40:55,809 INFO [io.quarkus] (main) Installed features: [camel-bean, camel-core, camel-java-joor-dsl, camel-k-core, camel-k-runtime, camel-log, camel-support-common, camel-timer, cdi] [2] 2021-08-11 18:40:56,810 INFO [info] (Camel (camel-1) thread #0 - timer://java) Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello Camel K from java] [2] 2021-08-11 18:40:57,793 INFO [info] (Camel (camel-1) thread #0 - timer://java) Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello Camel K from java] ... Press Ctrl-C to terminate logging in the terminal. Additional resources For more details on the kamel run command, enter kamel run --help For details of development tools to run integrations, see VS Code Tooling for Apache Camel K by Red Hat Managing Camel K integrations Configuring Camel K integration dependencies 3.6. Running Camel K integrations using modeline You can use the Camel K modeline to specify multiple configuration options in a Camel K integration source file, which are executed at runtime. This creates efficiencies by saving you the time of re-entering multiple command line options and helps to prevent input errors. The following example shows a modeline entry from a Java integration file that enables 3scale and limits the integration container memory. Prerequisites Setting up your Camel K development environment You must already have a Camel integration written in Java or YAML DSL. Procedure Add a Camel K modeline entry to your integration file. For example: ThreeScaleRest.java // camel-k: trait=3scale.enabled=true trait=container.limit-memory=256Mi 1 import org.apache.camel.builder.RouteBuilder; public class ThreeScaleRest extends RouteBuilder { @Override public void configure() throws Exception { rest().get("/") .to("direct:x"); from("direct:x") .setBody().constant("Hello"); } } Enables both the container and 3scale traits, to expose the route through 3scale and to limit the container memory. Run the integration, for example: The kamel run command outputs any modeline options specified in the integration, for example: Modeline options have been loaded from source files Full command: kamel run ThreeScaleRest.java --trait=3scale.enabled=true --trait=container.limit-memory=256Mi Additional resources Camel K modeline options For details of development tools to run modeline integrations, see Introducing IDE support for Apache Camel K Modeline . 3.7. Camel Runtimes (aka "sourceless" Integrations) Camel K can run any runtime available in Apache Camel. However, this is possible only when the Camel application was previously built and packaged into a container image. Also, if you run through this option, some of the features offered by the operator may not be available. For example, you cannot discover Camel capabilities because the source is not available to the operator but embedded in the container image. This option is good if you are building your applications externally, that is, via a CICD technology, and you want to delegate the operator only the "operational" part, taking care on your own of the building and publishing part. Note You may loose more features, such as incremental image and container kit reusability. 3.7.1. Build externally, run via Operator Let us see the following example. You can have your own Camel application or just create a basic one for the purpose via Camel JBang ( camel init test.yaml ). 
Once your development is over, you can test locally via camel run test.yaml and export in the runtime of your choice via camel export test.yaml --runtime ... . The above step is a quick way to create a basic Camel application in any of the available runtime. Let us imagine we have done this for Camel Main or we already have a Camel application as a Maven project. As we want to take care of the build part by ourselves, we create a pipeline to build, containerize and push the container to a registry (see as a reference Camel K Tekton example ). At this stage we do have a container image with our Camel application. We can use the kamel CLI to run our Camel application via kamel run --image docker.io/my-org/my-app:1.0.0 tuning, if it is the case, with any of the trait or configuration required. Remember that when you run an Integration with this option, the operator creates a synthetic IntegrationKit. Note Certain traits (that is, builder traits) are not available when running an application built externally. In a few seconds (there is no build involved) you must have your application up and running and you can monitor and operate with Camel K as usual. 3.7.2. Traits and dependencies Certain Camel K operational aspect may be driven by traits. When you are building the application outside the operator, some of those traits are not executed as they are executed during the building phase that we are skipping when running sourceless Integrations . 3.8. Importing existing Camel applications You already have a Camel application running on your cluster, and you have created it via a manual deployment, a CICD or any other deployment mechanism you have in place. Since the Camel K operator is meant to operate any Camel application out there, you are able to import it and monitor in a similar method of any other Camel K managed Integration . This feature is disabled by default. To enable it, you must run the operator deployment with an environment variable, CAMEL_K_SYNTHETIC_INTEGRATIONS , set to true . Note You are only able to monitor the synthetic Integrations. Camel K does not alter the lifecycle of non managed Integrations (that is, rebuild the original application). Important The operator does not alter any field of the original application to avoid breaking any deployment procedure which is already in place. As it cannot make any assumption on the way the application is built and deployed, it is only able to watch for any changes happening around it. 3.8.1. Deploy externally, monitor via Camel K Operator An imported Integration is known as synthetic Integration . You can import any Camel application deployed as a Deployment , CronJob or Knative Service . We control this behavior via a label ( camel.apache.org/integration ) that the user must apply on the Camel application (either manually or introducing in the deployment process, that is, via CICD). Note The example here works in a similar way using CronJob and Knative Service. As an example, we show how to import a Camel application which was deployed with the Deployment kind. Let us assume it is called my-deploy . USD oc label deploy my-camel-sb-svc camel.apache.org/integration=my-it The operator immediately creates a synthetic Integration. USD oc get it NAMESPACE NAME PHASE RUNTIME PROVIDER RUNTIME VERSION KIT REPLICAS test-79c385c3-d58e-4c28-826d-b14b6245f908 my-it Running You can see, it is in Running status phase. However, after checking the conditions, you now see that the Integration is not yet fully monitored. 
This is expected because of the way the Camel K operator monitors Pods. It requires that the same label applied to the Deployment is inherited by the generated Pods. For this reason, besides labelling the Deployment, we must add a label in the Deployment template. USD oc patch deployment my-camel-sb-svc --patch '{"spec": {"template": {"metadata": {"labels": {"camel.apache.org/integration": "my-it"}}}}}' This operation can also be performed manually or automated in the deployment procedure. We can now see that the operator is able to monitor the status of the Pods. USD oc get it NAMESPACE NAME PHASE RUNTIME PROVIDER RUNTIME VERSION KIT REPLICAS test-79c385c3-d58e-4c28-826d-b14b6245f908 my-it Running 1 From now on, you are able to monitor the status of the synthetic Integration in a similar way as you do with managed Integrations. If, for example, your Deployment scales up or down, you see this information reflected accordingly. USD oc scale deployment my-camel-sb-svc --replicas 2 USD oc get it NAMESPACE NAME PHASE RUNTIME PROVIDER RUNTIME VERSION KIT REPLICAS test-79c385c3-d58e-4c28-826d-b14b6245f908 my-it Running 2 3.9. Build A Build resource describes the process of assembling a container image that satisfies the requirements of an Integration or IntegrationKit . The result of a build is an IntegrationKit that can be reused for multiple Integrations . type Build struct { Spec BuildSpec 1 Status BuildStatus 2 } type BuildSpec struct { Tasks []Task 3 } 1 The desired state 2 The status of the object at current time 3 The build tasks Note The full go definition can be found here . 3.9.1. Build strategy You can choose from different build strategies. The build strategy defines how a build is executed. The following strategies are available: buildStrategy: pod (each build is run in a separate pod, and the operator monitors the pod state) buildStrategy: routine (each build is run as a go routine inside the operator pod) Note Routine is the default strategy. The following description allows you to decide when to use which strategy. Routine : provides slightly faster builds as no additional pod is started, and loaded build dependencies (e.g. Maven dependencies) are cached between builds. Good for a normal amount of builds being executed and only a few builds running in parallel. Pod : prevents memory pressure on the operator as the build does not consume CPU and memory from the operator go runtime. Good for many builds being executed and many parallel builds. 3.9.2. Build queues IntegrationKits and their base images must be reused for multiple Integrations to accomplish efficient resource management and to optimize build and startup times for Camel K Integrations. To reuse images, the operator queues builds in sequential order. This way the operator is able to use efficient image layering for Integrations. Note By default, builds are queued sequentially based on their layout (e.g. native, fast-jar) and the build namespace. However, builds may not run sequentially but in parallel to each other based on certain criteria. For instance, native builds will always run in parallel to other builds. Also, when the build is required to run with a custom IntegrationPlatform, it may run in parallel to other builds that run with the default operator IntegrationPlatform. In general, when there is no chance to reuse the build's image layers, the build is eager to run in parallel to other builds. 
Therefore, to avoid having many builds running in parallel, the operator uses a maximum number of running builds setting that limits the number of builds running. You can set this limit in the IntegrationPlatform settings. The default value for this limit is based on the build strategy. buildStrategy: pod (MaxRunningBuilds=10) buildStrategy: routine (MaxRunningBuilds=3) 3.10. Promoting across environments As soon as you have an Integration running in your cluster, you can move that integration to a higher environment. That is, you can test your integration in a development environment, and, after obtaining the result, you can move it into a production environment. Camel K achieves this goal by using the kamel promote command. With this command you can move an integration from one namespace to another. Prerequisites Setting up your Camel K development environment You must already have a Camel integration written in Java or YAML DSL. Ensure that both the source operator and the destination operator are using the same container registry; the default registry (if the Camel K operator is installed via OperatorHub) is registry.redhat.io Also ensure that the destination namespace provides the Configmaps, Secrets, or Kamelets required by the integration. Note To use the same container registry, you can use the --registry option during the installation phase or change the IntegrationPlatform to reflect that accordingly. Code example Following is a simple integration that uses a Configmap to expose some message on an HTTP endpoint. You can start by creating such an integration and testing it in a namespace called development . kubectl create configmap my-cm --from-literal=greeting="hello, I am development!" -n development PromoteServer.java import org.apache.camel.builder.RouteBuilder; public class PromoteServer extends RouteBuilder { @Override public void configure() throws Exception { from("platform-http:/hello?httpMethodRestrict=GET").setBody(simple("resource:classpath:greeting")); } } Now run it. kamel run --dev -n development PromoteServer.java --config configmap:my-cm [-t service.node-port=true] You must tweak the service trait, depending on the Kubernetes platform and the level of exposure you want to provide. After that you can test it. curl http://192.168.49.2:32116/hello hello, I am development! After testing your integration, you can move it to a production environment. You must have the destination environment (an OpenShift namespace) ready with an operator (sharing the same operator source container registry) and any configuration, such as the configmap you have used here. For that purpose, create one in the destination namespace. kubectl create configmap my-cm --from-literal=greeting="hello, I am production!" -n production Note For security reasons, there is a check to ensure that the expected resources such as Configmaps, Secrets and Kamelets are present on the destination. If any of these resources are missing, the integration does not move. You can now promote your integration. kamel promote promote-server -n development --to production kamel logs promote-server -n production Test the promoted integration. curl http://192.168.49.2:30764/hello hello, I am production! Since the Integration is reusing the same container image, the new application is executed immediately. Also, the immutability of the Integration is assured as the container used is exactly the same as the one tested in development (changes are just the configurations). 
Note The integration running in the test is not altered in any way and keeps running until you stop it. | [
"sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc sudo sh -c 'echo -e \"[code]\\nname=Visual Studio Code\\nbaseurl=https://packages.microsoft.com/yumrepos/vscode\\nenabled=1\\ngpgcheck=1\\ngpgkey=https://packages.microsoft.com/keys/microsoft.asc\" > /etc/yum.repos.d/vscode.repo'",
"yum check-update sudo yum install code",
"camel init HelloCamelK.java",
"// camel-k: language=java import org.apache.camel.builder.RouteBuilder; public class HelloCamelK extends RouteBuilder { @Override public void configure() throws Exception { // Write your routes here, for example: from(\"timer:java?period=1s\") .routeId(\"java\") .setBody() .simple(\"Hello Camel K from USD{routeId}\") .to(\"log:info\"); } }",
"camel init hello.camelk.yaml",
"Write your routes here, for example: - from: uri: \"timer:yaml\" parameters: period: \"1s\" steps: - set-body: constant: \"Hello Camel K from yaml\" - to: \"log:info\"",
"oc login --token=my-token --server=https://my-cluster.example.com:6443",
"oc get pod NAME READY STATUS RESTARTS AGE camel-k-operator-86b8d94b4-pk7d6 1/1 Running 0 6m28s",
"kamel run HelloCamelK.java integration \"hello-camel-k\" created",
"kamel run hello.camelk.yaml integration \"hello\" created",
"kamel get NAME PHASE KIT hello Building Kit myproject/kit-bq666mjej725sk8sn12g",
"kamel get NAME PHASE KIT hello Running myproject/kit-bq666mjej725sk8sn12g",
"kamel log hello [1] 2021-08-11 17:58:40,573 INFO [org.apa.cam.k.Runtime] (main) Apache Camel K Runtime 1.7.1.fuse-800025-redhat-00001 [1] 2021-08-11 17:58:40,653 INFO [org.apa.cam.qua.cor.CamelBootstrapRecorder] (main) bootstrap runtime: org.apache.camel.quarkus.main.CamelMainRuntime [1] 2021-08-11 17:58:40,844 INFO [org.apa.cam.k.lis.SourcesConfigurer] (main) Loading routes from: SourceDefinition{name='camel-k-embedded-flow', language='yaml', location='file:/etc/camel/sources/camel-k-embedded-flow.yaml', } [1] 2021-08-11 17:58:41,216 INFO [org.apa.cam.imp.eng.AbstractCamelContext] (main) Routes startup summary (total:1 started:1) [1] 2021-08-11 17:58:41,217 INFO [org.apa.cam.imp.eng.AbstractCamelContext] (main) Started route1 (timer://yaml) [1] 2021-08-11 17:58:41,217 INFO [org.apa.cam.imp.eng.AbstractCamelContext] (main) Apache Camel 3.10.0.fuse-800010-redhat-00001 (camel-1) started in 136ms (build:0ms init:100ms start:36ms) [1] 2021-08-11 17:58:41,268 INFO [io.quarkus] (main) camel-k-integration 1.6.6 on JVM (powered by Quarkus 1.11.7.Final-redhat-00009) started in 2.064s. [1] 2021-08-11 17:58:41,269 INFO [io.quarkus] (main) Profile prod activated. [1] 2021-08-11 17:58:41,269 INFO [io.quarkus] (main) Installed features: [camel-bean, camel-core, camel-k-core, camel-k-runtime, camel-log, camel-support-common, camel-timer, camel-yaml-dsl, cdi] [1] 2021-08-11 17:58:42,423 INFO [info] (Camel (camel-1) thread #0 - timer://yaml) Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello Camel K from yaml]",
"kamel run Sample.java -o yaml",
"apiVersion: camel.apache.org/v1 kind: Integration metadata: creationTimestamp: null name: my-integration namespace: default spec: sources: - content: \" import org.apache.camel.builder.RouteBuilder; public class Sample extends RouteBuilder { @Override public void configure() throws Exception { from(\\\"timer:tick\\\") .log(\\\"Hello Integration!\\\"); } }\" name: Sample.java status: {}",
"apply -f my-integration.yaml integration.camel.apache.org/my-integration created",
"traits: container: configuration: enabled: true name: my-integration",
"traits: container: enabled: true name: my-integration",
"type Trait struct { // Can be used to enable or disable a trait. All traits share this common property. Enabled *bool `property:\"enabled\" json:\"enabled,omitempty\"` // Legacy trait configuration parameters. // Deprecated: for backward compatibility. Configuration *Configuration `json:\"configuration,omitempty\"` } // Deprecated: for backward compatibility. type Configuration struct { RawMessage `json:\",inline\"` }",
"oc login --token=my-token --server=https://my-cluster.example.com:6443",
"oc get pod NAME READY STATUS RESTARTS AGE camel-k-operator-86b8d94b4-pk7d6 1/1 Running 0 6m28s",
"kamel run HelloCamelK.java --dev Condition \"IntegrationPlatformAvailable\" is \"True\" for Integration hello-camel-k: test/camel-k Integration hello-camel-k in phase \"Initialization\" Integration hello-camel-k in phase \"Building Kit\" Condition \"IntegrationKitAvailable\" is \"True\" for Integration hello-camel-k: kit-c49sqn4apkb4qgn55ak0 Integration hello-camel-k in phase \"Deploying\" Progress: integration \"hello-camel-k\" in phase Initialization Progress: integration \"hello-camel-k\" in phase Building Kit Progress: integration \"hello-camel-k\" in phase Deploying Integration hello-camel-k in phase \"Running\" Condition \"DeploymentAvailable\" is \"True\" for Integration hello-camel-k: deployment name is hello-camel-k Progress: integration \"hello-camel-k\" in phase Running Condition \"CronJobAvailable\" is \"False\" for Integration hello-camel-k: different controller strategy used (deployment) Condition \"KnativeServiceAvailable\" is \"False\" for Integration hello-camel-k: different controller strategy used (deployment) Condition \"Ready\" is \"False\" for Integration hello-camel-k Condition \"Ready\" is \"True\" for Integration hello-camel-k [1] Monitoring pod hello-camel-k-7f85df47b8-js7cb [1] 2021-08-11 18:34:44,069 INFO [org.apa.cam.k.Runtime] (main) Apache Camel K Runtime 1.7.1.fuse-800025-redhat-00001 [1] 2021-08-11 18:34:44,167 INFO [org.apa.cam.qua.cor.CamelBootstrapRecorder] (main) bootstrap runtime: org.apache.camel.quarkus.main.CamelMainRuntime [1] 2021-08-11 18:34:44,362 INFO [org.apa.cam.k.lis.SourcesConfigurer] (main) Loading routes from: SourceDefinition{name='HelloCamelK', language='java', location='file:/etc/camel/sources/HelloCamelK.java', } [1] 2021-08-11 18:34:46,180 INFO [org.apa.cam.imp.eng.AbstractCamelContext] (main) Routes startup summary (total:1 started:1) [1] 2021-08-11 18:34:46,180 INFO [org.apa.cam.imp.eng.AbstractCamelContext] (main) Started java (timer://java) [1] 2021-08-11 18:34:46,180 INFO [org.apa.cam.imp.eng.AbstractCamelContext] (main) Apache Camel 3.10.0.fuse-800010-redhat-00001 (camel-1) started in 243ms (build:0ms init:213ms start:30ms) [1] 2021-08-11 18:34:46,190 INFO [io.quarkus] (main) camel-k-integration 1.6.6 on JVM (powered by Quarkus 1.11.7.Final-redhat-00009) started in 3.457s. [1] 2021-08-11 18:34:46,190 INFO [io.quarkus] (main) Profile prod activated. [1] 2021-08-11 18:34:46,191 INFO [io.quarkus] (main) Installed features: [camel-bean, camel-core, camel-java-joor-dsl, camel-k-core, camel-k-runtime, camel-log, camel-support-common, camel-timer, cdi] [1] 2021-08-11 18:34:47,200 INFO [info] (Camel (camel-1) thread #0 - timer://java) Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello Camel K from java] [1] 2021-08-11 18:34:48,180 INFO [info] (Camel (camel-1) thread #0 - timer://java) Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello Camel K from java] [1] 2021-08-11 18:34:49,180 INFO [info] (Camel (camel-1) thread #0 - timer://java) Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello Camel K from java]",
"integration \"hello-camel-k\" updated [2] 2021-08-11 18:40:54,173 INFO [org.apa.cam.k.Runtime] (main) Apache Camel K Runtime 1.7.1.fuse-800025-redhat-00001 [2] 2021-08-11 18:40:54,209 INFO [org.apa.cam.qua.cor.CamelBootstrapRecorder] (main) bootstrap runtime: org.apache.camel.quarkus.main.CamelMainRuntime [2] 2021-08-11 18:40:54,301 INFO [org.apa.cam.k.lis.SourcesConfigurer] (main) Loading routes from: SourceDefinition{name='HelloCamelK', language='java', location='file:/etc/camel/sources/HelloCamelK.java', } [2] 2021-08-11 18:40:55,796 INFO [org.apa.cam.imp.eng.AbstractCamelContext] (main) Routes startup summary (total:1 started:1) [2] 2021-08-11 18:40:55,796 INFO [org.apa.cam.imp.eng.AbstractCamelContext] (main) Started java (timer://java) [2] 2021-08-11 18:40:55,797 INFO [org.apa.cam.imp.eng.AbstractCamelContext] (main) Apache Camel 3.10.0.fuse-800010-redhat-00001 (camel-1) started in 174ms (build:0ms init:147ms start:27ms) [2] 2021-08-11 18:40:55,803 INFO [io.quarkus] (main) camel-k-integration 1.6.6 on JVM (powered by Quarkus 1.11.7.Final-redhat-00009) started in 3.025s. [2] 2021-08-11 18:40:55,808 INFO [io.quarkus] (main) Profile prod activated. [2] 2021-08-11 18:40:55,809 INFO [io.quarkus] (main) Installed features: [camel-bean, camel-core, camel-java-joor-dsl, camel-k-core, camel-k-runtime, camel-log, camel-support-common, camel-timer, cdi] [2] 2021-08-11 18:40:56,810 INFO [info] (Camel (camel-1) thread #0 - timer://java) Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello Camel K from java] [2] 2021-08-11 18:40:57,793 INFO [info] (Camel (camel-1) thread #0 - timer://java) Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello Camel K from java]",
"// camel-k: trait=3scale.enabled=true trait=container.limit-memory=256Mi 1 import org.apache.camel.builder.RouteBuilder; public class ThreeScaleRest extends RouteBuilder { @Override public void configure() throws Exception { rest().get(\"/\") .to(\"direct:x\"); from(\"direct:x\") .setBody().constant(\"Hello\"); } }",
"kamel run ThreeScaleRest.java",
"Modeline options have been loaded from source files Full command: kamel run ThreeScaleRest.java --trait=3scale.enabled=true --trait=container.limit-memory=256Mi",
"oc label deploy my-camel-sb-svc camel.apache.org/integration=my-it",
"oc get it NAMESPACE NAME PHASE RUNTIME PROVIDER RUNTIME VERSION KIT REPLICAS test-79c385c3-d58e-4c28-826d-b14b6245f908 my-it Running",
"oc patch deployment my-camel-sb-svc --patch '{\"spec\": {\"template\": {\"metadata\": {\"labels\": {\"camel.apache.org/integration\": \"my-it\"}}}}}'",
"oc get it NAMESPACE NAME PHASE RUNTIME PROVIDER RUNTIME VERSION KIT REPLICAS test-79c385c3-d58e-4c28-826d-b14b6245f908 my-it Running 1",
"oc scale deployment my-camel-sb-svc --replicas 2 oc get it NAMESPACE NAME PHASE RUNTIME PROVIDER RUNTIME VERSION KIT REPLICAS test-79c385c3-d58e-4c28-826d-b14b6245f908 my-it Running 2",
"type Build struct { Spec BuildSpec 1 Status BuildStatus 2 } type BuildSpec struct { Tasks []Task 3 }",
"create configmap my-cm --from-literal=greeting=\"hello, I am development!\" -n development",
"import org.apache.camel.builder.RouteBuilder; public class PromoteServer extends RouteBuilder { @Override public void configure() throws Exception { from(\"platform-http:/hello?httpMethodRestrict=GET\").setBody(simple(\"resource:classpath:greeting\")); } }",
"kamel run --dev -n development PromoteServer.java --config configmap:my-cm [-t service.node-port=true]",
"curl http://192.168.49.2:32116/hello hello, I am development!",
"create configmap my-cm --from-literal=greeting=\"hello, I am production!\" -n production",
"kamel promote promote-server -n development --to production kamel logs promote-server -n production",
"curl http://192.168.49.2:30764/hello hello, I am production!"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.7/html/getting_started_with_camel_k/developing-and-running-camel-k-integrations |
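Section 3.7 above describes building a Camel application outside the operator and running it from a prebuilt container image. As a rough sketch of what such an externally built application can look like before it is containerized, the following standalone Camel Main class defines a route equivalent to the generated examples in this chapter. This is an illustrative assumption rather than the documented procedure: the class name is invented, and it presumes a Maven project with the camel-main dependency on the classpath.

```java
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.main.Main;

public class ExternallyBuiltApp {
    public static void main(String[] args) throws Exception {
        Main main = new Main();
        // Register the route programmatically with the Camel Main runtime
        main.configure().addRoutesBuilder(new RouteBuilder() {
            @Override
            public void configure() {
                from("timer:tick?period=1000")
                    .setBody().constant("Hello from an externally built Camel application")
                    .to("log:info");
            }
        });
        // Runs the Camel context until the JVM is stopped
        main.run(args);
    }
}
```

After a CI/CD pipeline builds and pushes an image from such a project, you would run it as described in section 3.7 with kamel run --image and the image reference from your registry.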
Chapter 11. Displaying system security classification | Chapter 11. Displaying system security classification As an administrator of deployments where the user must be aware of the security classification of the system, you can set up a notification of the security classification. This can be either a permanent banner or a temporary notification, and it can appear on the login screen, in the GNOME session, and on the lock screen. 11.1. Enabling system security classification banners You can create a permanent classification banner to state the overall security classification level of the system. This is useful for deployments where the user must always be aware of the security classification level of the system that they are logged into. The permanent classification banner can appear within the running session, on the lock screen, and on the login screen, and you can customize its background color, its font, and its position within the screen. This procedure creates a red banner with white text placed on both the top and bottom of the login screen. Procedure Install the gnome-shell-extension-classification-banner package: Note The package is only available in RHEL 8.6 and later. Create the 99-class-banner file at either of the following locations: To configure a notification at the login screen, create /etc/dconf/db/gdm.d/99-class-banner . To configure a notification in the user session, create /etc/dconf/db/local.d/99-class-banner . Enter the following configuration in the created file: Warning This configuration overrides similar configuration files that also enable an extension, such as Notifying of the system security classification . To enable multiple extensions, specify all of them in the enabled-extensions list. For example: Update the dconf database: Reboot the system. Troubleshooting If the classification banners are not displayed for an existing user, log in as the user and enable the Classification banner extension using the Tweaks application. 11.2. Notifying of the system security classification You can set up a notification that contains a predefined message in an overlay banner. This is useful for deployments where the user is required to read the security classification of the system before logging in. Depending on your configuration, the notification can appear at the login screen, after logging in, on the lock screen, or after a longer time with no user activity. You can always dismiss the notification when it appears. Procedure Install the gnome-shell-extension-heads-up-display package: Create the 99-hud-message file at either of the following locations: To configure a notification at the login screen, create /etc/dconf/db/gdm.d/99-hud-message . To configure a notification in the user session, create /etc/dconf/db/local.d/99-hud-message . Enter the following configuration in the created file: Replace the following values with text that describes the security classification of your system: Security classification title A short heading that identifies the security classification. Security classification description A longer message that provides additional details, such as references to various guidelines. Warning This configuration overrides similar configuration files that also enable an extension, such as Enabling system security classification banners . To enable multiple extensions, specify all of them in the enabled-extensions list. For example: Update the dconf database: Reboot the system. 
Troubleshooting If the notifications are not displayed for an existing user, log in as the user and enable the Heads-up display message extension using the Tweaks application. | [
"yum install gnome-shell-extension-classification-banner",
"[org/gnome/shell] enabled-extensions=['[email protected]'] [org/gnome/shell/extensions/classification-banner] background-color=' rgba(200,16,46,0.75) ' message=' TOP SECRET ' top-banner= true bottom-banner= true system-info= true color=' rgb(255,255,255) '",
"enabled-extensions=['[email protected]', '[email protected]']",
"dconf update",
"yum install gnome-shell-extension-heads-up-display",
"[org/gnome/shell] enabled-extensions=['[email protected]'] [org/gnome/shell/extensions/heads-up-display] message-heading=\" Security classification title \" message-body=\" Security classification description \" The following options control when the notification appears: show-when-locked= true show-when-unlocking= true show-when-unlocked= true",
"enabled-extensions=['[email protected]', '[email protected]']",
"dconf update"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/using_the_desktop_environment_in_rhel_8/assembly_displaying-the-system-security-classification_using-the-desktop-environment-in-rhel-8 |
Chapter 45. RemoteStorageManager schema reference | Chapter 45. RemoteStorageManager schema reference Used in: TieredStorageCustom Property Property type Description className string The class name for the RemoteStorageManager implementation. classPath string The class path for the RemoteStorageManager implementation. config map The additional configuration map for the RemoteStorageManager implementation. Keys will be automatically prefixed with rsm.config. and added to the Kafka broker configuration. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-RemoteStorageManager-reference |
Managing model registries | Managing model registries Red Hat OpenShift AI Cloud Service 1 Managing model registries in Red Hat OpenShift AI Cloud Service | null | https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/managing_model_registries/index |
Chapter 3. MachineAutoscaler [autoscaling.openshift.io/v1beta1] | Chapter 3. MachineAutoscaler [autoscaling.openshift.io/v1beta1] Description MachineAutoscaler is the Schema for the machineautoscalers API Type object 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Specification of constraints of a scalable resource status object Most recently observed status of a scalable resource 3.1.1. .spec Description Specification of constraints of a scalable resource Type object Required maxReplicas minReplicas scaleTargetRef Property Type Description maxReplicas integer MaxReplicas constrains the maximal number of replicas of a scalable resource minReplicas integer MinReplicas constrains the minimal number of replicas of a scalable resource scaleTargetRef object ScaleTargetRef holds reference to a scalable resource 3.1.2. .spec.scaleTargetRef Description ScaleTargetRef holds reference to a scalable resource Type object Required kind name Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name specifies a name of an object, e.g. worker-us-east-1a. Scalable resources are expected to exist under a single namespace. 3.1.3. .status Description Most recently observed status of a scalable resource Type object Property Type Description lastTargetRef object LastTargetRef holds reference to the recently observed scalable resource 3.1.4. .status.lastTargetRef Description LastTargetRef holds reference to the recently observed scalable resource Type object Required kind name Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name specifies a name of an object, e.g. 
worker-us-east-1a. Scalable resources are expected to exist under a single namespace. 3.2. API endpoints The following API endpoints are available: /apis/autoscaling.openshift.io/v1beta1/machineautoscalers GET : list objects of kind MachineAutoscaler /apis/autoscaling.openshift.io/v1beta1/namespaces/{namespace}/machineautoscalers DELETE : delete collection of MachineAutoscaler GET : list objects of kind MachineAutoscaler POST : create a MachineAutoscaler /apis/autoscaling.openshift.io/v1beta1/namespaces/{namespace}/machineautoscalers/{name} DELETE : delete a MachineAutoscaler GET : read the specified MachineAutoscaler PATCH : partially update the specified MachineAutoscaler PUT : replace the specified MachineAutoscaler /apis/autoscaling.openshift.io/v1beta1/namespaces/{namespace}/machineautoscalers/{name}/status GET : read status of the specified MachineAutoscaler PATCH : partially update status of the specified MachineAutoscaler PUT : replace status of the specified MachineAutoscaler 3.2.1. /apis/autoscaling.openshift.io/v1beta1/machineautoscalers HTTP method GET Description list objects of kind MachineAutoscaler Table 3.1. HTTP responses HTTP code Reponse body 200 - OK MachineAutoscalerList schema 401 - Unauthorized Empty 3.2.2. /apis/autoscaling.openshift.io/v1beta1/namespaces/{namespace}/machineautoscalers HTTP method DELETE Description delete collection of MachineAutoscaler Table 3.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind MachineAutoscaler Table 3.3. HTTP responses HTTP code Reponse body 200 - OK MachineAutoscalerList schema 401 - Unauthorized Empty HTTP method POST Description create a MachineAutoscaler Table 3.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.5. Body parameters Parameter Type Description body MachineAutoscaler schema Table 3.6. HTTP responses HTTP code Reponse body 200 - OK MachineAutoscaler schema 201 - Created MachineAutoscaler schema 202 - Accepted MachineAutoscaler schema 401 - Unauthorized Empty 3.2.3. /apis/autoscaling.openshift.io/v1beta1/namespaces/{namespace}/machineautoscalers/{name} Table 3.7. Global path parameters Parameter Type Description name string name of the MachineAutoscaler HTTP method DELETE Description delete a MachineAutoscaler Table 3.8. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 3.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified MachineAutoscaler Table 3.10. HTTP responses HTTP code Reponse body 200 - OK MachineAutoscaler schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified MachineAutoscaler Table 3.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.12. HTTP responses HTTP code Reponse body 200 - OK MachineAutoscaler schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified MachineAutoscaler Table 3.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.14. Body parameters Parameter Type Description body MachineAutoscaler schema Table 3.15. 
HTTP responses HTTP code Reponse body 200 - OK MachineAutoscaler schema 201 - Created MachineAutoscaler schema 401 - Unauthorized Empty 3.2.4. /apis/autoscaling.openshift.io/v1beta1/namespaces/{namespace}/machineautoscalers/{name}/status Table 3.16. Global path parameters Parameter Type Description name string name of the MachineAutoscaler HTTP method GET Description read status of the specified MachineAutoscaler Table 3.17. HTTP responses HTTP code Reponse body 200 - OK MachineAutoscaler schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified MachineAutoscaler Table 3.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.19. HTTP responses HTTP code Reponse body 200 - OK MachineAutoscaler schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified MachineAutoscaler Table 3.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.21. Body parameters Parameter Type Description body MachineAutoscaler schema Table 3.22. 
HTTP responses HTTP code Response body 200 - OK MachineAutoscaler schema 201 - Created MachineAutoscaler schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/autoscale_apis/machineautoscaler-autoscaling-openshift-io-v1beta1
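The spec fields above map to a short manifest. The following is an illustrative sketch only; the worker-us-east-1a MachineSet name and the openshift-machine-api namespace are assumptions, and the scaleTargetRef must point at a MachineSet that actually exists in your cluster:

apiVersion: autoscaling.openshift.io/v1beta1
kind: MachineAutoscaler
metadata:
  name: worker-us-east-1a
  namespace: openshift-machine-api
spec:
  minReplicas: 1   # lower bound enforced by the cluster autoscaler
  maxReplicas: 12  # upper bound enforced by the cluster autoscaler
  scaleTargetRef:  # must reference an existing scalable resource in the same namespace
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    name: worker-us-east-1a

You could then create the object with a command such as the following, where the file name is a placeholder:

$ oc apply -f machine-autoscaler.yaml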
Chapter 6. Installing a cluster on OpenStack with Kuryr on your own infrastructure | Chapter 6. Installing a cluster on OpenStack with Kuryr on your own infrastructure Important Kuryr is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. In OpenShift Container Platform version 4.14, you can install a cluster on Red Hat OpenStack Platform (RHOSP) that runs on user-provisioned infrastructure. Using your own infrastructure allows you to integrate your cluster with existing infrastructure and modifications. The process requires more labor on your part than installer-provisioned installations, because you must create all RHOSP resources, like Nova servers, Neutron ports, and security groups. However, Red Hat provides Ansible playbooks to help you in the deployment process. 6.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You verified that OpenShift Container Platform 4.14 is compatible with your RHOSP version by using the Supported platforms for OpenShift clusters section. You can also compare platform support across different versions by viewing the OpenShift Container Platform on RHOSP support matrix . You have an RHOSP account where you want to install OpenShift Container Platform. You understand performance and scalability practices for cluster scaling, control plane sizing, and etcd. For more information, see Recommended practices for scaling the cluster . On the machine from which you run the installation program, you have: A single directory in which you can keep the files you create during the installation process Python 3 6.2. About Kuryr SDN Important Kuryr is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. Kuryr is a container network interface (CNI) plugin solution that uses the Neutron and Octavia Red Hat OpenStack Platform (RHOSP) services to provide networking for pods and Services. Kuryr and OpenShift Container Platform integration is primarily designed for OpenShift Container Platform clusters running on RHOSP VMs. Kuryr improves the network performance by plugging OpenShift Container Platform pods into RHOSP SDN. In addition, it provides interconnectivity between pods and RHOSP virtual instances. Kuryr components are installed as pods in OpenShift Container Platform using the openshift-kuryr namespace: kuryr-controller - a single service instance installed on a master node. This is modeled in OpenShift Container Platform as a Deployment object. kuryr-cni - a container installing and configuring Kuryr as a CNI driver on each OpenShift Container Platform node. 
This is modeled in OpenShift Container Platform as a DaemonSet object. The Kuryr controller watches the OpenShift Container Platform API server for pod, service, and namespace create, update, and delete events. It maps the OpenShift Container Platform API calls to corresponding objects in Neutron and Octavia. This means that every network solution that implements the Neutron trunk port functionality can be used to back OpenShift Container Platform via Kuryr. This includes open source solutions such as Open vSwitch (OVS) and Open Virtual Network (OVN) as well as Neutron-compatible commercial SDNs. Kuryr is recommended for OpenShift Container Platform deployments on encapsulated RHOSP tenant networks to avoid double encapsulation, such as running an encapsulated OpenShift Container Platform SDN over an RHOSP network. If you use provider networks or tenant VLANs, you do not need to use Kuryr to avoid double encapsulation. The performance benefit is negligible. Depending on your configuration, though, using Kuryr to avoid having two overlays might still be beneficial. Kuryr is not recommended in deployments where all of the following criteria are true: The RHOSP version is less than 16. The deployment uses UDP services, or a large number of TCP services on few hypervisors. or The ovn-octavia Octavia driver is disabled. The deployment uses a large number of TCP services on few hypervisors. 6.3. Resource guidelines for installing OpenShift Container Platform on RHOSP with Kuryr When using Kuryr SDN, the pods, services, namespaces, and network policies are using resources from the RHOSP quota; this increases the minimum requirements. Kuryr also has some additional requirements on top of what a default install requires. Use the following quota to satisfy a default cluster's minimum requirements: Table 6.1. Recommended resources for a default OpenShift Container Platform cluster on RHOSP with Kuryr Resource Value Floating IP addresses 3 - plus the expected number of Services of LoadBalancer type Ports 1500 - 1 needed per Pod Routers 1 Subnets 250 - 1 needed per Namespace/Project Networks 250 - 1 needed per Namespace/Project RAM 112 GB vCPUs 28 Volume storage 275 GB Instances 7 Security groups 250 - 1 needed per Service and per NetworkPolicy Security group rules 1000 Server groups 2 - plus 1 for each additional availability zone in each machine pool Load balancers 100 - 1 needed per Service Load balancer listeners 500 - 1 needed per Service-exposed port Load balancer pools 500 - 1 needed per Service-exposed port A cluster might function with fewer than recommended resources, but its performance is not guaranteed. Important If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry. Important If you are using Red Hat OpenStack Platform (RHOSP) version 16 with the Amphora driver rather than the OVN Octavia driver, security groups are associated with service accounts instead of user projects. Take the following notes into consideration when setting resources: The number of ports that are required is larger than the number of pods. Kuryr uses ports pools to have pre-created ports ready to be used by pods and speed up the pods' booting time. 
Each network policy is mapped into an RHOSP security group, and depending on the NetworkPolicy spec, one or more rules are added to the security group. Each service is mapped to an RHOSP load balancer. Consider this requirement when estimating the number of security groups required for the quota. If you are using RHOSP version 15 or earlier, or the ovn-octavia driver , each load balancer has a security group with the user project. The quota does not account for load balancer resources (such as VM resources), but you must consider these resources when you decide the RHOSP deployment's size. The default installation will have more than 50 load balancers; the clusters must be able to accommodate them. If you are using RHOSP version 16 with the OVN Octavia driver enabled, only one load balancer VM is generated; services are load balanced through OVN flows. An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine. To enable Kuryr SDN, your environment must meet the following requirements: Run RHOSP 13+. Have Overcloud with Octavia. Use Neutron Trunk ports extension. Use openvswitch firewall driver if ML2/OVS Neutron driver is used instead of ovs-hybrid . 6.3.1. Increasing quota When using Kuryr SDN, you must increase quotas to satisfy the Red Hat OpenStack Platform (RHOSP) resources used by pods, services, namespaces, and network policies. Procedure Increase the quotas for a project by running the following command: USD sudo openstack quota set --secgroups 250 --secgroup-rules 1000 --ports 1500 --subnets 250 --networks 250 <project> 6.3.2. Configuring Neutron Kuryr CNI leverages the Neutron Trunks extension to plug containers into the Red Hat OpenStack Platform (RHOSP) SDN, so you must use the trunks extension for Kuryr to properly work. In addition, if you leverage the default ML2/OVS Neutron driver, the firewall must be set to openvswitch instead of ovs_hybrid so that security groups are enforced on trunk subports and Kuryr can properly handle network policies. 6.3.3. Configuring Octavia Kuryr SDN uses Red Hat OpenStack Platform (RHOSP)'s Octavia LBaaS to implement OpenShift Container Platform services. Thus, you must install and configure Octavia components in RHOSP to use Kuryr SDN. To enable Octavia, you must include the Octavia service during the installation of the RHOSP Overcloud, or upgrade the Octavia service if the Overcloud already exists. The following steps for enabling Octavia apply to both a clean install of the Overcloud or an Overcloud update. Note The following steps only capture the key pieces required during the deployment of RHOSP when dealing with Octavia. It is also important to note that registry methods vary. This example uses the local registry method. Procedure If you are using the local registry, create a template to upload the images to the registry. For example: (undercloud) USD openstack overcloud container image prepare \ -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml \ --namespace=registry.access.redhat.com/rhosp13 \ --push-destination=<local-ip-from-undercloud.conf>:8787 \ --prefix=openstack- \ --tag-from-label {version}-{product-version} \ --output-env-file=/home/stack/templates/overcloud_images.yaml \ --output-images-file /home/stack/local_registry_images.yaml Verify that the local_registry_images.yaml file contains the Octavia images. For example: ... 
- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-api:13.0-43 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-health-manager:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-housekeeping:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-worker:13.0-44 push_destination: <local-ip-from-undercloud.conf>:8787 Note The Octavia container versions vary depending upon the specific RHOSP release installed. Pull the container images from registry.redhat.io to the Undercloud node: (undercloud) USD sudo openstack overcloud container image upload \ --config-file /home/stack/local_registry_images.yaml \ --verbose This may take some time depending on the speed of your network and Undercloud disk. Install or update your Overcloud environment with Octavia: USD openstack overcloud deploy --templates \ -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml \ -e octavia_timeouts.yaml Note This command only includes the files associated with Octavia; it varies based on your specific installation of RHOSP. See the RHOSP documentation for further information. For more information on customizing your Octavia installation, see installation of Octavia using Director . Note When leveraging Kuryr SDN, the Overcloud installation requires the Neutron trunk extension. This is available by default on director deployments. Use the openvswitch firewall instead of the default ovs-hybrid when the Neutron backend is ML2/OVS. There is no need for modifications if the backend is ML2/OVN. 6.3.3.1. The Octavia OVN Driver Octavia supports multiple provider drivers through the Octavia API. To see all available Octavia provider drivers, on a command line, enter: USD openstack loadbalancer provider list Example output +---------+-------------------------------------------------+ | name | description | +---------+-------------------------------------------------+ | amphora | The Octavia Amphora driver. | | octavia | Deprecated alias of the Octavia Amphora driver. | | ovn | Octavia OVN driver. | +---------+-------------------------------------------------+ Beginning with RHOSP version 16, the Octavia OVN provider driver ( ovn ) is supported on OpenShift Container Platform on RHOSP deployments. ovn is an integration driver for the load balancing that Octavia and OVN provide. It supports basic load balancing capabilities, and is based on OpenFlow rules. The driver is automatically enabled in Octavia by Director on deployments that use OVN Neutron ML2. The Amphora provider driver is the default driver. If ovn is enabled, however, Kuryr uses it. If Kuryr uses ovn instead of Amphora, it offers the following benefits: Decreased resource requirements. Kuryr does not require a load balancer VM for each service. Reduced network latency. Increased service creation speed by using OpenFlow rules instead of a VM for each service. Distributed load balancing actions across all nodes instead of centralized on Amphora VMs. 6.3.4. Known limitations of installing with Kuryr Using OpenShift Container Platform with Kuryr SDN has several known limitations. RHOSP general limitations Using OpenShift Container Platform with Kuryr SDN has several limitations that apply to all versions and environments: Service objects with the NodePort type are not supported. 
Clusters that use the OVN Octavia provider driver support Service objects for which the .spec.selector property is unspecified only if the .subsets.addresses property of the Endpoints object includes the subnet of the nodes or pods. If the subnet on which machines are created is not connected to a router, or if the subnet is connected, but the router has no external gateway set, Kuryr cannot create floating IPs for Service objects with type LoadBalancer . Configuring the sessionAffinity=ClientIP property on Service objects does not have an effect. Kuryr does not support this setting. RHOSP version limitations Using OpenShift Container Platform with Kuryr SDN has several limitations that depend on the RHOSP version. RHOSP versions before 16 use the default Octavia load balancer driver (Amphora). This driver requires that one Amphora load balancer VM is deployed per OpenShift Container Platform service. Creating too many services can cause you to run out of resources. Deployments of later versions of RHOSP that have the OVN Octavia driver disabled also use the Amphora driver. They are subject to the same resource concerns as earlier versions of RHOSP. Kuryr SDN does not support automatic unidling by a service. RHOSP upgrade limitations As a result of the RHOSP upgrade process, the Octavia API might be changed, and upgrades to the Amphora images that are used for load balancers might be required. You can address API changes on an individual basis. If the Amphora image is upgraded, the RHOSP operator can handle existing load balancer VMs in two ways: Upgrade each VM by triggering a load balancer failover . Leave responsibility for upgrading the VMs to users. If the operator takes the first option, there might be short downtimes during failovers. If the operator takes the second option, the existing load balancers will not support upgraded Octavia API features, like UDP listeners. In this case, users must recreate their Services to use these features. 6.3.5. Control plane machines By default, the OpenShift Container Platform installation process creates three control plane machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 6.3.6. Compute machines By default, the OpenShift Container Platform installation process creates three compute machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 8 GB memory and 2 vCPUs At least 100 GB storage space from the RHOSP quota Tip Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can. 6.3.7. Bootstrap machine During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After the production control plane is ready, the bootstrap machine is deprovisioned. The bootstrap machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 6.4. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.14, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. 
Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 6.5. Downloading playbook dependencies The Ansible playbooks that simplify the installation process on user-provisioned infrastructure require several Python modules. On the machine where you will run the installer, add the modules' repositories and then download them. Note These instructions assume that you are using Red Hat Enterprise Linux (RHEL) 8. Prerequisites Python 3 is installed on your machine. Procedure On a command line, add the repositories: Register with Red Hat Subscription Manager: USD sudo subscription-manager register # If not done already Pull the latest subscription data: USD sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already Disable the current repositories: USD sudo subscription-manager repos --disable=* # If not done already Add the required repositories: USD sudo subscription-manager repos \ --enable=rhel-8-for-x86_64-baseos-rpms \ --enable=openstack-16-tools-for-rhel-8-x86_64-rpms \ --enable=ansible-2.9-for-rhel-8-x86_64-rpms \ --enable=rhel-8-for-x86_64-appstream-rpms Install the modules: USD sudo yum install python3-openstackclient ansible python3-openstacksdk python3-netaddr ansible-collections-openstack Ensure that the python command points to python3 : USD sudo alternatives --set python /usr/bin/python3 6.6. Downloading the installation playbooks Download Ansible playbooks that you can use to install OpenShift Container Platform on your own Red Hat OpenStack Platform (RHOSP) infrastructure. Prerequisites The curl command-line tool is available on your machine. 
Procedure To download the playbooks to your working directory, run the following script from a command line: USD xargs -n 1 curl -O <<< ' https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/common.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/inventory.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/down-bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/down-compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/down-control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/down-load-balancers.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/down-network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/down-security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/down-containers.yaml' The playbooks are downloaded to your machine. Important During the installation process, you can modify the playbooks to configure your deployment. Retain all playbooks for the life of your cluster. You must have the playbooks to remove your OpenShift Container Platform cluster from RHOSP. Important You must match any edits you make in the bootstrap.yaml , compute-nodes.yaml , control-plane.yaml , network.yaml , and security-groups.yaml files to the corresponding playbooks that are prefixed with down- . For example, edits to the bootstrap.yaml file must be reflected in the down-bootstrap.yaml file, too. If you do not edit both files, the supported cluster removal process will fail. 6.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. 
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 6.8. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. 
Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 6.9. Creating the Red Hat Enterprise Linux CoreOS (RHCOS) image The OpenShift Container Platform installation program requires that a Red Hat Enterprise Linux CoreOS (RHCOS) image be present in the Red Hat OpenStack Platform (RHOSP) cluster. Retrieve the latest RHCOS image, then upload it using the RHOSP CLI. Prerequisites The RHOSP CLI is installed. Procedure Log in to the Red Hat Customer Portal's Product Downloads page . Under Version , select the most recent release of OpenShift Container Platform 4.14 for Red Hat Enterprise Linux (RHEL) 8. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Download the Red Hat Enterprise Linux CoreOS (RHCOS) - OpenStack Image (QCOW) . Decompress the image. Note You must decompress the RHOSP image before the cluster can use it. The name of the downloaded file might not contain a compression extension, like .gz or .tgz . To find out if or how the file is compressed, in a command line, enter: USD file <name_of_downloaded_file> From the image that you downloaded, create an image that is named rhcos in your cluster by using the RHOSP CLI: USD openstack image create --container-format=bare --disk-format=qcow2 --file rhcos-USD{RHCOS_VERSION}-openstack.qcow2 rhcos Important Depending on your RHOSP environment, you might be able to upload the image in either .raw or .qcow2 formats . If you use Ceph, you must use the .raw format. Warning If the installation program finds multiple images with the same name, it chooses one of them at random. To avoid this behavior, create unique names for resources in RHOSP. After you upload the image to RHOSP, it is usable in the installation process. 6.10. Verifying external network access The OpenShift Container Platform installation process requires external network access. You must provide an external network value to it, or deployment fails. Before you begin the process, verify that a network with the external router type exists in Red Hat OpenStack Platform (RHOSP). Prerequisites Configure OpenStack's networking service to have DHCP agents forward instances' DNS queries Procedure Using the RHOSP CLI, verify the name and ID of the 'External' network: USD openstack network list --long -c ID -c Name -c "Router Type" Example output +--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+ A network with an external router type appears in the network list. If at least one does not, see Creating a default floating IP network and Creating a default provider network . Note If the Neutron trunk service plugin is enabled, a trunk port is created by default. For more information, see Neutron trunk port . 6.11. 
Enabling access to the environment At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments. You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally. 6.11.1. Enabling access with floating IP addresses Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API, cluster applications, and the bootstrap process. Procedure Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP: USD openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network> Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP: USD openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network> By using the Red Hat OpenStack Platform (RHOSP) CLI, create the bootstrap FIP: USD openstack floating ip create --description "bootstrap machine" <external_network> Add records that follow these patterns to your DNS server for the API and Ingress FIPs: api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP> Note If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file: <api_floating_ip> api.<cluster_name>.<base_domain> <application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain> <application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain> application_floating_ip integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain> The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. You can also use the kubectl or oc . You can access the user applications by using the additional entries pointing to the <application_floating_ip>. This action makes the API and applications accessible to only you, which is not suitable for production deployment, but does allow installation for development and testing. Add the FIPs to the inventory.yaml file as the values of the following variables: os_api_fip os_bootstrap_fip os_ingress_fip If you use these values, you must also enter an external network as the value of the os_external_network variable in the inventory.yaml file. Tip You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration. 6.11.2. Completing installation without floating IP addresses You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses. In the inventory.yaml file, do not define the following variables: os_api_fip os_bootstrap_fip os_ingress_fip If you cannot provide an external network, you can also leave os_external_network blank. If you do not provide a value for os_external_network , a router is not created for you, and, without additional action, the installer will fail to retrieve an image from Glance. 
Later in the installation process, when you create network resources, you must configure external connectivity on your own. If you run the installer with the wait-for command from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines. Note You can enable name resolution by creating DNS records for the API and Ingress ports. For example: api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP> If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing. 6.12. Defining parameters for the installation program The OpenShift Container Platform installation program relies on a file that is called clouds.yaml . The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, log in information, and authorization service URLs. Procedure Create the clouds.yaml file: If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it. Important Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml . If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml , see Config files in the RHOSP documentation. clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0' If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication: Copy the certificate authority file to your machine. Add the cacerts key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate: clouds: shiftstack: ... cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem" Tip After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config Place the clouds.yaml file in one of the following locations: The value of the OS_CLIENT_CONFIG_FILE environment variable The current directory A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml The installation program searches for clouds.yaml in that order. 6.13. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP). Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create the install-config.yaml file. 
Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Note Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command: USD rm -rf ~/.powervs At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select openstack as the platform to target. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for installing the cluster. Specify the floating IP address to use for external access to the OpenShift API. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of this base and will also include the cluster name. Enter a name for your cluster. The name must be 14 or fewer characters long. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. You now have the file install-config.yaml in the directory that you specified. Additional resources Installation configuration parameters for OpenStack 6.13.1. Custom subnets in RHOSP deployments Optionally, you can deploy a cluster on a Red Hat OpenStack Platform (RHOSP) subnet of your choice. The subnet's GUID is passed as the value of platform.openstack.machinesSubnet in the install-config.yaml file. This subnet is used as the cluster's primary subnet. By default, nodes and ports are created on it. You can create nodes and ports on a different RHOSP subnet by setting the value of the platform.openstack.machinesSubnet property to the subnet's UUID. Before you run the OpenShift Container Platform installer with a custom subnet, verify that your configuration meets the following requirements: The subnet that is used by platform.openstack.machinesSubnet has DHCP enabled. The CIDR of platform.openstack.machinesSubnet matches the CIDR of networking.machineNetwork . The installation program user has permission to create ports on this network, including ports with fixed IP addresses. 
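You can verify the first two requirements from a command line before you edit the install-config.yaml file. The following is an illustrative sketch; <machines_subnet> is a placeholder for the name or UUID of your subnet, and id, cidr, and enable_dhcp are standard RHOSP networking service subnet attributes:

$ openstack subnet show <machines_subnet> -c id -c cidr -c enable_dhcp

The enable_dhcp field must be True, the cidr field must match the CIDR that you set as the value of networking.machineNetwork, and the id field is the UUID to use as the value of platform.openstack.machinesSubnet.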
Clusters that use custom subnets have the following limitations: If you plan to install a cluster that uses floating IP addresses, the platform.openstack.machinesSubnet subnet must be attached to a router that is connected to the externalNetwork network. If the platform.openstack.machinesSubnet value is set in the install-config.yaml file, the installation program does not create a private network or subnet for your RHOSP machines. You cannot use the platform.openstack.externalDNS property at the same time as a custom subnet. To add DNS to a cluster that uses a custom subnet, configure DNS on the RHOSP network. Note By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network's CIDR block. To override these default values, set values for platform.openstack.apiVIPs and platform.openstack.ingressVIPs that are outside of the DHCP allocation pool. Important The CIDR ranges for networks are not adjustable after cluster installation. Red Hat does not provide direct guidance on determining the range during cluster installation because it requires careful consideration of the number of created pods per namespace. 6.13.2. Sample customized install-config.yaml file for RHOSP with Kuryr To deploy with Kuryr SDN instead of the default OVN-Kubernetes network plugin, you must modify the install-config.yaml file to include Kuryr as the desired networking.networkType . This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform (RHOSP) customization options. Important This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program. apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 1 networkType: Kuryr 2 platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 trunkSupport: true 3 octaviaSupport: true 4 pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... 1 The Amphora Octavia driver creates two ports per load balancer. As a result, the service subnet that the installer creates is twice the size of the CIDR that is specified as the value of the serviceNetwork property. The larger range is required to prevent IP address conflicts. 2 The cluster network plugin to install. The supported values are Kuryr , OVNKubernetes , and OpenShiftSDN . The default value is OVNKubernetes . 3 4 Both trunkSupport and octaviaSupport are automatically discovered by the installer, so there is no need to set them. But if your environment does not meet both requirements, Kuryr SDN will not properly work. Trunks are needed to connect the pods to the RHOSP network and Octavia is required to create the OpenShift Container Platform services. 6.13.3. Cluster deployment on RHOSP provider networks You can deploy your OpenShift Container Platform clusters on Red Hat OpenStack Platform (RHOSP) with a primary network interface on a provider network. Provider networks are commonly used to give projects direct access to a public network that can be used to reach the internet. You can also share provider networks among projects as part of the network creation process. RHOSP provider networks map directly to an existing physical network in the data center. A RHOSP administrator must create them. 
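For illustration only, an administrator might create a flat provider network and its subnet with commands similar to the following; the datacentre physical network name and the 192.0.2.0/24 range are assumptions that depend on your data center configuration:

$ openstack network create --share --external \
    --provider-network-type flat \
    --provider-physical-network datacentre \
    provider-net
$ openstack subnet create --network provider-net \
    --subnet-range 192.0.2.0/24 \
    --dhcp provider-subnet

The requirements that such a network must meet before you can install a cluster on it are described in the next section.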
In the following example, OpenShift Container Platform workloads are connected to a data center by using a provider network: OpenShift Container Platform clusters that are installed on provider networks do not require tenant networks or floating IP addresses. The installer does not create these resources during installation. Example provider network types include flat (untagged) and VLAN (802.1Q tagged). Note A cluster can support as many provider network connections as the network type allows. For example, VLAN networks typically support up to 4096 connections. You can learn more about provider and tenant networks in the RHOSP documentation . 6.13.3.1. RHOSP provider network requirements for cluster installation Before you install an OpenShift Container Platform cluster, your Red Hat OpenStack Platform (RHOSP) deployment and provider network must meet a number of conditions: The RHOSP networking service (Neutron) is enabled and accessible through the RHOSP networking API. The RHOSP networking service has the port security and allowed address pairs extensions enabled . The provider network can be shared with other tenants. Tip Use the openstack network create command with the --share flag to create a network that can be shared. The RHOSP project that you use to install the cluster must own the provider network, as well as an appropriate subnet. Tip To create a network for a project that is named "openshift," enter the following command USD openstack network create --project openshift To create a subnet for a project that is named "openshift," enter the following command USD openstack subnet create --project openshift To learn more about creating networks on RHOSP, read the provider networks documentation . If the cluster is owned by the admin user, you must run the installer as that user to create ports on the network. Important Provider networks must be owned by the RHOSP project that is used to create the cluster. If they are not, the RHOSP Compute service (Nova) cannot request a port from that network. Verify that the provider network can reach the RHOSP metadata service IP address, which is 169.254.169.254 by default. Depending on your RHOSP SDN and networking service configuration, you might need to provide the route when you create the subnet. For example: USD openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2 ... Optional: To secure the network, create role-based access control (RBAC) rules that limit network access to a single project. 6.13.3.2. Deploying a cluster that has a primary interface on a provider network You can deploy an OpenShift Container Platform cluster that has its primary network interface on an Red Hat OpenStack Platform (RHOSP) provider network. Prerequisites Your Red Hat OpenStack Platform (RHOSP) deployment is configured as described by "RHOSP provider network requirements for cluster installation". Procedure In a text editor, open the install-config.yaml file. Set the value of the platform.openstack.apiVIPs property to the IP address for the API VIP. Set the value of the platform.openstack.ingressVIPs property to the IP address for the Ingress VIP. Set the value of the platform.openstack.machinesSubnet property to the UUID of the provider network subnet. Set the value of the networking.machineNetwork.cidr property to the CIDR block of the provider network subnet. 
Important The platform.openstack.apiVIPs and platform.openstack.ingressVIPs properties must both be unassigned IP addresses from the networking.machineNetwork.cidr block. Section of an installation configuration file for a cluster that relies on a RHOSP provider network ... platform: openstack: apiVIPs: 1 - 192.0.2.13 ingressVIPs: 2 - 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # ... networking: machineNetwork: - cidr: 192.0.2.0/24 1 2 In OpenShift Container Platform 4.12 and later, the apiVIP and ingressVIP configuration settings are deprecated. Instead, use a list format to enter values in the apiVIPs and ingressVIPs configuration settings. Warning You cannot set the platform.openstack.externalNetwork or platform.openstack.externalDNS parameters while using a provider network for the primary network interface. When you deploy the cluster, the installer uses the install-config.yaml file to deploy the cluster on the provider network. Tip You can add additional networks, including provider networks, to the platform.openstack.additionalNetworkIDs list. After you deploy your cluster, you can attach pods to additional networks. For more information, see Understanding multiple networks . 6.13.4. Kuryr ports pools A Kuryr ports pool maintains a number of ports on standby for pod creation. Keeping ports on standby minimizes pod creation time. Without ports pools, Kuryr must explicitly request port creation or deletion whenever a pod is created or deleted. The Neutron ports that Kuryr uses are created in subnets that are tied to namespaces. These pod ports are also added as subports to the primary port of OpenShift Container Platform cluster nodes. Because Kuryr keeps each namespace in a separate subnet, a separate ports pool is maintained for each namespace-worker pair. Prior to installing a cluster, you can set the following parameters in the cluster-network-03-config.yml manifest file to configure ports pool behavior: The enablePortPoolsPrepopulation parameter controls pool prepopulation, which forces Kuryr to add Neutron ports to the pools when the first pod that is configured to use the dedicated network for pods is created in a namespace. The default value is false . The poolMinPorts parameter is the minimum number of free ports that are kept in the pool. The default value is 1 . The poolMaxPorts parameter is the maximum number of free ports that are kept in the pool. A value of 0 disables that upper bound. This is the default setting. If your OpenStack port quota is low, or you have a limited number of IP addresses on the pod network, consider setting this option to ensure that unneeded ports are deleted. The poolBatchPorts parameter defines the maximum number of Neutron ports that can be created at once. The default value is 3 . 6.13.5. Adjusting Kuryr ports pools during installation During installation, you can configure how Kuryr manages Red Hat OpenStack Platform (RHOSP) Neutron ports to control the speed and efficiency of pod creation. Prerequisites Create and modify the install-config.yaml file. Procedure From a command line, create the manifest files: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the name of the directory that contains the install-config.yaml file for your cluster. 
Create a file that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: USD touch <installation_directory>/manifests/cluster-network-03-config.yml 1 1 For <installation_directory> , specify the directory name that contains the manifests/ directory for your cluster. After creating the file, several network configuration files are in the manifests/ directory, as shown: USD ls <installation_directory>/manifests/cluster-network-* Example output cluster-network-01-crd.yml cluster-network-02-config.yml cluster-network-03-config.yml Open the cluster-network-03-config.yml file in an editor, and enter a custom resource (CR) that describes the Cluster Network Operator configuration that you want: USD oc edit networks.operator.openshift.io cluster Edit the settings to meet your requirements. The following file is provided as an example: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: - 172.30.0.0/16 defaultNetwork: type: Kuryr kuryrConfig: enablePortPoolsPrepopulation: false 1 poolMinPorts: 1 2 poolBatchPorts: 3 3 poolMaxPorts: 5 4 openstackServiceNetwork: 172.30.0.0/15 5 1 Set enablePortPoolsPrepopulation to true to make Kuryr create new Neutron ports when the first pod on the network for pods is created in a namespace. This setting raises the Neutron ports quota but can reduce the time that is required to spawn pods. The default value is false . 2 Kuryr creates new ports for a pool if the number of free ports in that pool is lower than the value of poolMinPorts . The default value is 1 . 3 poolBatchPorts controls the number of new ports that are created if the number of free ports is lower than the value of poolMinPorts . The default value is 3 . 4 If the number of free ports in a pool is higher than the value of poolMaxPorts , Kuryr deletes them until the number matches that value. Setting this value to 0 disables this upper bound, preventing pools from shrinking. The default value is 0 . 5 The openStackServiceNetwork parameter defines the CIDR range of the network from which IP addresses are allocated to RHOSP Octavia's LoadBalancers. If this parameter is used with the Amphora driver, Octavia takes two IP addresses from this network for each load balancer: one for OpenShift and the other for VRRP connections. Because these IP addresses are managed by OpenShift Container Platform and Neutron respectively, they must come from different pools. Therefore, the value of openStackServiceNetwork must be at least twice the size of the value of serviceNetwork , and the value of serviceNetwork must overlap entirely with the range that is defined by openStackServiceNetwork . The CNO verifies that VRRP IP addresses that are taken from the range that is defined by this parameter do not overlap with the range that is defined by the serviceNetwork parameter. If this parameter is not set, the CNO uses an expanded value of serviceNetwork that is determined by decrementing the prefix size by 1. Save the cluster-network-03-config.yml file, and exit the text editor. Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory while creating the cluster. 6.13.6. Setting a custom subnet for machines The IP range that the installation program uses by default might not match the Neutron subnet that you create when you install OpenShift Container Platform. 
If necessary, update the CIDR value for new machines by editing the installation configuration file. Prerequisites You have the install-config.yaml file that was generated by the OpenShift Container Platform installation program. Procedure On a command line, browse to the directory that contains install-config.yaml . From that directory, either run a script to edit the install-config.yaml file or update the file manually: To set the value by using a script, run: USD python -c ' import yaml; path = "install-config.yaml"; data = yaml.safe_load(open(path)); data["networking"]["machineNetwork"] = [{"cidr": "192.168.0.0/18"}]; 1 open(path, "w").write(yaml.dump(data, default_flow_style=False))' 1 Insert a value that matches your intended Neutron subnet, for example 192.0.2.0/24 . To set the value manually, open the file and set the cidr value under networking.machineNetwork to a block that matches your intended Neutron subnet. 6.13.7. Emptying compute machine pools To proceed with an installation that uses your own infrastructure, set the number of compute machines in the installation configuration file to zero. Later, you create these machines manually. Prerequisites You have the install-config.yaml file that was generated by the OpenShift Container Platform installation program. Procedure On a command line, browse to the directory that contains install-config.yaml . From that directory, either run a script to edit the install-config.yaml file or update the file manually: To set the value by using a script, run: USD python -c ' import yaml; path = "install-config.yaml"; data = yaml.safe_load(open(path)); data["compute"][0]["replicas"] = 0; open(path, "w").write(yaml.dump(data, default_flow_style=False))' To set the value manually, open the file and set the value of compute.<first entry>.replicas to 0 . 6.13.8. Modifying the network type By default, the installation program selects the OVNKubernetes network type. To use Kuryr instead, change the value in the installation configuration file that the program generated. Prerequisites You have the file install-config.yaml that was generated by the OpenShift Container Platform installation program. Procedure In a command prompt, browse to the directory that contains install-config.yaml . From that directory, either run a script to edit the install-config.yaml file or update the file manually: To set the value by using a script, run: USD python -c ' import yaml; path = "install-config.yaml"; data = yaml.safe_load(open(path)); data["networking"]["networkType"] = "Kuryr"; open(path, "w").write(yaml.dump(data, default_flow_style=False))' To set the value manually, open the file and set networking.networkType to "Kuryr" . 6.14. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates.
The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines, compute machine sets, and control plane machine sets: USD rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml Because you create and manage these resources yourself, you do not have to initialize them. You can preserve the compute machine set files to create compute machines by using the machine API, but you must update references to them to match your environment. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: Export the metadata file's infraID key as an environment variable: USD export INFRA_ID=USD(jq -r .infraID metadata.json) Tip Extract the infraID key from metadata.json and use it as a prefix for all of the RHOSP resources that you create. By doing so, you avoid name conflicts when making multiple deployments in the same project. 6.15. Preparing the bootstrap Ignition files The OpenShift Container Platform installation process relies on bootstrap machines that are created from a bootstrap Ignition configuration file. Edit the file and upload it. Then, create a secondary bootstrap Ignition configuration file that Red Hat OpenStack Platform (RHOSP) uses to download the primary file. Prerequisites You have the bootstrap Ignition file that the installer program generates, bootstrap.ign . The infrastructure ID from the installer's metadata file is set as an environment variable ( USDINFRA_ID ). 
If the variable is not set, see Creating the Kubernetes manifest and Ignition config files . You have an HTTP(S)-accessible way to store the bootstrap Ignition file. The documented procedure uses the RHOSP image service (Glance), but you can also use the RHOSP storage service (Swift), Amazon S3, an internal HTTP server, or an ad hoc Nova server. Procedure Run the following Python script. The script modifies the bootstrap Ignition file to set the hostname and, if available, CA certificate file when it runs: import base64 import json import os with open('bootstrap.ign', 'r') as f: ignition = json.load(f) files = ignition['storage'].get('files', []) infra_id = os.environ.get('INFRA_ID', 'openshift').encode() hostname_b64 = base64.standard_b64encode(infra_id + b'-bootstrap\n').decode().strip() files.append( { 'path': '/etc/hostname', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + hostname_b64 } }) ca_cert_path = os.environ.get('OS_CACERT', '') if ca_cert_path: with open(ca_cert_path, 'r') as f: ca_cert = f.read().encode() ca_cert_b64 = base64.standard_b64encode(ca_cert).decode().strip() files.append( { 'path': '/opt/openshift/tls/cloud-ca-cert.pem', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + ca_cert_b64 } }) ignition['storage']['files'] = files; with open('bootstrap.ign', 'w') as f: json.dump(ignition, f) Using the RHOSP CLI, create an image that uses the bootstrap Ignition file: USD openstack image create --disk-format=raw --container-format=bare --file bootstrap.ign <image_name> Get the image's details: USD openstack image show <image_name> Make a note of the file value; it follows the pattern v2/images/<image_ID>/file . Note Verify that the image you created is active. Retrieve the image service's public address: USD openstack catalog show image Combine the public address with the image file value and save the result as the storage location. The location follows the pattern <image_service_public_URL>/v2/images/<image_ID>/file . Generate an auth token and save the token ID: USD openstack token issue -c id -f value Insert the following content into a file called USDINFRA_ID-bootstrap-ignition.json and edit the placeholders to match your own values: { "ignition": { "config": { "merge": [{ "source": "<storage_url>", 1 "httpHeaders": [{ "name": "X-Auth-Token", 2 "value": "<token_ID>" 3 }] }] }, "security": { "tls": { "certificateAuthorities": [{ "source": "data:text/plain;charset=utf-8;base64,<base64_encoded_certificate>" 4 }] } }, "version": "3.2.0" } } 1 Replace the value of ignition.config.merge.source with the bootstrap Ignition file storage URL. 2 Set name in httpHeaders to "X-Auth-Token" . 3 Set value in httpHeaders to your token's ID. 4 If the bootstrap Ignition file server uses a self-signed certificate, include the base64-encoded certificate. Save the secondary Ignition config file. The bootstrap Ignition data will be passed to RHOSP during installation. Warning The bootstrap Ignition file contains sensitive information, like clouds.yaml credentials. Ensure that you store it in a secure place, and delete it after you complete the installation process. 6.16. Creating control plane Ignition config files on RHOSP Installing OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) on your own infrastructure requires control plane Ignition config files. You must create multiple config files. Note As with the bootstrap Ignition configuration, you must explicitly define a hostname for each control plane machine. 
Prerequisites The infrastructure ID from the installation program's metadata file is set as an environment variable ( USDINFRA_ID ). If the variable is not set, see "Creating the Kubernetes manifest and Ignition config files". Procedure On a command line, run the following Python script: USD for index in USD(seq 0 2); do MASTER_HOSTNAME="USDINFRA_ID-master-USDindex\n" python -c "import base64, json, sys; ignition = json.load(sys.stdin); storage = ignition.get('storage', {}); files = storage.get('files', []); files.append({'path': '/etc/hostname', 'mode': 420, 'contents': {'source': 'data:text/plain;charset=utf-8;base64,' + base64.standard_b64encode(b'USDMASTER_HOSTNAME').decode().strip(), 'verification': {}}, 'filesystem': 'root'}); storage['files'] = files; ignition['storage'] = storage json.dump(ignition, sys.stdout)" <master.ign >"USDINFRA_ID-master-USDindex-ignition.json" done You now have three control plane Ignition files: <INFRA_ID>-master-0-ignition.json , <INFRA_ID>-master-1-ignition.json , and <INFRA_ID>-master-2-ignition.json . 6.17. Creating network resources on RHOSP Create the network resources that an OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) installation on your own infrastructure requires. To save time, run supplied Ansible playbooks that generate security groups, networks, subnets, routers, and ports. Prerequisites Python 3 is installed on your machine. You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". Procedure Optional: Add an external network value to the inventory.yaml playbook: Example external network value in the inventory.yaml Ansible playbook ... # The public network providing connectivity to the cluster. If not # provided, the cluster external connectivity must be provided in another # way. # Required for os_api_fip, os_ingress_fip, os_bootstrap_fip. os_external_network: 'external' ... Important If you did not provide a value for os_external_network in the inventory.yaml file, you must ensure that VMs can access Glance and an external connection yourself. Optional: Add external network and floating IP (FIP) address values to the inventory.yaml playbook: Example FIP values in the inventory.yaml Ansible playbook ... # OpenShift API floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the Control Plane to # serve the OpenShift API. os_api_fip: '203.0.113.23' # OpenShift Ingress floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the worker nodes to serve # the applications. os_ingress_fip: '203.0.113.19' # If this value is non-empty, the corresponding floating IP will be # attached to the bootstrap machine. This is needed for collecting logs # in case of install failure. os_bootstrap_fip: '203.0.113.20' Important If you do not define values for os_api_fip and os_ingress_fip , you must perform postinstallation network configuration. If you do not define a value for os_bootstrap_fip , the installer cannot download debugging information from failed installations. See "Enabling access to the environment" for more information. 
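Before you run the playbooks in the steps that follow, you can optionally verify that any floating IP addresses you added to inventory.yaml exist in your RHOSP project. This sketch assumes the sample inventory layout, in which the variables live under all.hosts.localhost, and uses the openstacksdk library from the playbook dependencies; replace the placeholder cloud name with the entry from your clouds.yaml file.

import openstack
import yaml

inventory = yaml.safe_load(open("inventory.yaml"))
host_vars = inventory["all"]["hosts"]["localhost"]

conn = openstack.connect(cloud="mycloud")  # placeholder cloud name
known_fips = {fip.floating_ip_address for fip in conn.list_floating_ips()}

for key in ("os_api_fip", "os_ingress_fip", "os_bootstrap_fip"):
    value = host_vars.get(key)
    if not value:
        print(f"{key} is not set; see the notes above for what that implies")
    elif value not in known_fips:
        print(f"{key}={value} does not match any floating IP in this project")
    else:
        print(f"{key}={value} found")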
On a command line, create security groups by running the security-groups.yaml playbook: USD ansible-playbook -i inventory.yaml security-groups.yaml On a command line, create a network, subnet, and router by running the network.yaml playbook: USD ansible-playbook -i inventory.yaml network.yaml Optional: If you want to control the default resolvers that Nova servers use, run the RHOSP CLI command: USD openstack subnet set --dns-nameserver <server_1> --dns-nameserver <server_2> "USDINFRA_ID-nodes" 6.18. Creating the bootstrap machine on RHOSP Create a bootstrap machine and give it the network access it needs to run on Red Hat OpenStack Platform (RHOSP). Red Hat provides an Ansible playbook that you run to simplify this process. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The inventory.yaml , common.yaml , and bootstrap.yaml Ansible playbooks are in a common directory. The metadata.json file that the installation program created is in the same directory as the Ansible playbooks. Procedure On a command line, change the working directory to the location of the playbooks. On a command line, run the bootstrap.yaml playbook: USD ansible-playbook -i inventory.yaml bootstrap.yaml After the bootstrap server is active, view the logs to verify that the Ignition files were received: USD openstack console log show "USDINFRA_ID-bootstrap" 6.19. Creating the control plane machines on RHOSP Create three control plane machines by using the Ignition config files that you generated. Red Hat provides an Ansible playbook that you run to simplify this process. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The infrastructure ID from the installation program's metadata file is set as an environment variable ( USDINFRA_ID ). The inventory.yaml , common.yaml , and control-plane.yaml Ansible playbooks are in a common directory. You have the three Ignition files that were created in "Creating control plane Ignition config files". Procedure On a command line, change the working directory to the location of the playbooks. If the control plane Ignition config files aren't already in your working directory, copy them into it. On a command line, run the control-plane.yaml playbook: USD ansible-playbook -i inventory.yaml control-plane.yaml Run the following command to monitor the bootstrapping process: USD openshift-install wait-for bootstrap-complete You will see messages that confirm that the control plane machines are running and have joined the cluster: INFO API v1.27.3 up INFO Waiting up to 30m0s for bootstrapping to complete... ... INFO It is now safe to remove the bootstrap resources 6.20. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 
Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 6.21. Deleting bootstrap resources from RHOSP Delete the bootstrap resources that you no longer need. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The inventory.yaml , common.yaml , and down-bootstrap.yaml Ansible playbooks are in a common directory. The control plane machines are running. If you do not know the status of the machines, see "Verifying cluster status". Procedure On a command line, change the working directory to the location of the playbooks. On a command line, run the down-bootstrap.yaml playbook: USD ansible-playbook -i inventory.yaml down-bootstrap.yaml The bootstrap port, server, and floating IP address are deleted. Warning If you did not disable the bootstrap Ignition file URL earlier, do so now. 6.22. Creating compute machines on RHOSP After standing up the control plane, create compute machines. Red Hat provides an Ansible playbook that you run to simplify this process. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The inventory.yaml , common.yaml , and compute-nodes.yaml Ansible playbooks are in a common directory. The metadata.json file that the installation program created is in the same directory as the Ansible playbooks. The control plane is active. Procedure On a command line, change the working directory to the location of the playbooks. On a command line, run the playbook: USD ansible-playbook -i inventory.yaml compute-nodes.yaml steps Approve the certificate signing requests for the machines. 6.23. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.27.3 master-1 Ready master 63m v1.27.3 master-2 Ready master 64m v1.27.3 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. 
You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.27.3 master-1 Ready master 73m v1.27.3 master-2 Ready master 74m v1.27.3 worker-0 Ready worker 11m v1.27.3 worker-1 Ready worker 11m v1.27.3 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 6.24. Verifying a successful installation Verify that the OpenShift Container Platform installation is complete. Prerequisites You have the installation program ( openshift-install ) Procedure On a command line, enter: USD openshift-install --log-level debug wait-for install-complete The program outputs the console URL, as well as the administrator's login information. 6.25. 
Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 6.26. steps Customize your cluster . If necessary, you can opt out of remote health reporting . If you need to enable external access to node ports, configure ingress cluster traffic by using a node port . If you did not configure RHOSP to accept application traffic over floating IP addresses, configure RHOSP access with floating IP addresses . | [
"sudo openstack quota set --secgroups 250 --secgroup-rules 1000 --ports 1500 --subnets 250 --networks 250 <project>",
"(undercloud) USD openstack overcloud container image prepare -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml --namespace=registry.access.redhat.com/rhosp13 --push-destination=<local-ip-from-undercloud.conf>:8787 --prefix=openstack- --tag-from-label {version}-{product-version} --output-env-file=/home/stack/templates/overcloud_images.yaml --output-images-file /home/stack/local_registry_images.yaml",
"- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-api:13.0-43 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-health-manager:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-housekeeping:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-worker:13.0-44 push_destination: <local-ip-from-undercloud.conf>:8787",
"(undercloud) USD sudo openstack overcloud container image upload --config-file /home/stack/local_registry_images.yaml --verbose",
"openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml -e octavia_timeouts.yaml",
"openstack loadbalancer provider list",
"+---------+-------------------------------------------------+ | name | description | +---------+-------------------------------------------------+ | amphora | The Octavia Amphora driver. | | octavia | Deprecated alias of the Octavia Amphora driver. | | ovn | Octavia OVN driver. | +---------+-------------------------------------------------+",
"sudo subscription-manager register # If not done already",
"sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already",
"sudo subscription-manager repos --disable=* # If not done already",
"sudo subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=openstack-16-tools-for-rhel-8-x86_64-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-appstream-rpms",
"sudo yum install python3-openstackclient ansible python3-openstacksdk python3-netaddr ansible-collections-openstack",
"sudo alternatives --set python /usr/bin/python3",
"xargs -n 1 curl -O <<< ' https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/common.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/inventory.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/down-bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/down-compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/down-control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/down-load-balancers.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/down-network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/down-security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/down-containers.yaml'",
"tar -xvf openshift-install-linux.tar.gz",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"file <name_of_downloaded_file>",
"openstack image create --container-format=bare --disk-format=qcow2 --file rhcos-USD{RHCOS_VERSION}-openstack.qcow2 rhcos",
"openstack network list --long -c ID -c Name -c \"Router Type\"",
"+--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+",
"openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"bootstrap machine\" <external_network>",
"api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>",
"api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>",
"clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'",
"clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"",
"oc edit configmap -n openshift-config cloud-provider-config",
"./openshift-install create install-config --dir <installation_directory> 1",
"rm -rf ~/.powervs",
"apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 1 networkType: Kuryr 2 platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 trunkSupport: true 3 octaviaSupport: true 4 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"openstack network create --project openshift",
"openstack subnet create --project openshift",
"openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2",
"platform: openstack: apiVIPs: 1 - 192.0.2.13 ingressVIPs: 2 - 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # networking: machineNetwork: - cidr: 192.0.2.0/24",
"./openshift-install create manifests --dir <installation_directory> 1",
"touch <installation_directory>/manifests/cluster-network-03-config.yml 1",
"ls <installation_directory>/manifests/cluster-network-*",
"cluster-network-01-crd.yml cluster-network-02-config.yml cluster-network-03-config.yml",
"oc edit networks.operator.openshift.io cluster",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: - 172.30.0.0/16 defaultNetwork: type: Kuryr kuryrConfig: enablePortPoolsPrepopulation: false 1 poolMinPorts: 1 2 poolBatchPorts: 3 3 poolMaxPorts: 5 4 openstackServiceNetwork: 172.30.0.0/15 5",
"python -c ' import yaml; path = \"install-config.yaml\"; data = yaml.safe_load(open(path)); data[\"networking\"][\"machineNetwork\"] = [{\"cidr\": \"192.168.0.0/18\"}]; 1 open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'",
"python -c ' import yaml; path = \"install-config.yaml\"; data = yaml.safe_load(open(path)); data[\"compute\"][0][\"replicas\"] = 0; open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'",
"python -c ' import yaml; path = \"install-config.yaml\"; data = yaml.safe_load(open(path)); data[\"networking\"][\"networkType\"] = \"Kuryr\"; open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"export INFRA_ID=USD(jq -r .infraID metadata.json)",
"import base64 import json import os with open('bootstrap.ign', 'r') as f: ignition = json.load(f) files = ignition['storage'].get('files', []) infra_id = os.environ.get('INFRA_ID', 'openshift').encode() hostname_b64 = base64.standard_b64encode(infra_id + b'-bootstrap\\n').decode().strip() files.append( { 'path': '/etc/hostname', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + hostname_b64 } }) ca_cert_path = os.environ.get('OS_CACERT', '') if ca_cert_path: with open(ca_cert_path, 'r') as f: ca_cert = f.read().encode() ca_cert_b64 = base64.standard_b64encode(ca_cert).decode().strip() files.append( { 'path': '/opt/openshift/tls/cloud-ca-cert.pem', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + ca_cert_b64 } }) ignition['storage']['files'] = files; with open('bootstrap.ign', 'w') as f: json.dump(ignition, f)",
"openstack image create --disk-format=raw --container-format=bare --file bootstrap.ign <image_name>",
"openstack image show <image_name>",
"openstack catalog show image",
"openstack token issue -c id -f value",
"{ \"ignition\": { \"config\": { \"merge\": [{ \"source\": \"<storage_url>\", 1 \"httpHeaders\": [{ \"name\": \"X-Auth-Token\", 2 \"value\": \"<token_ID>\" 3 }] }] }, \"security\": { \"tls\": { \"certificateAuthorities\": [{ \"source\": \"data:text/plain;charset=utf-8;base64,<base64_encoded_certificate>\" 4 }] } }, \"version\": \"3.2.0\" } }",
"for index in USD(seq 0 2); do MASTER_HOSTNAME=\"USDINFRA_ID-master-USDindex\\n\" python -c \"import base64, json, sys; ignition = json.load(sys.stdin); storage = ignition.get('storage', {}); files = storage.get('files', []); files.append({'path': '/etc/hostname', 'mode': 420, 'contents': {'source': 'data:text/plain;charset=utf-8;base64,' + base64.standard_b64encode(b'USDMASTER_HOSTNAME').decode().strip(), 'verification': {}}, 'filesystem': 'root'}); storage['files'] = files; ignition['storage'] = storage json.dump(ignition, sys.stdout)\" <master.ign >\"USDINFRA_ID-master-USDindex-ignition.json\" done",
"# The public network providing connectivity to the cluster. If not # provided, the cluster external connectivity must be provided in another # way. # Required for os_api_fip, os_ingress_fip, os_bootstrap_fip. os_external_network: 'external'",
"# OpenShift API floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the Control Plane to # serve the OpenShift API. os_api_fip: '203.0.113.23' # OpenShift Ingress floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the worker nodes to serve # the applications. os_ingress_fip: '203.0.113.19' # If this value is non-empty, the corresponding floating IP will be # attached to the bootstrap machine. This is needed for collecting logs # in case of install failure. os_bootstrap_fip: '203.0.113.20'",
"ansible-playbook -i inventory.yaml security-groups.yaml",
"ansible-playbook -i inventory.yaml network.yaml",
"openstack subnet set --dns-nameserver <server_1> --dns-nameserver <server_2> \"USDINFRA_ID-nodes\"",
"ansible-playbook -i inventory.yaml bootstrap.yaml",
"openstack console log show \"USDINFRA_ID-bootstrap\"",
"ansible-playbook -i inventory.yaml control-plane.yaml",
"openshift-install wait-for bootstrap-complete",
"INFO API v1.27.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"ansible-playbook -i inventory.yaml down-bootstrap.yaml",
"ansible-playbook -i inventory.yaml compute-nodes.yaml",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.27.3 master-1 Ready master 63m v1.27.3 master-2 Ready master 64m v1.27.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.27.3 master-1 Ready master 73m v1.27.3 master-2 Ready master 74m v1.27.3 worker-0 Ready worker 11m v1.27.3 worker-1 Ready worker 11m v1.27.3",
"openshift-install --log-level debug wait-for install-complete"
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_openstack/installing-openstack-user-kuryr |
Chapter 7. Monitoring your cluster using JMX | Chapter 7. Monitoring your cluster using JMX ZooKeeper, the Kafka broker, Kafka Connect, and the Kafka clients all expose management information using Java Management Extensions (JMX). Most management information is in the form of metrics that are useful for monitoring the condition and performance of your Kafka cluster. Like other Java applications, Kafka provides this management information through managed beans or MBeans. JMX works at the level of the JVM (Java Virtual Machine). To obtain management information, external tools can connect to the JVM that is running ZooKeeper, the Kafka broker, and so on. By default, only tools on the same machine and running as the same user as the JVM are able to connect. Note Management information for ZooKeeper is not documented here. You can view ZooKeeper metrics in JConsole. For more information, see Monitoring using JConsole . 7.1. JMX configuration options You configure JMX using JVM system properties. The scripts provided with AMQ Streams ( bin/kafka-server-start.sh and bin/connect-distributed.sh , and so on) use the KAFKA_JMX_OPTS environment variable to set these system properties. The system properties for configuring JMX are the same, even though Kafka producer, consumer, and streams applications typically start the JVM in different ways. 7.2. Disabling the JMX agent You can prevent local JMX tools from connecting to the JVM (for example, for compliance reasons) by disabling the JMX agent for an AMQ Streams component. The following procedure explains how to disable the JMX agent for a Kafka broker. Procedure Use the KAFKA_JMX_OPTS environment variable to set com.sun.management.jmxremote to false . export KAFKA_JMX_OPTS=-Dcom.sun.management.jmxremote=false bin/kafka-server-start.sh Start the JVM. 7.3. Connecting to the JVM from a different machine You can connect to the JVM from a different machine by configuring the port that the JMX agent listens on. This is insecure because it allows JMX tools to connect from anywhere, with no authentication. Procedure Use the KAFKA_JMX_OPTS environment variable to set -Dcom.sun.management.jmxremote.port= <port> . For <port> , enter the number of the port on which you want the Kafka broker to listen for JMX connections. export KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.port= <port> -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false" bin/kafka-server-start.sh Start the JVM. Important It is recommended that you configure authentication and SSL to ensure that the remote JMX connection is secure. For more information about the system properties needed to do this, see the JMX documentation . 7.4. Monitoring using JConsole The JConsole tool is distributed with the Java Development Kit (JDK). You can use JConsole to connect to a local or remote JVM and discover and display management information from Java applications. If you use JConsole to connect to a local JVM, the following table lists the names of the JVM processes that correspond to the different components of AMQ Streams. Table 7.1. JVM processes for AMQ Streams components AMQ Streams component JVM process ZooKeeper org.apache.zookeeper.server.quorum.QuorumPeerMain Kafka broker kafka.Kafka Kafka Connect standalone org.apache.kafka.connect.cli.ConnectStandalone Kafka Connect distributed org.apache.kafka.connect.cli.ConnectDistributed A Kafka producer, consumer, or Streams application The name of the class containing the main method for the application.
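To attach JConsole to a local JVM, you need the process ID of the component you want to inspect, identified by the main class names in the table above. The following sketch shells out to the standard JDK jps tool to find a broker process and then starts JConsole against it; it assumes that the JDK tools are on your PATH and that you run it as the same user as the broker JVM, as described at the beginning of this chapter.

import subprocess

# List local JVMs as "<pid> <fully qualified main class>" lines; jps ships with the JDK.
jvms = subprocess.run(["jps", "-l"], capture_output=True, text=True, check=True).stdout

# The Kafka broker runs the kafka.Kafka main class (see Table 7.1).
broker_pids = [line.split()[0] for line in jvms.splitlines() if "kafka.Kafka" in line]
if not broker_pids:
    raise SystemExit("No local Kafka broker JVM was found")

# Attach JConsole to the first broker process that was found.
subprocess.run(["jconsole", broker_pids[0]], check=True)

If you prefer, the jcmd -l command produces similar PID and main class output.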
When using JConsole to connect to a remote JVM, use the appropriate host name and JMX port. Many other tools and monitoring products can be used to fetch the metrics using JMX and provide monitoring and alerting based on those metrics. Refer to the product documentation for those tools. 7.5. Important Kafka broker metrics Kafka provides many MBeans for monitoring the performance of the brokers in your Kafka cluster. These apply to an individual broker rather than the entire cluster. The following tables present a selection of these broker-level MBeans organized into server, network, logging, and controller metrics. 7.5.1. Kafka server metrics The following table shows a selection of metrics that report information about the Kafka server. Table 7.2. Metrics for the Kafka server Metric MBean Description Expected value Messages in per second kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec The rate at which individual messages are consumed by the broker. Approximately the same as the other brokers in the cluster. Bytes in per second kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec The rate at which data sent from producers is consumed by the broker. Approximately the same as the other brokers in the cluster. Replication bytes in per second kafka.server:type=BrokerTopicMetrics,name=ReplicationBytesInPerSec The rate at which data sent from other brokers is consumed by the follower broker. N/A Bytes out per second kafka.server:type=BrokerTopicMetrics,name=BytesOutPerSec The rate at which data is fetched and read from the broker by consumers. N/A Replication bytes out per second kafka.server:type=BrokerTopicMetrics,name=ReplicationBytesOutPerSec The rate at which data is sent from the broker to other brokers. This metric is useful to monitor if the broker is a leader for a group of partitions. N/A Under-replicated partitions kafka.server:type=ReplicaManager,name=UnderReplicatedPartitions The number of partitions that have not been fully replicated in the follower replicas. Zero Under minimum ISR partition count kafka.server:type=ReplicaManager,name=UnderMinIsrPartitionCount The number of partitions under the minimum In-Sync Replica (ISR) count. The ISR count indicates the set of replicas that are up-to-date with the leader. Zero Partition count kafka.server:type=ReplicaManager,name=PartitionCount The number of partitions in the broker. Approximately even when compared with the other brokers. Leader count kafka.server:type=ReplicaManager,name=LeaderCount The number of replicas for which this broker is the leader. Approximately the same as the other brokers in the cluster. ISR shrinks per second kafka.server:type=ReplicaManager,name=IsrShrinksPerSec The rate at which the number of ISRs in the broker decreases Zero ISR expands per second kafka.server:type=ReplicaManager,name=IsrExpandsPerSec The rate at which the number of ISRs in the broker increases. Zero Maximum lag kafka.server:type=ReplicaFetcherManager,name=MaxLag,clientId=Replica The maximum lag between the time that messages are received by the leader replica and by the follower replicas. Proportional to the maximum batch size of a produce request. Requests in producer purgatory kafka.server:type=DelayedOperationPurgatory,name=PurgatorySize,delayedOperation=Produce The number of send requests in the producer purgatory. N/A Requests in fetch purgatory kafka.server:type=DelayedOperationPurgatory,name=PurgatorySize,delayedOperation=Fetch The number of fetch requests in the fetch purgatory. 
N/A Request handler average idle percent kafka.server:type=KafkaRequestHandlerPool,name=RequestHandlerAvgIdlePercent Indicates the percentage of time that the request handler (IO) threads are not in use. A lower value indicates that the workload of the broker is high. Request (Requests exempt from throttling) kafka.server:type=Request The number of requests that are exempt from throttling. N/A ZooKeeper request latency in milliseconds kafka.server:type=ZooKeeperClientMetrics,name=ZooKeeperRequestLatencyMs The latency for ZooKeeper requests from the broker, in milliseconds. N/A ZooKeeper session state kafka.server:type=SessionExpireListener,name=SessionState The status of the broker's connection to ZooKeeper. CONNECTED 7.5.2. Kafka network metrics The following table shows a selection of metrics that report information about requests. Metric MBean Description Expected value Requests per second kafka.network:type=RequestMetrics,name=RequestsPerSec,request={Produce|FetchConsumer|FetchFollower} The total number of requests made for the request type per second. The Produce , FetchConsumer , and FetchFollower request types each have their own MBeans. N/A Request bytes (request size in bytes) kafka.network:type=RequestMetrics,name=RequestBytes,request=([-.\w]+) The size of requests, in bytes, made for the request type identified by the request property of the MBean name. Separate MBeans for all available request types are listed under the RequestBytes node. N/A Temporary memory size in bytes kafka.network:type=RequestMetrics,name=TemporaryMemoryBytes,request={Produce|Fetch} The amount of temporary memory used for converting message formats and decompressing messages. N/A Message conversions time kafka.network:type=RequestMetrics,name=MessageConversionsTimeMs,request={Produce|Fetch} Time, in milliseconds, spent on converting message formats. N/A Total request time in milliseconds kafka.network:type=RequestMetrics,name=TotalTimeMs,request={Produce|FetchConsumer|FetchFollower} Total time, in milliseconds, spent processing requests. N/A Request queue time in milliseconds kafka.network:type=RequestMetrics,name=RequestQueueTimeMs,request={Produce|FetchConsumer|FetchFollower} The time, in milliseconds, that a request currently spends in the queue for the request type given in the request property. N/A Local time (leader local processing time) in milliseconds kafka.network:type=RequestMetrics,name=LocalTimeMs,request={Produce|FetchConsumer|FetchFollower} The time taken, in milliseconds, for the leader to process the request. N/A Remote time (leader remote processing time) in milliseconds kafka.network:type=RequestMetrics,name=RemoteTimeMs,request={Produce|FetchConsumer|FetchFollower} The length of time, in milliseconds, that the request waits for the follower. Separate MBeans for all available request types are listed under the RemoteTimeMs node. N/A Response queue time in milliseconds kafka.network:type=RequestMetrics,name=ResponseQueueTimeMs,request={Produce|FetchConsumer|FetchFollower} The length of time, in milliseconds, that the request waits in the response queue. N/A Response send time in milliseconds kafka.network:type=RequestMetrics,name=ResponseSendTimeMs,request={Produce|FetchConsumer|FetchFollower} The time taken, in milliseconds, to send the response. N/A Network processor average idle percent kafka.network:type=SocketServer,name=NetworkProcessorAvgIdlePercent The average percentage of time that the network processors are idle. Between zero and one. 7.5.3. 
Kafka log metrics The following table shows a selection of metrics that report information about logging. Metric MBean Description Expected Value Log flush rate and time in milliseconds kafka.log:type=LogFlushStats,name=LogFlushRateAndTimeMs The rate at which log data is written to disk, in milliseconds. N/A Offline log directory count kafka.log:type=LogManager,name=OfflineLogDirectoryCount The number of offline log directories (for example, after a hardware failure). Zero 7.5.4. Kafka controller metrics The following table shows a selection of metrics that report information about the controller of the cluster. Metric MBean Description Expected Value Active controller count kafka.controller:type=KafkaController,name=ActiveControllerCount The number of brokers designated as controllers. One indicates that the broker is the controller for the cluster. Leader election rate and time in milliseconds kafka.controller:type=ControllerStats,name=LeaderElectionRateAndTimeMs The rate at which new leader replicas are elected. Zero 7.5.5. Yammer metrics Metrics that express a rate or unit of time are provided as Yammer metrics. The class name of an MBean that uses Yammer metrics is prefixed with com.yammer.metrics . Yammer rate metrics have the following attributes for monitoring requests: Count EventType (Bytes) FifteenMinuteRate RateUnit (Seconds) MeanRate OneMinuteRate FiveMinuteRate Yammer time metrics have the following attributes for monitoring requests: Max Min Mean StdDev 75/95/98/99/99.9 th Percentile 7.6. Producer MBeans The following MBeans will exist in Kafka producer applications, including Kafka Streams applications and Kafka Connect with source connectors. 7.6.1. MBeans matching kafka.producer:type=producer-metrics,client-id=* These are metrics at the producer level. Attribute Description batch-size-avg The average number of bytes sent per partition per-request. batch-size-max The max number of bytes sent per partition per-request. batch-split-rate The average number of batch splits per second. batch-split-total The total number of batch splits. buffer-available-bytes The total amount of buffer memory that is not being used (either unallocated or in the free list). buffer-total-bytes The maximum amount of buffer memory the client can use (whether or not it is currently used). bufferpool-wait-time The fraction of time an appender waits for space allocation. compression-rate-avg The average compression rate of record batches, defined as the average ratio of the compressed batch size over the uncompressed size. connection-close-rate Connections closed per second in the window. connection-count The current number of active connections. connection-creation-rate New connections established per second in the window. failed-authentication-rate Connections that failed authentication. incoming-byte-rate Bytes/second read off all sockets. io-ratio The fraction of time the I/O thread spent doing I/O. io-time-ns-avg The average length of time for I/O per select call in nanoseconds. io-wait-ratio The fraction of time the I/O thread spent waiting. io-wait-time-ns-avg The average length of time the I/O thread spent waiting for a socket ready for reads or writes in nanoseconds. metadata-age The age in seconds of the current producer metadata being used. network-io-rate The average number of network operations (reads or writes) on all connections per second. outgoing-byte-rate The average number of outgoing bytes sent per second to all servers. 
produce-throttle-time-avg The average time in ms a request was throttled by a broker. produce-throttle-time-max The maximum time in ms a request was throttled by a broker. record-error-rate The average per-second number of record sends that resulted in errors. record-error-total The total number of record sends that resulted in errors. record-queue-time-avg The average time in ms record batches spent in the send buffer. record-queue-time-max The maximum time in ms record batches spent in the send buffer. record-retry-rate The average per-second number of retried record sends. record-retry-total The total number of retried record sends. record-send-rate The average number of records sent per second. record-send-total The total number of records sent. record-size-avg The average record size. record-size-max The maximum record size. records-per-request-avg The average number of records per request. request-latency-avg The average request latency in ms. request-latency-max The maximum request latency in ms. request-rate The average number of requests sent per second. request-size-avg The average size of all requests in the window. request-size-max The maximum size of any request sent in the window. requests-in-flight The current number of in-flight requests awaiting a response. response-rate Responses received sent per second. select-rate Number of times the I/O layer checked for new I/O to perform per second. successful-authentication-rate Connections that were successfully authenticated using SASL or SSL. waiting-threads The number of user threads blocked waiting for buffer memory to enqueue their records. 7.6.2. MBeans matching kafka.producer:type=producer-metrics,client-id=*,node-id=* These are metrics at the producer level about connection to each broker. Attribute Description incoming-byte-rate The average number of responses received per second for a node. outgoing-byte-rate The average number of outgoing bytes sent per second for a node. request-latency-avg The average request latency in ms for a node. request-latency-max The maximum request latency in ms for a node. request-rate The average number of requests sent per second for a node. request-size-avg The average size of all requests in the window for a node. request-size-max The maximum size of any request sent in the window for a node. response-rate Responses received sent per second for a node. 7.6.3. MBeans matching kafka.producer:type=producer-topic-metrics,client-id=*,topic=* These are metrics at the topic level about topics the producer is sending messages to. Attribute Description byte-rate The average number of bytes sent per second for a topic. byte-total The total number of bytes sent for a topic. compression-rate The average compression rate of record batches for a topic, defined as the average ratio of the compressed batch size over the uncompressed size. record-error-rate The average per-second number of record sends that resulted in errors for a topic. record-error-total The total number of record sends that resulted in errors for a topic. record-retry-rate The average per-second number of retried record sends for a topic. record-retry-total The total number of retried record sends for a topic. record-send-rate The average number of records sent per second for a topic. record-send-total The total number of records sent for a topic. 7.7. Consumer MBeans The following MBeans will exist in Kafka consumer applications, including Kafka Streams applications and Kafka Connect with sink connectors. 7.7.1. 
MBeans matching kafka.consumer:type=consumer-metrics,client-id=* These are metrics at the consumer level. Attribute Description connection-close-rate Connections closed per second in the window. connection-count The current number of active connections. connection-creation-rate New connections established per second in the window. failed-authentication-rate Connections that failed authentication. incoming-byte-rate Bytes/second read off all sockets. io-ratio The fraction of time the I/O thread spent doing I/O. io-time-ns-avg The average length of time for I/O per select call in nanoseconds. io-wait-ratio The fraction of time the I/O thread spent waiting. io-wait-time-ns-avg The average length of time the I/O thread spent waiting for a socket ready for reads or writes in nanoseconds. network-io-rate The average number of network operations (reads or writes) on all connections per second. outgoing-byte-rate The average number of outgoing bytes sent per second to all servers. request-rate The average number of requests sent per second. request-size-avg The average size of all requests in the window. request-size-max The maximum size of any request sent in the window. response-rate Responses received sent per second. select-rate Number of times the I/O layer checked for new I/O to perform per second. successful-authentication-rate Connections that were successfully authenticated using SASL or SSL. 7.7.2. MBeans matching kafka.consumer:type=consumer-metrics,client-id=*,node-id=* These are metrics at the consumer level about connection to each broker. Attribute Description incoming-byte-rate The average number of responses received per second for a node. outgoing-byte-rate The average number of outgoing bytes sent per second for a node. request-latency-avg The average request latency in ms for a node. request-latency-max The maximum request latency in ms for a node. request-rate The average number of requests sent per second for a node. request-size-avg The average size of all requests in the window for a node. request-size-max The maximum size of any request sent in the window for a node. response-rate Responses received sent per second for a node. 7.7.3. MBeans matching kafka.consumer:type=consumer-coordinator-metrics,client-id=* These are metrics at the consumer level about the consumer group. Attribute Description assigned-partitions The number of partitions currently assigned to this consumer. commit-latency-avg The average time taken for a commit request. commit-latency-max The max time taken for a commit request. commit-rate The number of commit calls per second. heartbeat-rate The average number of heartbeats per second. heartbeat-response-time-max The max time taken to receive a response to a heartbeat request. join-rate The number of group joins per second. join-time-avg The average time taken for a group rejoin. join-time-max The max time taken for a group rejoin. last-heartbeat-seconds-ago The number of seconds since the last controller heartbeat. sync-rate The number of group syncs per second. sync-time-avg The average time taken for a group sync. sync-time-max The max time taken for a group sync. 7.7.4. MBeans matching kafka.consumer:type=consumer-fetch-manager-metrics,client-id=* These are metrics at the consumer level about the consumer's fetcher. Attribute Description bytes-consumed-rate The average number of bytes consumed per second. bytes-consumed-total The total number of bytes consumed. fetch-latency-avg The average time taken for a fetch request. 
fetch-latency-max The max time taken for any fetch request. fetch-rate The number of fetch requests per second. fetch-size-avg The average number of bytes fetched per request. fetch-size-max The maximum number of bytes fetched per request. fetch-throttle-time-avg The average throttle time in ms. fetch-throttle-time-max The maximum throttle time in ms. fetch-total The total number of fetch requests. records-consumed-rate The average number of records consumed per second. records-consumed-total The total number of records consumed. records-lag-max The maximum lag in terms of number of records for any partition in this window. records-lead-min The minimum lead in terms of number of records for any partition in this window. records-per-request-avg The average number of records in each request. 7.7.5. MBeans matching kafka.consumer:type=consumer-fetch-manager-metrics,client-id=*,topic=* These are metrics at the topic level about the consumer's fetcher. Attribute Description bytes-consumed-rate The average number of bytes consumed per second for a topic. bytes-consumed-total The total number of bytes consumed for a topic. fetch-size-avg The average number of bytes fetched per request for a topic. fetch-size-max The maximum number of bytes fetched per request for a topic. records-consumed-rate The average number of records consumed per second for a topic. records-consumed-total The total number of records consumed for a topic. records-per-request-avg The average number of records in each request for a topic. 7.7.6. MBeans matching kafka.consumer:type=consumer-fetch-manager-metrics,client-id=*,topic=*,partition=* These are metrics at the partition level about the consumer's fetcher. Attribute Description preferred-read-replica The current read replica for the partition, or -1 if reading from leader. records-lag The latest lag of the partition. records-lag-avg The average lag of the partition. records-lag-max The max lag of the partition. records-lead The latest lead of the partition. records-lead-avg The average lead of the partition. records-lead-min The min lead of the partition. 7.8. Kafka Connect MBeans Note Kafka Connect will contain the producer MBeans for source connectors and consumer MBeans for sink connectors in addition to those documented here. 7.8.1. MBeans matching kafka.connect:type=connect-metrics,client-id=* These are metrics at the connect level. Attribute Description connection-close-rate Connections closed per second in the window. connection-count The current number of active connections. connection-creation-rate New connections established per second in the window. failed-authentication-rate Connections that failed authentication. incoming-byte-rate Bytes/second read off all sockets. io-ratio The fraction of time the I/O thread spent doing I/O. io-time-ns-avg The average length of time for I/O per select call in nanoseconds. io-wait-ratio The fraction of time the I/O thread spent waiting. io-wait-time-ns-avg The average length of time the I/O thread spent waiting for a socket ready for reads or writes in nanoseconds. network-io-rate The average number of network operations (reads or writes) on all connections per second. outgoing-byte-rate The average number of outgoing bytes sent per second to all servers. request-rate The average number of requests sent per second. request-size-avg The average size of all requests in the window. request-size-max The maximum size of any request sent in the window. response-rate Responses received sent per second. 
select-rate Number of times the I/O layer checked for new I/O to perform per second. successful-authentication-rate Connections that were successfully authenticated using SASL or SSL. 7.8.2. MBeans matching kafka.connect:type=connect-metrics,client-id=*,node-id=* These are metrics at the connect level about connection to each broker. Attribute Description incoming-byte-rate The average number of responses received per second for a node. outgoing-byte-rate The average number of outgoing bytes sent per second for a node. request-latency-avg The average request latency in ms for a node. request-latency-max The maximum request latency in ms for a node. request-rate The average number of requests sent per second for a node. request-size-avg The average size of all requests in the window for a node. request-size-max The maximum size of any request sent in the window for a node. response-rate Responses received sent per second for a node. 7.8.3. MBeans matching kafka.connect:type=connect-worker-metrics These are metrics at the connect level. Attribute Description connector-count The number of connectors run in this worker. connector-startup-attempts-total The total number of connector startups that this worker has attempted. connector-startup-failure-percentage The average percentage of this worker's connectors starts that failed. connector-startup-failure-total The total number of connector starts that failed. connector-startup-success-percentage The average percentage of this worker's connectors starts that succeeded. connector-startup-success-total The total number of connector starts that succeeded. task-count The number of tasks run in this worker. task-startup-attempts-total The total number of task startups that this worker has attempted. task-startup-failure-percentage The average percentage of this worker's tasks starts that failed. task-startup-failure-total The total number of task starts that failed. task-startup-success-percentage The average percentage of this worker's tasks starts that succeeded. task-startup-success-total The total number of task starts that succeeded. 7.8.4. MBeans matching kafka.connect:type=connect-worker-rebalance-metrics Attribute Description completed-rebalances-total The total number of rebalances completed by this worker. connect-protocol The Connect protocol used by this cluster. epoch The epoch or generation number of this worker. leader-name The name of the group leader. rebalance-avg-time-ms The average time in milliseconds spent by this worker to rebalance. rebalance-max-time-ms The maximum time in milliseconds spent by this worker to rebalance. rebalancing Whether this worker is currently rebalancing. time-since-last-rebalance-ms The time in milliseconds since this worker completed the most recent rebalance. 7.8.5. MBeans matching kafka.connect:type=connector-metrics,connector=* Attribute Description connector-class The name of the connector class. connector-type The type of the connector. One of 'source' or 'sink'. connector-version The version of the connector class, as reported by the connector. status The status of the connector. One of 'unassigned', 'running', 'paused', 'failed', or 'destroyed'. 7.8.6. MBeans matching kafka.connect:type=connector-task-metrics,connector=*,task=* Attribute Description batch-size-avg The average size of the batches processed by the connector. batch-size-max The maximum size of the batches processed by the connector. offset-commit-avg-time-ms The average time in milliseconds taken by this task to commit offsets. 
offset-commit-failure-percentage The average percentage of this task's offset commit attempts that failed. offset-commit-max-time-ms The maximum time in milliseconds taken by this task to commit offsets. offset-commit-success-percentage The average percentage of this task's offset commit attempts that succeeded. pause-ratio The fraction of time this task has spent in the pause state. running-ratio The fraction of time this task has spent in the running state. status The status of the connector task. One of 'unassigned', 'running', 'paused', 'failed', or 'destroyed'. 7.8.7. MBeans matching kafka.connect:type=sink-task-metrics,connector=*,task=* Attribute Description offset-commit-completion-rate The average per-second number of offset commit completions that were completed successfully. offset-commit-completion-total The total number of offset commit completions that were completed successfully. offset-commit-seq-no The current sequence number for offset commits. offset-commit-skip-rate The average per-second number of offset commit completions that were received too late and skipped/ignored. offset-commit-skip-total The total number of offset commit completions that were received too late and skipped/ignored. partition-count The number of topic partitions assigned to this task belonging to the named sink connector in this worker. put-batch-avg-time-ms The average time taken by this task to put a batch of sinks records. put-batch-max-time-ms The maximum time taken by this task to put a batch of sinks records. sink-record-active-count The number of records that have been read from Kafka but not yet completely committed/flushed/acknowledged by the sink task. sink-record-active-count-avg The average number of records that have been read from Kafka but not yet completely committed/flushed/acknowledged by the sink task. sink-record-active-count-max The maximum number of records that have been read from Kafka but not yet completely committed/flushed/acknowledged by the sink task. sink-record-lag-max The maximum lag in terms of number of records that the sink task is behind the consumer's position for any topic partitions. sink-record-read-rate The average per-second number of records read from Kafka for this task belonging to the named sink connector in this worker. This is before transformations are applied. sink-record-read-total The total number of records read from Kafka by this task belonging to the named sink connector in this worker, since the task was last restarted. sink-record-send-rate The average per-second number of records output from the transformations and sent/put to this task belonging to the named sink connector in this worker. This is after transformations are applied and excludes any records filtered out by the transformations. sink-record-send-total The total number of records output from the transformations and sent/put to this task belonging to the named sink connector in this worker, since the task was last restarted. 7.8.8. MBeans matching kafka.connect:type=source-task-metrics,connector=*,task=* Attribute Description poll-batch-avg-time-ms The average time in milliseconds taken by this task to poll for a batch of source records. poll-batch-max-time-ms The maximum time in milliseconds taken by this task to poll for a batch of source records. source-record-active-count The number of records that have been produced by this task but not yet completely written to Kafka. 
source-record-active-count-avg The average number of records that have been produced by this task but not yet completely written to Kafka. source-record-active-count-max The maximum number of records that have been produced by this task but not yet completely written to Kafka. source-record-poll-rate The average per-second number of records produced/polled (before transformation) by this task belonging to the named source connector in this worker. source-record-poll-total The total number of records produced/polled (before transformation) by this task belonging to the named source connector in this worker. source-record-write-rate The average per-second number of records output from the transformations and written to Kafka for this task belonging to the named source connector in this worker. This is after transformations are applied and excludes any records filtered out by the transformations. source-record-write-total The number of records output from the transformations and written to Kafka for this task belonging to the named source connector in this worker, since the task was last restarted. 7.8.9. MBeans matching kafka.connect:type=task-error-metrics,connector=*,task=* Attribute Description deadletterqueue-produce-failures The number of failed writes to the dead letter queue. deadletterqueue-produce-requests The number of attempted writes to the dead letter queue. last-error-timestamp The epoch timestamp when this task last encountered an error. total-errors-logged The number of errors that were logged. total-record-errors The number of record processing errors in this task. total-record-failures The number of record processing failures in this task. total-records-skipped The number of records skipped due to errors. total-retries The number of operations retried. 7.9. Kafka Streams MBeans Note A Streams application will contain the producer and consumer MBeans in addition to those documented here. 7.9.1. MBeans matching kafka.streams:type=stream-metrics,client-id=* These metrics are collected when the metrics.recording.level configuration parameter is info or debug . Attribute Description commit-latency-avg The average execution time in ms for committing, across all running tasks of this thread. commit-latency-max The maximum execution time in ms for committing across all running tasks of this thread. commit-rate The average number of commits per second. commit-total The total number of commit calls across all tasks. poll-latency-avg The average execution time in ms for polling, across all running tasks of this thread. poll-latency-max The maximum execution time in ms for polling across all running tasks of this thread. poll-rate The average number of polls per second. poll-total The total number of poll calls across all tasks. process-latency-avg The average execution time in ms for processing, across all running tasks of this thread. process-latency-max The maximum execution time in ms for processing across all running tasks of this thread. process-rate The average number of process calls per second. process-total The total number of process calls across all tasks. punctuate-latency-avg The average execution time in ms for punctuating, across all running tasks of this thread. punctuate-latency-max The maximum execution time in ms for punctuating across all running tasks of this thread. punctuate-rate The average number of punctuates per second. punctuate-total The total number of punctuate calls across all tasks. skipped-records-rate The average number of skipped records per second. 
skipped-records-total The total number of skipped records. task-closed-rate The average number of tasks closed per second. task-closed-total The total number of tasks closed. task-created-rate The average number of newly created tasks per second. task-created-total The total number of tasks created. 7.9.2. MBeans matching kafka.streams:type=stream-task-metrics,client-id=*,task-id=* Task metrics. These metrics are collected when the metrics.recording.level configuration parameter is debug . Attribute Description commit-latency-avg The average commit time in ns for this task. commit-latency-max The maximum commit time in ns for this task. commit-rate The average number of commit calls per second. commit-total The total number of commit calls. 7.9.3. MBeans matching kafka.streams:type=stream-processor-node-metrics,client-id=*,task-id=*,processor-node-id=* Processor node metrics. These metrics are collected when the metrics.recording.level configuration parameter is debug . Attribute Description create-latency-avg The average create execution time in ns. create-latency-max The maximum create execution time in ns. create-rate The average number of create operations per second. create-total The total number of create operations called. destroy-latency-avg The average destroy execution time in ns. destroy-latency-max The maximum destroy execution time in ns. destroy-rate The average number of destroy operations per second. destroy-total The total number of destroy operations called. forward-rate The average rate of records being forwarded downstream, from source nodes only, per second. forward-total The total number of records being forwarded downstream, from source nodes only. process-latency-avg The average process execution time in ns. process-latency-max The maximum process execution time in ns. process-rate The average number of process operations per second. process-total The total number of process operations called. punctuate-latency-avg The average punctuate execution time in ns. punctuate-latency-max The maximum punctuate execution time in ns. punctuate-rate The average number of punctuate operations per second. punctuate-total The total number of punctuate operations called. 7.9.4. MBeans matching kafka.streams:type=stream-[store-scope]-metrics,client-id=*,task-id=*,[store-scope]-id=* State store metrics. These metrics are collected when the metrics.recording.level configuration parameter is debug . Attribute Description all-latency-avg The average all operation execution time in ns. all-latency-max The maximum all operation execution time in ns. all-rate The average all operation rate for this store. all-total The total number of all operation calls for this store. delete-latency-avg The average delete execution time in ns. delete-latency-max The maximum delete execution time in ns. delete-rate The average delete rate for this store. delete-total The total number of delete calls for this store. flush-latency-avg The average flush execution time in ns. flush-latency-max The maximum flush execution time in ns. flush-rate The average flush rate for this store. flush-total The total number of flush calls for this store. get-latency-avg The average get execution time in ns. get-latency-max The maximum get execution time in ns. get-rate The average get rate for this store. get-total The total number of get calls for this store. put-all-latency-avg The average put-all execution time in ns. put-all-latency-max The maximum put-all execution time in ns. 
put-all-rate The average put-all rate for this store. put-all-total The total number of put-all calls for this store. put-if-absent-latency-avg The average put-if-absent execution time in ns. put-if-absent-latency-max The maximum put-if-absent execution time in ns. put-if-absent-rate The average put-if-absent rate for this store. put-if-absent-total The total number of put-if-absent calls for this store. put-latency-avg The average put execution time in ns. put-latency-max The maximum put execution time in ns. put-rate The average put rate for this store. put-total The total number of put calls for this store. range-latency-avg The average range execution time in ns. range-latency-max The maximum range execution time in ns. range-rate The average range rate for this store. range-total The total number of range calls for this store. restore-latency-avg The average restore execution time in ns. restore-latency-max The maximum restore execution time in ns. restore-rate The average restore rate for this store. restore-total The total number of restore calls for this store. 7.9.5. MBeans matching kafka.streams:type=stream-record-cache-metrics,client-id=*,task-id=*,record-cache-id=* Record cache metrics. These metrics are collected when the metrics.recording.level configuration parameter is debug . Attribute Description hitRatio-avg The average cache hit ratio defined as the ratio of cache read hits over the total cache read requests. hitRatio-max The maximum cache hit ratio. hitRatio-min The minimum cache hit ratio. | [
"export KAFKA_JMX_OPTS=-Dcom.sun.management.jmxremote=false bin/kafka-server-start.sh",
"export KAFKA_JMX_OPTS=\"-Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.port= <port> -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false\" bin/kafka-server-start.sh"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_amq_streams_on_rhel/monitoring-str |
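The JMX options listed above only expose the broker's metrics endpoint; reading one of the MBeans in this chapter still requires a JMX client. The following sketch uses the JmxTool class bundled with Kafka to poll the request handler idle-percent MBean described in the broker metrics table. The host, the port value 9999, and the reporting interval are assumptions for illustration, and the tool's class name and option names can vary between Kafka versions, so treat this as a starting point rather than the documented procedure.
# Hedged example: poll a broker MBean with Kafka's bundled JmxTool.
# Assumes JMX was enabled with the KAFKA_JMX_OPTS export shown above and that <port> was set to 9999.
bin/kafka-run-class.sh kafka.tools.JmxTool \
  --jmx-url service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi \
  --object-name kafka.server:type=KafkaRequestHandlerPool,name=RequestHandlerAvgIdlePercent \
  --reporting-interval 5000
A value that stays close to one indicates mostly idle request handler threads, while values approaching zero suggest a heavily loaded broker, matching the guidance given for this MBean in the broker metrics table.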
Appendix F. Application development resources | Appendix F. Application development resources For additional information about application development with OpenShift, see: OpenShift Interactive Learning Portal | null | https://docs.redhat.com/en/documentation/red_hat_build_of_node.js/22/html/node.js_runtime_guide/application-development-resources |
8.110. man-pages-fr | 8.110.1. RHBA-2013:1093 - man-pages-fr bug fix update Updated man-pages-fr packages that fix one bug are now available. The man-pages-fr packages contain manual pages in French. Bug Fix BZ# 903048 Due to a problem in the build system of the man-pages-fr package, some manual pages were not included in the package. As a result, certain manual pages, for example the manual page for "echo", were displayed in English even when the system was running in a French locale; the command "man echo" therefore displayed an English manual page. The build problem in the man-pages-fr package has been fixed, and the missing manual pages are now included. Manual pages are now displayed in French when the system is running in a French locale; for example, "man echo" now shows a French manual page. Users of man-pages-fr are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/man-pages-fr |
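A quick way to confirm the fix without changing the system-wide locale is to run man under a French locale for a single command. This is a hedged sketch that assumes the fr_FR.UTF-8 locale and the updated man-pages-fr package are installed; man falls back to the English page when no translated page is available.
LANG=fr_FR.UTF-8 man echo    # should display the French manual page after the update
LANG=C man echo              # still displays the English manual page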
Chapter 45. IBM WebSphere Application Server | Chapter 45. IBM WebSphere Application Server IBM WebSphere Application Server is a flexible and secure web application server that hosts Java-based web applications and provides Java EE-certified run time environments. IBM WebSphere 9.0 supports Java SE 8 and is fully compliant with Java EE 7. | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/installing_and_configuring_red_hat_process_automation_manager/was-con |
Chapter 3. Deploy standalone Multicloud Object Gateway | Chapter 3. Deploy standalone Multicloud Object Gateway Deploying only the Multicloud Object Gateway component with OpenShift Data Foundation provides the flexibility in deployment and helps to reduce the resource consumption. Use this section to deploy only the standalone Multicloud Object Gateway component, which involves the following steps: Installing Red Hat OpenShift Data Foundation Operator Creating standalone Multicloud Object Gateway 3.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and Operator installation permissions. You must have at least three worker nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command in the command line interface to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see How to use dedicated worker nodes for Red Hat OpenShift Data Foundation chapter in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.9 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Note We recommend using all default settings. Changing it may result in unexpected behavior. Alter only if you are aware of its result. Verification steps Verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console, navigate to Operators and verify if OpenShift Data Foundation is available. Important In case the console plugin option was not automatically enabled after you installed the OpenShift Data Foundation Operator, you need to enable it. For more information on how to enable the console plugin, see Enabling the Red Hat OpenShift Data Foundation console plugin . 3.2. 
Creating standalone Multicloud Object Gateway Use this section to create only the Multicloud Object Gateway component with OpenShift Data Foundation. Prerequisites Ensure that OpenShift Data Foundation Operator is installed. (For deploying using local storage devices only) Ensure that Local Storage Operator is installed. Ensure that you have a storage class and is set as the default. Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, expand Advanced . Select Multicloud Object Gateway for Deployment type . Click . Optional: In the Security page, select Connect to an external key management service . Key Management Service Provider is set to Vault by default. Enter Vault Service Name , host Address of Vault server ('https:// <hostname or ip> '), Port number , and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in the Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate , and Client Private Key . Click Save . Click . In the Review and create page, review the configuration details: To modify any configuration settings, click Back . Click Create StorageSystem . Verification steps Verifying that the OpenShift Data Foundation cluster is healthy In the OpenShift Web Console, click Storage OpenShift Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. Verify the state of the pods Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list and verify that the following pods are in Running state. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any worker node) ocs-metrics-exporter-* (1 pod on any worker node) odf-operator-controller-manager-* (1 pod on any worker node) odf-console-* (1 pod on any worker node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any worker node) Multicloud Object Gateway noobaa-operator-* (1 pod on any worker node) noobaa-core-* (1 pod on any worker node) noobaa-db-pg-* (1 pod on any worker node) noobaa-endpoint-* (1 pod on any worker node) | [
"oc annotate namespace openshift-storage openshift.io/node-selector="
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/deploying_openshift_data_foundation_using_amazon_web_services/deploy-standalone-multicloud-object-gateway |
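The same pod verification can also be done from the command line instead of the web console. The commands below are a hedged sketch using standard oc queries; generated suffixes replace the '*' wildcards in the pod names listed above, and they assume the openshift-storage project already exists from the operator installation.
oc get csv -n openshift-storage     # ClusterServiceVersion status of the installed operators
oc get pods -n openshift-storage    # expect noobaa-core-*, noobaa-db-pg-*, noobaa-endpoint-*, and the operator pods in Running state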
Chapter 2. Deleting a collection on automation hub | Chapter 2. Deleting a collection on automation hub You can further manage your collections by deleting unwanted collections, provided that the collection is not dependent on other collections. Click the Dependencies tab on a collection to see a list of other collections that use the current collection. Prerequisites The collection being deleted does not have dependencies on other collections. You have Delete Collections permissions. Procedure Log in to Red Hat Ansible Automation Platform. Navigate to Automation Hub Collections . Click a collection to delete. Click the options menu, then select an option: Delete entire collection to delete all versions in this collection. Delete version [number] to delete the current version of this collection. You can change versions using the Version dropdown menu. Note If the selected collection has any dependencies on other collections, these actions will be unavailable to you until you delete those dependencies. Click the Dependencies tab to see a list of dependencies to delete before you proceed. When the confirmation window appears, verify that the collection or version number is correct, and then click the Delete checkbox . | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/uploading_content_to_red_hat_automation_hub/delete-collection |
Chapter 10. Installing a cluster on Azure into a government region | Chapter 10. Installing a cluster on Azure into a government region In OpenShift Container Platform version 4.13, you can install a cluster on Microsoft Azure into a government region. To configure the government region, you modify parameters in the install-config.yaml file before you install the cluster. 10.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure account to host the cluster and determined the tested and validated government region to deploy the cluster to. If you use a firewall, you configured it to allow the sites that your cluster requires access to. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials . If you use customer-managed encryption keys, you prepared your Azure environment for encryption . 10.2. Azure government regions OpenShift Container Platform supports deploying a cluster to Microsoft Azure Government (MAG) regions. MAG is specifically designed for US government agencies at the federal, state, and local level, as well as contractors, educational institutions, and other US customers that must run sensitive workloads on Azure. MAG is composed of government-only data center regions, all granted an Impact Level 5 Provisional Authorization . Installing to a MAG region requires manually configuring the Azure Government dedicated cloud instance and region in the install-config.yaml file. You must also update your service principal to reference the appropriate government environment. Note The Azure government region cannot be selected using the guided terminal prompts from the installation program. You must define the region manually in the install-config.yaml file. Remember to also set the dedicated cloud instance, like AzureUSGovernmentCloud , based on the region specified. 10.3. Private clusters You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet. By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet. Important If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private. To deploy a private cluster, you must: Use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network. Deploy from a machine that has access to: The API services for the cloud to which you provision. The hosts on the network that you provision. The internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN. 10.3.1. 
Private clusters in Azure To create a private cluster on Microsoft Azure, you must provide an existing private VNet and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for only internal traffic. Depending how your network connects to the private VNET, you might need to use a DNS forwarder to resolve the cluster's private DNS records. The cluster's machines use 168.63.129.16 internally for DNS resolution. For more information, see What is Azure Private DNS? and What is IP address 168.63.129.16? in the Azure documentation. The cluster still requires access to internet to access the Azure APIs. The following items are not required or created when you install a private cluster: A BaseDomainResourceGroup , since the cluster does not create public records Public IP addresses Public DNS records Public endpoints 10.3.1.1. Limitations Private clusters on Azure are subject to only the limitations that are associated with the use of an existing VNet. 10.3.2. User-defined outbound routing In OpenShift Container Platform, you can choose your own outbound routing for a cluster to connect to the internet. This allows you to skip the creation of public IP addresses and the public load balancer. You can configure user-defined routing by modifying parameters in the install-config.yaml file before installing your cluster. A pre-existing VNet is required to use outbound routing when installing a cluster; the installation program is not responsible for configuring this. When configuring a cluster to use user-defined routing, the installation program does not create the following resources: Outbound rules for access to the internet. Public IPs for the public load balancer. Kubernetes Service object to add the cluster machines to the public load balancer for outbound requests. You must ensure the following items are available before setting user-defined routing: Egress to the internet is possible to pull container images, unless using an OpenShift image registry mirror. The cluster can access Azure APIs. Various allowlist endpoints are configured. You can reference these endpoints in the Configuring your firewall section. There are several pre-existing networking setups that are supported for internet access using user-defined routing. Private cluster with network address translation You can use Azure VNET network address translation (NAT) to provide outbound internet access for the subnets in your cluster. You can reference Create a NAT gateway using Azure CLI in the Azure documentation for configuration instructions. When using a VNet setup with Azure NAT and user-defined routing configured, you can create a private cluster with no public endpoints. Private cluster with Azure Firewall You can use Azure Firewall to provide outbound routing for the VNet used to install the cluster. You can learn more about providing user-defined routing with Azure Firewall in the Azure documentation. When using a VNet setup with Azure Firewall and user-defined routing configured, you can create a private cluster with no public endpoints. Private cluster with a proxy configuration You can use a proxy with user-defined routing to allow egress to the internet. You must ensure that cluster Operators do not access Azure APIs using a proxy; Operators must have access to Azure APIs outside of the proxy. 
When using the default route table for subnets, with 0.0.0.0/0 populated automatically by Azure, all Azure API requests are routed over Azure's internal network even though the IP addresses are public. As long as the Network Security Group rules allow egress to Azure API endpoints, proxies with user-defined routing configured allow you to create private clusters with no public endpoints. Private cluster with no internet access You can install a private network that restricts all access to the internet, except the Azure API. This is accomplished by mirroring the release image registry locally. Your cluster must have access to the following: An OpenShift image registry mirror that allows for pulling container images Access to Azure APIs With these requirements available, you can use user-defined routing to create private clusters with no public endpoints. 10.4. About reusing a VNet for your OpenShift Container Platform cluster In OpenShift Container Platform 4.13, you can deploy a cluster into an existing Azure Virtual Network (VNet) in Microsoft Azure. If you do, you must also use existing subnets within the VNet and routing rules. By deploying OpenShift Container Platform into an existing Azure VNet, you might be able to avoid service limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. This is a good option to use if you cannot obtain the infrastructure creation permissions that are required to create the VNet. 10.4.1. Requirements for using your VNet When you deploy a cluster by using an existing VNet, you must perform additional network configuration before you install the cluster. In installer-provisioned infrastructure clusters, the installer usually creates the following components, but it does not create them when you install into an existing VNet: Subnets Route tables VNets Network Security Groups Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. If you use a custom VNet, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VNet options like DHCP, so you must do so before you install the cluster. The cluster must be able to access the resource group that contains the existing VNet and subnets. While all of the resources that the cluster creates are placed in a separate resource group that it creates, some network resources are used from a separate group. Some cluster Operators must be able to access resources in both resource groups. For example, the Machine API controller attaches NICS for the virtual machines that it creates to subnets from the networking resource group. Your VNet must meet the following characteristics: The VNet's CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines. The VNet and its subnets must belong to the same resource group, and the subnets must be configured to use Azure-assigned DHCP IP addresses instead of static IP addresses. You must provide two subnets within your VNet, one for the control plane machines and one for the compute machines. Because Azure distributes machines in different availability zones within the region that you specify, your cluster will have high availability by default. 
Note By default, if you specify availability zones in the install-config.yaml file, the installation program distributes the control plane machines and the compute machines across these availability zones within a region . To ensure high availability for your cluster, select a region with at least three availability zones. If your region contains fewer than three availability zones, the installation program places more than one control plane machine in the available zones. To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the specified subnets exist. There are two private subnets, one for the control plane machines and one for the compute machines. The subnet CIDRs belong to the machine CIDR that you specified. Machines are not provisioned in availability zones that you do not provide private subnets for. If required, the installation program creates public load balancers that manage the control plane and worker nodes, and Azure allocates a public IP address to them. Note If you destroy a cluster that uses an existing VNet, the VNet is not deleted. 10.4.1.1. Network security group requirements The network security groups for the subnets that host the compute and control plane machines require specific access to ensure that the cluster communication is correct. You must create rules to allow access to the required cluster communication ports. Important The network security group rules must be in place before you install the cluster. If you attempt to install a cluster without the required access, the installation program cannot reach the Azure APIs, and installation fails. Table 10.1. Required ports Port Description Control plane Compute 80 Allows HTTP traffic x 443 Allows HTTPS traffic x 6443 Allows communication to the control plane machines x 22623 Allows internal communication to the machine config server for provisioning machines x Important Currently, there is no supported way to block or restrict the machine config server endpoint. The machine config server must be exposed to the network so that newly-provisioned machines, which have no existing configuration or state, are able to fetch their configuration. In this model, the root of trust is the certificate signing requests (CSR) endpoint, which is where the kubelet sends its certificate signing request for approval to join the cluster. Because of this, machine configs should not be used to distribute sensitive information, such as secrets and certificates. To ensure that the machine config server endpoints, ports 22623 and 22624, are secured in bare metal scenarios, customers must configure proper network policies. Because cluster components do not modify the user-provided network security groups, which the Kubernetes controllers update, a pseudo-network security group is created for the Kubernetes controller to modify without impacting the rest of the environment. Table 10.2. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 
500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If you configure an external NTP time server, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 10.3. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports Additional resources About the OpenShift SDN network plugin 10.4.2. Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, storage, and load balancers, but not networking-related components such as VNets, subnet, or ingress rules. The Azure credentials that you use when you create your cluster do not need the networking permissions that are required to make VNets and core networking components within the VNet, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as load balancers, security groups, storage accounts, and nodes. 10.4.3. Isolation between clusters Because the cluster is unable to modify network security groups in an existing subnet, there is no way to isolate clusters from each other on the VNet. 10.5. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.13, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 10.6. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. 
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 10.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. 
For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 10.8. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. 10.8.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 10.8.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 10.4. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . 
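Taken together, the required parameters described so far form the top of a minimal install-config.yaml file. The values below are illustrative only, reusing the example domain and cluster name from this table; the remaining required entries, such as platform and pullSecret, are described in the rows that follow:

apiVersion: v1
baseDomain: example.com
metadata:
  name: dev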
platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 10.8.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 10.5. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The Red Hat OpenShift Networking network plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power Virtual Server, the default value is 192.168.0.0/24 . 
An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 10.8.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 10.6. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array cpuPartitioningMode Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . Not all installation options support the 64-bit ARM architecture. To verify if your installation option is supported on your platform, see Supported installation methods for different platforms in Selecting a cluster installation method and preparing it for users . String compute: hyperthreading: Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . featureSet Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. 
For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . Not all installation options support the 64-bit ARM architecture. To verify if your installation option is supported on your platform, see Supported installation methods for different platforms in Selecting a cluster installation method and preparing it for users . String controlPlane: hyperthreading: Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . sshKey The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . Not all CCO modes are supported for all cloud providers. 
For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. Important Setting this parameter to Manual enables alternatives to storing administrator-level secrets in the kube-system project, which require additional configuration steps. For more information, see "Alternatives to storing administrator-level secrets in the kube-system project". 10.8.1.4. Additional Azure configuration parameters Additional Azure configuration parameters are described in the following table. Note By default, if you specify availability zones in the install-config.yaml file, the installation program distributes the control plane machines and the compute machines across these availability zones within a region . To ensure high availability for your cluster, select a region with at least three availability zones. If your region contains fewer than three availability zones, the installation program places more than one control plane machine in the available zones. Table 10.7. Additional Azure parameters Parameter Description Values compute.platform.azure.encryptionAtHost Enables host-level encryption for compute machines. You can enable this encryption alongside user-managed server-side encryption. This feature encrypts temporary, ephemeral, cached and un-managed disks on the VM host. This is not a prerequisite for user-managed server-side encryption. true or false . The default is false . compute.platform.azure.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 128 . compute.platform.azure.osDisk.diskType Defines the type of disk. standard_LRS , premium_LRS , or standardSSD_LRS . The default is premium_LRS . compute.platform.azure.ultraSSDCapability Enables the use of Azure ultra disks for persistent storage on compute nodes. This requires that your Azure region and zone have ultra disks available. Enabled , Disabled . The default is Disabled . compute.platform.azure.osDisk.diskEncryptionSet.resourceGroup The name of the Azure resource group that contains the disk encryption set from the installation prerequisites. This resource group should be different from the resource group where you install the cluster to avoid deleting your Azure encryption key when the cluster is destroyed. This value is only necessary if you intend to install the cluster with user-managed disk encryption. String, for example production_encryption_resource_group . compute.platform.azure.osDisk.diskEncryptionSet.name The name of the disk encryption set that contains the encryption key from the installation prerequisites. String, for example production_disk_encryption_set . compute.platform.azure.osDisk.diskEncryptionSet.subscriptionId Defines the Azure subscription of the disk encryption set where the disk encryption set resides. This secondary disk encryption set is used to encrypt compute machines. String, in the format 00000000-0000-0000-0000-000000000000 . compute.platform.azure.vmNetworkingType Enables accelerated networking. Accelerated networking enables single root I/O virtualization (SR-IOV) to a VM, improving its networking performance. If instance type of compute machines support Accelerated networking, by default, the installer enables Accelerated networking, otherwise the default networking type is Basic . Accelerated or Basic . compute.platform.azure.type Defines the Azure instance type for compute machines. 
String compute.platform.azure.zones The availability zones where the installation program creates compute machines. String list controlPlane.platform.azure.type Defines the Azure instance type for control plane machines. String controlPlane.platform.azure.zones The availability zones where the installation program creates control plane machines. String list platform.azure.defaultMachinePlatform.encryptionAtHost Enables host-level encryption for compute machines. You can enable this encryption alongside user-managed server-side encryption. This feature encrypts temporary, ephemeral, cached, and un-managed disks on the VM host. This parameter is not a prerequisite for user-managed server-side encryption. true or false . The default is false . platform.azure.defaultMachinePlatform.osDisk.diskEncryptionSet.name The name of the disk encryption set that contains the encryption key from the installation prerequisites. String, for example, production_disk_encryption_set . platform.azure.defaultMachinePlatform.osDisk.diskEncryptionSet.resourceGroup The name of the Azure resource group that contains the disk encryption set from the installation prerequisites. To avoid deleting your Azure encryption key when the cluster is destroyed, this resource group must be different from the resource group where you install the cluster. This value is necessary only if you intend to install the cluster with user-managed disk encryption. String, for example, production_encryption_resource_group . platform.azure.defaultMachinePlatform.osDisk.diskEncryptionSet.subscriptionId Defines the Azure subscription of the disk encryption set where the disk encryption set resides. This secondary disk encryption set is used to encrypt compute machines. String, in the format 00000000-0000-0000-0000-000000000000 . platform.azure.defaultMachinePlatform.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 128 . platform.azure.defaultMachinePlatform.osDisk.diskType Defines the type of disk. premium_LRS or standardSSD_LRS . The default is premium_LRS . platform.azure.defaultMachinePlatform.type The Azure instance type for control plane and compute machines. The Azure instance type. platform.azure.defaultMachinePlatform.zones The availability zones where the installation program creates compute and control plane machines. String list. controlPlane.platform.azure.encryptionAtHost Enables host-level encryption for control plane machines. You can enable this encryption alongside user-managed server-side encryption. This feature encrypts temporary, ephemeral, cached and un-managed disks on the VM host. This is not a prerequisite for user-managed server-side encryption. true or false . The default is false . controlPlane.platform.azure.osDisk.diskEncryptionSet.resourceGroup The name of the Azure resource group that contains the disk encryption set from the installation prerequisites. This resource group should be different from the resource group where you install the cluster to avoid deleting your Azure encryption key when the cluster is destroyed. This value is only necessary if you intend to install the cluster with user-managed disk encryption. String, for example production_encryption_resource_group . controlPlane.platform.azure.osDisk.diskEncryptionSet.name The name of the disk encryption set that contains the encryption key from the installation prerequisites. String, for example production_disk_encryption_set . 
controlPlane.platform.azure.osDisk.diskEncryptionSet.subscriptionId Defines the Azure subscription of the disk encryption set where the disk encryption set resides. This secondary disk encryption set is used to encrypt control plane machines. String, in the format 00000000-0000-0000-0000-000000000000 . controlPlane.platform.azure.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 1024 . controlPlane.platform.azure.osDisk.diskType Defines the type of disk. premium_LRS or standardSSD_LRS . The default is premium_LRS . controlPlane.platform.azure.ultraSSDCapability Enables the use of Azure ultra disks for persistent storage on control plane machines. This requires that your Azure region and zone have ultra disks available. Enabled , Disabled . The default is Disabled . controlPlane.platform.azure.vmNetworkingType Enables accelerated networking. Accelerated networking enables single root I/O virtualization (SR-IOV) to a VM, improving its networking performance. If instance type of control plane machines support Accelerated networking, by default, the installer enables Accelerated networking, otherwise the default networking type is Basic . Accelerated or Basic . platform.azure.baseDomainResourceGroupName The name of the resource group that contains the DNS zone for your base domain. String, for example production_cluster . platform.azure.resourceGroupName The name of an already existing resource group to install your cluster to. This resource group must be empty and only used for this specific cluster; the cluster components assume ownership of all resources in the resource group. If you limit the service principal scope of the installation program to this resource group, you must ensure all other resources used by the installation program in your environment have the necessary permissions, such as the public DNS zone and virtual network. Destroying the cluster by using the installation program deletes this resource group. String, for example existing_resource_group . platform.azure.outboundType The outbound routing strategy used to connect your cluster to the internet. If you are using user-defined routing, you must have pre-existing networking available where the outbound routing has already been configured prior to installing a cluster. The installation program is not responsible for configuring user-defined routing. LoadBalancer or UserDefinedRouting . The default is LoadBalancer . platform.azure.region The name of the Azure region that hosts your cluster. Any valid region name, such as centralus . platform.azure.zone List of availability zones to place machines in. For high availability, specify at least two zones. List of zones, for example ["1", "2", "3"] . platform.azure.defaultMachinePlatform.ultraSSDCapability Enables the use of Azure ultra disks for persistent storage on control plane and compute machines. This requires that your Azure region and zone have ultra disks available. Enabled , Disabled . The default is Disabled . platform.azure.networkResourceGroupName The name of the resource group that contains the existing VNet that you want to deploy your cluster to. This name cannot be the same as the platform.azure.baseDomainResourceGroupName . String. platform.azure.virtualNetwork The name of the existing VNet that you want to deploy your cluster to. String. platform.azure.controlPlaneSubnet The name of the existing subnet in your VNet that you want to deploy your control plane machines to. 
Valid CIDR, for example 10.0.0.0/16 . platform.azure.computeSubnet The name of the existing subnet in your VNet that you want to deploy your compute machines to. Valid CIDR, for example 10.0.0.0/16 . platform.azure.cloudName The name of the Azure cloud environment that is used to configure the Azure SDK with the appropriate Azure API endpoints. If empty, the default value AzurePublicCloud is used. Any valid cloud environment, such as AzurePublicCloud or AzureUSGovernmentCloud . platform.azure.defaultMachinePlatform.vmNetworkingType Enables accelerated networking. Accelerated networking enables single root I/O virtualization (SR-IOV) to a VM, improving its networking performance. Accelerated or Basic . If instance type of control plane and compute machines support Accelerated networking, by default, the installer enables Accelerated networking, otherwise the default networking type is Basic . Note You cannot customize Azure Availability Zones or Use tags to organize your Azure resources with an Azure cluster. 10.8.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 10.8. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see RHEL Architectures . Important You are required to use Azure virtual machines that have the premiumIO parameter set to true . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 10.8.3. Tested instance types for Azure The following Microsoft Azure instance types have been tested with OpenShift Container Platform. Example 10.1. 
Machine types based on 64-bit x86 architecture standardBasv2Family standardBSFamily standardBsv2Family standardDADSv5Family standardDASv4Family standardDASv5Family standardDCACCV5Family standardDCADCCV5Family standardDCADSv5Family standardDCASv5Family standardDCSv3Family standardDCSv2Family standardDDCSv3Family standardDDSv4Family standardDDSv5Family standardDLDSv5Family standardDLSv5Family standardDSFamily standardDSv2Family standardDSv2PromoFamily standardDSv3Family standardDSv4Family standardDSv5Family standardEADSv5Family standardEASv4Family standardEASv5Family standardEBDSv5Family standardEBSv5Family standardECACCV5Family standardECADCCV5Family standardECADSv5Family standardECASv5Family standardEDSv4Family standardEDSv5Family standardEIADSv5Family standardEIASv4Family standardEIASv5Family standardEIBDSv5Family standardEIBSv5Family standardEIDSv5Family standardEISv3Family standardEISv5Family standardESv3Family standardESv4Family standardESv5Family standardFXMDVSFamily standardFSFamily standardFSv2Family standardGSFamily standardHBrsv2Family standardHBSFamily standardHBv4Family standardHCSFamily standardHXFamily standardLASv3Family standardLSFamily standardLSv2Family standardLSv3Family standardMDSHighMemoryv3Family standardMDSMediumMemoryv2Family standardMDSMediumMemoryv3Family standardMIDSHighMemoryv3Family standardMIDSMediumMemoryv2Family standardMISHighMemoryv3Family standardMISMediumMemoryv2Family standardMSFamily standardMSHighMemoryv3Family standardMSMediumMemoryv2Family standardMSMediumMemoryv3Family StandardNCADSA100v4Family Standard NCASv3_T4 Family standardNCSv3Family standardNDSv2Family StandardNGADSV620v1Family standardNPSFamily StandardNVADSA10v5Family standardNVSv3Family standardXEISv4Family 10.8.4. Sample customized install-config.yaml file for Azure You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. 
apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id zones: 9 - "1" - "2" - "3" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 12 region: usgovvirginia resourceGroupName: existing_resource_group 13 networkResourceGroupName: vnet_resource_group 14 virtualNetwork: vnet 15 controlPlaneSubnet: control_plane_subnet 16 computeSubnet: compute_subnet 17 outboundType: UserDefinedRouting 18 cloudName: AzureUSGovernmentCloud 19 pullSecret: '{"auths": ...}' 20 fips: false 21 sshKey: ssh-ed25519 AAAA... 22 publish: Internal 23 1 10 20 Required. 2 6 If you do not provide these parameters and values, the installation program provides the default value. 3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3 , for your machines if you disable simultaneous multithreading. 5 8 You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes is 1024 GB. 9 Specify a list of zones to deploy your machines to. For high availability, specify at least two zones. 11 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 12 Specify the name of the resource group that contains the DNS zone for your base domain. 13 Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster. 14 If you use an existing VNet, specify the name of the resource group that contains it. 15 If you use an existing VNet, specify its name. 16 If you use an existing VNet, specify the name of the subnet to host the control plane machines. 17 If you use an existing VNet, specify the name of the subnet to host the compute machines. 18 You can customize your own outbound routing. 
Configuring user-defined routing prevents exposing external endpoints in your cluster. User-defined routing for egress requires deploying your cluster to an existing VNet. 19 Specify the name of the Azure cloud environment to deploy your cluster to. Set AzureUSGovernmentCloud to deploy to a Microsoft Azure Government (MAG) region. The default value is AzurePublicCloud . 21 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. Important OpenShift Container Platform 4.13 is based on Red Hat Enterprise Linux (RHEL) 9.2. RHEL 9.2 cryptographic modules have not yet been submitted for FIPS validation. For more information, see "About this release" in the 4.13 OpenShift Container Platform Release Notes . 22 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 23 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the internet. The default value is External . 10.8.5. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. 
The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. Additional resources For more details about Accelerated Networking, see Accelerated Networking for Microsoft Azure VMs . 10.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Verify the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 10.10. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.13. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.13 macOS arm64 Client entry. Unpack and unzip the archive. 
Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 10.11. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 10.12. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.13, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 10.13. steps Customize your cluster . If necessary, you can opt out of remote health reporting . | [
"The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"mkdir <installation_directory>",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id zones: 9 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 12 region: usgovvirginia resourceGroupName: existing_resource_group 13 networkResourceGroupName: vnet_resource_group 14 virtualNetwork: vnet 15 controlPlaneSubnet: control_plane_subnet 16 computeSubnet: compute_subnet 17 outboundType: UserDefinedRouting 18 cloudName: AzureUSGovernmentCloud 19 pullSecret: '{\"auths\": ...}' 20 fips: false 21 sshKey: ssh-ed25519 AAAA... 22 publish: Internal 23",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_azure/installing-azure-government-region |
4.173. matahari | 4.173. matahari 4.173.1. RHBA-2011:1569 - matahari bug fix and enhancement update Updated matahari packages that fix multiple bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The matahari packages provide a set of APIs for operating system management that are exposed to remote access over the Qpid Management Framework (QMF). Bug Fixes BZ# 688193 Prior to this update, the matahari services agent could not monitor the status of a system service. As a consequence, matahari could not be used in high-availability (HA) environments where status monitoring is a requirement. With this update, the user of the services agent can specify the frequency for the status check and the matahari services agent can now provide service health information for applications such as HA. BZ# 714249 Prior to this update, the wrong CPU core count was returned when requesting the CPU core count from the matahari host agent. With this update, matahari and the supporting library, sigar, have been modified to ensure that the core count is not improperly affected by hyperthreading support. Now, the expected CPU core count is returned. BZ# 729063 Prior to this update, the host agent included only time related metadata when producing heartbeat events. As a consequence, it was problematic to associate heartbeat events with the host they originated from, especially in logs. With this update, the heartbeat events produced by the Host agent include the hostname and the hardware's Universally Unique Identifier (UUID) as additional metadata. Now, it is easier to associate the host agent heartbeat events with the host they originated from. BZ# 732498 Prior to this update, the data address for matahari QMF objects was inconsistent. As a consequence, the data address for some agents was the class name, for others it was a UUID. This update uses consistently the class name as the data address. Now, the data address across all matahari agents is consistent. Enhancements BZ# 663468 Prior to this update, matahari only supported IBM eServer xSeries 366, AMD64 and Intel 64 architectures. This update adds support for PowerPC and IBM System z architectures as a Technology Preview. BZ# 688181 This update adds support for QMF to allow for kerberos authentication. BZ# 688191 With this update, matahari includes an agent for system configuration to support updating the system configuration with both puppet and augeas. BZ# 735419 Prior to this update, users could only specify a hostname or IP address. As a consequence, a dynamically updated list of brokers to connect to was not provided. With this update, matahari supports querying for DNS SRV records to determine the broker, or the list of brokers to connect to. Now administrators can use DNS SRV to control where matahari agents connect to. All users of matahari are advised to install these packages, which fix these bugs and add these enhancements. 4.173.2. RHBA-2012:0511 - matahari bug fix update Updated matahari packages that fix one bug are now available for Red Hat Enterprise Linux 6. The matahari packages provide a set of APIs for operating system management that are exposed to remote access over the Qpid Management Framework (QMF). Bug Fix BZ# 806766 Qpid APIs using the libpidclient and libqpidcommon libraries are not application binary interface (ABI) stable. These dependencies have been removed so that Qpid rebuilds do not affect the matahari packages. 
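A typical way to apply such an update on a subscribed Red Hat Enterprise Linux 6 system is sketched below; this is a generic example rather than a step taken from the advisory, and yum resolves the exact set of matahari subpackages installed on the host:

USD yum update matahari*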
All users of matahari are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/matahari |
3.8. Cache Hints and Options | A query cache hint can be used in the following ways: Indicate that a user query is eligible for result set caching and set the cache entry memory preference, time to live, and so forth. Set the materialized view memory preference, time to live, or updatability. Indicate that a virtual procedure should be cacheable and set the cache entry memory preference, time to live, and so on. The cache hint should appear at the beginning of the SQL. It will not have any effect on INSERT/UPDATE/DELETE statements or INSTEAD OF TRIGGERS. pref_mem - if present, indicates that the cached results should prefer to remain in memory. The results may still be paged out based upon memory pressure. Important Care should be taken not to overuse the pref_mem option. The memory preference is implemented with Java soft references. While soft references are effective at preventing out-of-memory conditions, too much memory held by soft references can limit the effective working memory. Consult your JVM options for clearing soft references if you need to tune their behavior. ttl:n - if present, n indicates the time to live value in milliseconds. The default value for result set caching is the default expiration for the corresponding Infinispan cache. There is no default time to live for materialized views. updatable - if present, indicates that the cached results can be updated. This defaults to false for materialized views and to true for result set cache entries. scope - There are three different cache scopes: session - cached only for the current session, user - cached for any session by the current user, vdb - cached for any user connected to the same vdb. For cached queries, the presence of the scope overrides the computed scope. Materialized views, on the other hand, default to the vdb scope. For materialized views, explicitly setting the session or user scopes will result in a non-replicated session-scoped materialized view. The pref_mem, ttl, updatable, and scope values for a materialized view may also be set via extension properties on the view (by using the teiid_rel namespace with MATVIEW_PREFER_MEMORY, MATVIEW_TTL, MATVIEW_UPDATABLE, and MATVIEW_SCOPE respectively). If both are present, the use of an extension property supersedes the usage of the cache hint. Note The form of the query hint must be matched exactly for the hint to be effective. For a user query, if the hint is not specified correctly, e.g. /*+ cach(pref_mem) */, it will not be used by the engine, nor will there be an informational log. It is currently recommended that you verify in your testing that the user command in the query plan has retained the proper hint. Individual queries may override the use of cached results by specifying OPTION NOCACHE on the query. Zero or more fully qualified view or procedure names may be specified to exclude using their cached results. If no names are specified, cached results will not be used transitively. In this case, no cached results will be used at all: In this case, only the vg1 and vg3 caches will be skipped. vg2 or any cached results nested under vg1 and vg3 will be used: OPTION NOCACHE may be specified in procedure or view definitions. In that way, transformations can specify to always use real-time data obtained directly from sources. | [
"/*+ cache[([pref_mem] [ttl:n] [updatable])] [scope:(session|user|vdb)] */ sql",
"SELECT * from vg1, vg2, vg3 WHERE ... OPTION NOCACHE",
"SELECT * from vg1, vg2, vg3 WHERE ... OPTION NOCACHE vg1, vg3"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_5_caching_guide/cache_hints_and_options |
Chapter 16. Hardware Configuration | 16.1. Tablets 16.1.1. Adding Support for a New Tablet libwacom is a tablet information client library storing data about Wacom models. This library is used by both the gnome-settings-daemon component and the Wacom Tablet settings panel in GNOME. To add support for a new tablet into libwacom , a new tablet definition file must be created. Tablet definition files are included in the libwacom-data package. If this package is installed, the tablet definition files are then locally available in the /usr/share/libwacom/ directory. To use the screen mapping correctly, support for your tablet must be included in the libwacom database and udev rules file. Important A common indicator that a device is not supported by libwacom is that it works normally in a GNOME session, but the device is not correctly mapped to the screen. Procedure 16.1. How to add tablet descriptions Use the libwacom-list-local-devices tool to list all local devices recognized by libwacom . If your device is not listed, but it is available as an event device in the kernel (see /proc/bus/input/devices ) and in the X session (see xinput list), the device is missing from libwacom 's database. Create a new tablet definition file. Use data/wacom.example below and edit the respective lines. Note The new .tablet file may already be available, so check the upstream repository first at https://sourceforge.net/p/linuxwacom/libwacom/ci/master/tree/ . If you find your tablet model on the list, it is sufficient to copy the file to the local machine. Add and install the new file with the .tablet suffix: Once installed, the tablet is part of libwacom 's database. The tablet is then available through libwacom-list-local-devices . Create a new file /etc/udev/rules.d/99-libwacom-override.rules with the following content so that your settings are not overwritten: Reboot your system. 16.1.2. Where Is the Wacom Tablet Configuration Stored? Configuration for your Wacom tablet is stored in GSettings in the /org/gnome/settings-daemon/peripherals/wacom/ machine-id - device-id key, where machine-id is a D-Bus machine ID, and device-id is a tablet device ID. The configuration schema for the tablet is org.gnome.settings-daemon.peripherals.wacom . Similarly, stylus configuration is stored in the /org/gnome/settings-daemon/peripherals/wacom/ device-id / tool-id key, where tool-id is the identifier for the stylus used for professional ranges. For the consumer ranges with no support for tool-id , a generic identifier is used instead. The configuration schema for the stylus is org.gnome.settings-daemon.peripherals.wacom.stylus , and for the eraser org.gnome.settings-daemon.peripherals.wacom.eraser . To get the full list of tablet configuration paths used on a particular machine, you can use the gsd-list-wacom tool, which is provided by the gnome-settings-daemon-devel package. To verify that the gnome-settings-daemon-devel package is installed on the system, make sure that the system is subscribed to the Optional channel, and run the following command: To learn how to subscribe the system to the Optional channel, read the following resource: https://access.redhat.com/solutions/392003 After verifying that the package is installed, run the following command: Note that using machine-id , device-id , and tool-id in configuration paths allows for shared home directories with independent tablet configuration per machine. 16.1.3.
When Sharing Home Directories Between Machines, the Wacom Settings Only Apply to One Machine This is because the D-Bus machine ID ( machine-id ) for your Wacom tablet is included in the configuration path of the /org/gnome/settings-daemon/peripherals/wacom/ machine-id - device-id GSettings key, which stores your tablet settings. | [
"Example model file description for a tablet The product is the product name announced by the kernel Product=Intuos 4 WL 6x9 Vendor name of this tablet Vendor=Wacom DeviceMatch includes the bus (usb, serial), the vendor ID and the actual product ID DeviceMatch=usb:056a:00bc Class of the tablet. Valid classes include Intuos3, Intuos4, Graphire, Bamboo, Cintiq Class=Intuos4 Exact model of the tablet, not including the size. Model=Intuos 4 Wireless Width in inches, as advertised by the manufacturer Width=9 Height in inches, as advertised by the manufacturer Height=6 Optional features that this tablet supports Some features are dependent on the actual tool used, e.g. not all styli have an eraser and some styli have additional custom axes (e.g. the airbrush pen). These features describe those available on the tablet. # Features not set in a file default to false/0 This tablet supports styli (and erasers, if present on the actual stylus) Stylus=true This tablet supports touch. Touch=false This tablet has a touch ring (Intuos4 and Cintiq 24HD) Ring=true This tablet has a second touch ring (Cintiq 24HD) Ring2=false This tablet has a vertical/horizontal scroll strip VStrip=false HStrip=false Number of buttons on the tablet Buttons=9 This tablet is built-in (most serial tablets, Cintiqs) BuiltIn=false",
"cp the-new-file.tablet /usr/share/libwacom/",
"ACTION!=\"add|change\", GOTO=\"libwacom_end\" KERNEL!=\"event[0-9]*\", GOTO=\"libwacom_end\" [new tablet match entries go here] LABEL=\"libwacom_end\"",
"yum install gnome-settings-daemon-devel",
"/usr/libexec/gsd-list-wacom"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/desktop_migration_and_administration_guide/input-devices-configuration |
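The tablet procedure above condenses into a short command sequence. The following is a minimal sketch, assuming a hypothetical definition file named my-tablet.tablet and a local GNOME session; the dconf path is only illustrative, because the real GSettings path also contains the machine's D-Bus machine ID and the tablet's device ID.

# Check whether the tablet is already known to libwacom
libwacom-list-local-devices

# Install the (hypothetical) definition file and the udev override
cp my-tablet.tablet /usr/share/libwacom/
vi /etc/udev/rules.d/99-libwacom-override.rules    # add the match entries shown above
reboot

# After the reboot, list the configuration paths used on this machine
/usr/libexec/gsd-list-wacom

# Inspect the stored tablet, stylus, and eraser settings (path is illustrative)
dconf dump /org/gnome/settings-daemon/peripherals/wacom/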
Chapter 1. Preparing to install on vSphere | Chapter 1. Preparing to install on vSphere 1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . If you use a firewall and plan to use Telemetry, you configured the firewall to allow the sites required by your cluster. You reviewed your VMware platform licenses. Red Hat does not place any restrictions on your VMware licenses, but some VMware infrastructure components require licensing. 1.2. Choosing a method to install OpenShift Container Platform on vSphere You can install OpenShift Container Platform with the Assisted Installer . This method requires no setup for the installer, and is ideal for connected environments like vSphere. Installing with the Assisted Installer also provides integration with vSphere, enabling autoscaling. See Installing an on-premise cluster using the Assisted Installer for additional details. You can also install OpenShift Container Platform on vSphere by using installer-provisioned or user-provisioned infrastructure. Installer-provisioned infrastructure is ideal for installing in environments with air-gapped/restricted networks, where the installation program provisions the underlying infrastructure for the cluster. You can also install OpenShift Container Platform on infrastructure that you provide. If you do not use infrastructure that the installation program provisions, you must manage and maintain the cluster resources yourself. See the Installation process for more information about installer-provisioned and user-provisioned installation processes. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the vSphere platform and the installation process of OpenShift Container Platform. Use the user-provisioned infrastructure installation instructions as a guide; you are free to create the required resources through other methods. 1.2.1. Installer-provisioned infrastructure installation of OpenShift Container Platform on vSphere Installer-provisioned infrastructure allows the installation program to preconfigure and automate the provisioning of resources required by OpenShift Container Platform. Installing a cluster on vSphere : You can install OpenShift Container Platform on vSphere by using installer-provisioned infrastructure installation with no customization. Installing a cluster on vSphere with customizations : You can install OpenShift Container Platform on vSphere by using installer-provisioned infrastructure installation with the default customization options. Installing a cluster on vSphere with network customizations : You can install OpenShift Container Platform on installer-provisioned vSphere infrastructure, with network customizations. You can customize your OpenShift Container Platform network configuration during installation, so that your cluster can coexist with your existing IP address allocations and adhere to your network requirements. Installing a cluster on vSphere in a restricted network : You can install a cluster on VMware vSphere infrastructure in a restricted network by creating an internal mirror of the installation release content. You can use this method to deploy OpenShift Container Platform on an internal network that is not visible to the internet. 1.2.2. 
User-provisioned infrastructure installation of OpenShift Container Platform on vSphere User-provisioned infrastructure requires the user to provision all resources required by OpenShift Container Platform. Installing a cluster on vSphere with user-provisioned infrastructure : You can install OpenShift Container Platform on VMware vSphere infrastructure that you provision. Installing a cluster on vSphere with network customizations with user-provisioned infrastructure : You can install OpenShift Container Platform on VMware vSphere infrastructure that you provision with customized network configuration options. Installing a cluster on vSphere in a restricted network with user-provisioned infrastructure : OpenShift Container Platform can be installed on VMware vSphere infrastructure that you provision in a restricted network. 1.3. VMware vSphere infrastructure requirements You must install an OpenShift Container Platform cluster on one of the following versions of a VMware vSphere instance that meets the requirements for the components that you use: Version 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later Version 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later You can host the VMware vSphere infrastructure on-premise or on a VMware Cloud Verified provider that meets the requirements outlined in the following table: Table 1.1. Version requirements for vSphere virtual environments Virtual environment product Required version VMware virtual hardware 15 or later vSphere ESXi hosts 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later vCenter host 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later Table 1.2. Minimum supported vSphere version for VMware components Component Minimum supported versions Description Hypervisor vSphere 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; vSphere 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later with virtual hardware version 15 This hypervisor version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. For more information about supported hardware on the latest version of Red Hat Enterprise Linux (RHEL) that is compatible with RHCOS, see Hardware on the Red Hat Customer Portal. Storage with in-tree drivers vSphere 7.0 Update 2 and later; 8.0 Update 1 or later This plugin creates vSphere storage by using the in-tree storage drivers for vSphere included in OpenShift Container Platform. Optional: Networking (NSX-T) vSphere 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; vSphere 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later For more information about the compatibility of NSX and OpenShift Container Platform, see the Release Notes section of VMware's NSX container plugin documentation . CPU micro-architecture x86-64-v2 or higher OpenShift 4.13 and later are based on RHEL 9.2 host operating system which raised the microarchitecture requirements to x86-64-v2. See the RHEL Microarchitecture requirements documentation . You can verify compatibility by following the procedures outlined in this KCS article . Important You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. See Edit Time Configuration for a Host in the VMware documentation. 
Additional resources For more information about CSI automatic migration, see "Overview" in VMware vSphere CSI Driver Operator . 1.4. VMware vSphere CSI Driver Operator requirements To install the vSphere CSI Driver Operator, the following requirements must be met: VMware vSphere version: 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later vCenter version: 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later Virtual machines of hardware version 15 or later No third-party vSphere CSI driver already installed in the cluster If a third-party vSphere CSI driver is present in the cluster, OpenShift Container Platform does not overwrite it. The presence of a third-party vSphere CSI driver prevents OpenShift Container Platform from updating to OpenShift Container Platform 4.13 or later. Note The VMware vSphere CSI Driver Operator is supported only on clusters deployed with platform: vsphere in the installation manifest. Additional resources To remove a third-party vSphere CSI driver, see Removing a third-party vSphere CSI Driver . 1.5. Configuring the vSphere connection settings Updating the vSphere connection settings following an installation : For installations on vSphere using the Assisted Installer, you must manually update the vSphere connection settings to complete the installation. For installer-provisioned or user-provisioned infrastructure installations on vSphere, you can optionally validate or modify the vSphere connection settings at any time. 1.6. Uninstalling an installer-provisioned infrastructure installation of OpenShift Container Platform on vSphere Uninstalling a cluster on vSphere that uses installer-provisioned infrastructure : You can remove a cluster that you deployed on VMware vSphere infrastructure that used installer-provisioned infrastructure. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_vsphere/preparing-to-install-on-vsphere |
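One practical check against the x86-64-v2 requirement listed above is to ask glibc's dynamic loader which microarchitecture levels it detects on the host. This is a sketch under the assumption of a RHEL 9-based system with glibc 2.33 or later; the referenced KCS article remains the authoritative verification procedure.

# Print the CPU microarchitecture levels detected by glibc
/lib64/ld-linux-x86-64.so.2 --help | grep -E 'x86-64-v[0-9]'
# A line such as "x86-64-v2 (supported, searched)" indicates that the CPUs
# presented to the cluster virtual machines meet the x86-64-v2 baseline.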
Appendix A. The Ceph RESTful API specifications | Appendix A. The Ceph RESTful API specifications As a storage administrator, you can access the various Ceph sub-systems through the Ceph RESTful API endpoints. This is a reference guide for the available Ceph RESTful API methods. The available Ceph API endpoints: Section A.2, "Ceph summary" Section A.3, "Authentication" Section A.4, "Ceph File System" Section A.5, "Storage cluster configuration" Section A.6, "CRUSH rules" Section A.7, "Erasure code profiles" Section A.8, "Feature toggles" Section A.9, "Grafana" Section A.10, "Storage cluster health" Section A.11, "Host" Section A.12, "iSCSI" Section A.13, "Logs" Section A.14, "Ceph Manager modules" Section A.15, "Ceph Monitor" Section A.16, "Ceph OSD" Section A.17, "Ceph Object Gateway" Section A.18, "REST APIs for manipulating a role" Section A.19, "NFS Ganesha" Section A.20, "Ceph Orchestrator" Section A.21, "Pools" Section A.22, "Prometheus" Section A.23, "RADOS block device" Section A.24, "Performance counters" Section A.25, "Roles" Section A.26, "Services" Section A.27, "Settings" Section A.28, "Ceph task" Section A.29, "Telemetry" Section A.30, "Ceph users" A.1. Prerequisites An understanding of how to use a RESTful API. A healthy running Red Hat Ceph Storage cluster. The Ceph Manager dashboard module is enabled. A.2. Ceph summary The method reference for using the Ceph RESTful API summary endpoint to display the Ceph summary details. GET /api/summary Description Display a summary of Ceph details. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Additional Resources See the Ceph RESTful API chapter in the Red Hat Ceph Storage Developer Guide for more details. A.3. Authentication The method reference for using the Ceph RESTful API auth endpoint to initiate a session with Red Hat Ceph Storage. POST /api/auth Curl Example Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/auth/check Description Check the requirement for an authentication token. Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/auth/logout Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. 
Please check the response body for the stack trace. Additional Resources See the Ceph RESTful API chapter in the Red Hat Ceph Storage Developer Guide for more details. A.4. Ceph File System The method reference for using the Ceph RESTful API cephfs endpoint to manage Ceph File Systems (CephFS). GET /api/cephfs Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/cephfs/ FS_ID Parameters Replace FS_ID with the Ceph File System identifier string. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. DELETE /api/cephfs/ FS_ID /client/ CLIENT_ID Parameters Replace FS_ID with the Ceph File System identifier string. Replace CLIENT_ID with the Ceph client identifier string. Status Codes 202 Accepted - Operation is still executing. Please check the task queue. 204 No Content - Resource deleted. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/cephfs/ FS_ID /clients Parameters Replace FS_ID with the Ceph File System identifier string. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/cephfs/ FS_ID /get_root_directory Description The root directory that can not be fetched using the ls_dir API call. Parameters Replace FS_ID with the Ceph File System identifier string. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/cephfs/ FS_ID /ls_dir Description List directories for a given path. Parameters Replace FS_ID with the Ceph File System identifier string. Queries: path - The string value where you want to start the listing. The default path is / , if not given. depth - An integer value specifying the number of steps to go down the directory tree. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/cephfs/ FS_ID /mds_counters Parameters Replace FS_ID with the Ceph File System identifier string. Queries: counters - An integer value. 
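In practice, a client authenticates once against the auth endpoint and then passes the returned token with every later request. The curl sketch below ties the authentication and CephFS endpoints above together; the host, port, credentials, file system ID, and the versioned Accept header value are assumptions to adjust for your deployment.

# Obtain a token (endpoint and credentials are placeholders)
curl -k -X POST 'https://dashboard-host:8443/api/auth' \
  -H 'Accept: application/vnd.ceph.api.v1.0+json' \
  -H 'Content-Type: application/json' \
  -d '{"username": "admin", "password": "PASSWORD"}'

# Pass the "token" field from the response as a Bearer token,
# for example to read the MDS counters of file system ID 1
curl -k -X GET 'https://dashboard-host:8443/api/cephfs/1/mds_counters' \
  -H 'Accept: application/vnd.ceph.api.v1.0+json' \
  -H 'Authorization: Bearer TOKEN'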
Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/cephfs/ FS_ID /quota Description Display the CephFS quotas for the given path. Parameters Replace FS_ID with the Ceph File System identifier string. Queries: path - A required string value specifying the directory path. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. PUT /api/cephfs/ FS_ID /quota Description Sets the quota for a given path. Parameters Replace FS_ID with the Ceph File System identifier string. max_bytes - A string value defining the byte limit. max_files - A string value defining the file limit. path - A string value defining the path to the directory or file. Example Status Codes 200 OK - Okay. 202 Accepted - Operation is still executing, check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. DELETE /api/cephfs/ FS_ID /snapshot Description Remove a snapshot. Parameters Replace FS_ID with the Ceph File System identifier string. Queries: name - A required string value specifying the snapshot name. path - A required string value defining the path to the directory. Status Codes 202 Accepted - Operation is still executing, check the task queue. 204 No Content - Resource deleted. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/cephfs/ FS_ID /snapshot Description Create a snapshot. Parameters Replace FS_ID with the Ceph File System identifier string. name - A string value specifying the snapshot name. If no name is specified, then a name using the current time in RFC3339 UTC format is generated. path - A string value defining the path to the directory. Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing, check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. DELETE /api/cephfs/ FS_ID /tree Description Remove a directory. Parameters Replace FS_ID with the Ceph File System identifier string. Queries: path - A required string value defining the path to the directory. Status Codes 202 Accepted - Operation is still executing, check the task queue. 204 No Content - Resource deleted. 400 Bad Request - Operation exception. Please check the response body for details.
401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/cephfs/ FS_ID /tree Description Creates a directory. Parameters Replace FS_ID with the Ceph File System identifier string. path - A string value defining the path to the directory. Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing, check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Additional Resources See the Ceph RESTful API chapter in the Red Hat Ceph Storage Developer Guide for more details. A.5. Storage cluster configuration The method reference for using the Ceph RESTful API cluster_conf endpoint to manage the Red Hat Ceph Storage cluster. GET /api/cluster_conf Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/cluster_conf Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing, check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. PUT /api/cluster_conf Example Status Codes 200 OK - Okay. 202 Accepted - Operation is still executing, check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/cluster_conf/filter Description Display the storage cluster configuration by name. Parameters Queries: names - A string value for the configuration option names. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. DELETE /api/cluster_conf/ NAME Parameters Replace NAME with the storage cluster configuration name. Queries: section - A required string value. Status Codes 202 Accepted - Operation is still executing, check the task queue. 204 No Content - Resource deleted. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. 
GET /api/cluster_conf/ NAME Parameters Replace NAME with the storage cluster configuration name. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Additional Resources See the Ceph RESTful API chapter in the Red Hat Ceph Storage Developer Guide for more details. A.6. CRUSH rules The method reference for using the Ceph RESTful API crush_rule endpoint to manage the CRUSH rules. GET /api/crush_rule Description List the CRUSH rule configuration. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/crush_rule Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing, check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. DELETE /api/crush_rule/ NAME Parameters Replace NAME with the rule name. Status Codes 202 Accepted - Operation is still executing, check the task queue. 204 No Content - Resource deleted. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/crush_rule/ NAME Parameters Replace NAME with the rule name. Example Status Codes 202 Accepted - Operation is still executing, check the task queue. 204 No Content - Resource deleted. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Additional Resources See the Ceph RESTful API chapter in the Red Hat Ceph Storage Developer Guide for more details. A.7. Erasure code profiles The method reference for using the Ceph RESTful API erasure_code_profile endpoint to manage the profiles for erasure coding. GET /api/erasure_code_profile Description List erasure-coded profile information. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/erasure_code_profile Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing, check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 
403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. DELETE /api/erasure_code_profile/ NAME Parameters Replace NAME with the profile name. Status Codes 202 Accepted - Operation is still executing, check the task queue. 204 No Content - Resource deleted. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/erasure_code_profile/ NAME Parameters Replace NAME with the profile name. Example Status Codes 202 Accepted - Operation is still executing, check the task queue. 204 No Content - Resource deleted. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Additional Resources See the Ceph RESTful API chapter in the Red Hat Ceph Storage Developer Guide for more details. A.8. Feature toggles The method reference for using the Ceph RESTful API feature_toggles endpoint to manage the CRUSH rules. GET /api/feature_toggles Description List the features of Red Hat Ceph Storage. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Additional Resources See the Ceph RESTful API chapter in the Red Hat Ceph Storage Developer Guide for more details. A.9. Grafana The method reference for using the Ceph RESTful API grafana endpoint to manage Grafana. POST /api/grafana/dashboards Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/grafana/url Description List the Grafana URL instance. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/grafana/validation/ PARAMS Parameters Replace PARAMS with a string value. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Additional Resources See the Ceph RESTful API chapter in the Red Hat Ceph Storage Developer Guide for more details. A.10. 
Storage cluster health The method reference for using the Ceph RESTful API health endpoint to display the storage cluster health details and status. GET /api/health/full Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/health/minimal Description Display the storage cluster's minimal health report. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Additional Resources See the Ceph RESTful API chapter in the Red Hat Ceph Storage Developer Guide for more details. A.11. Host The method reference for using the Ceph RESTful API host endpoint to display host, also known as node, information. GET /api/host Description List the host specifications. Parameters Queries: sources - A string value of host sources. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/host Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. DELETE /api/host/ HOST_NAME Parameters Replace HOST_NAME with the name of the node. Status Codes 202 Accepted - Operation is still executing. Please check the task queue. 204 No Content - Resource deleted. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/host/ HOST_NAME Description Displays information on the given host. Parameters Replace HOST_NAME with the name of the node. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. PUT /api/host/ HOST_NAME Description Updates information for the given host. This method is only supported when the Ceph Orchestrator is enabled. Parameters Replace HOST_NAME with the name of the node. force - Force the host to enter maintenance mode. labels - A list of labels. maintenance - Enter or exit maintenance mode. update_labels - Updates the labels. Example Status Codes 200 OK - Okay. 
202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/host/ HOST_NAME /daemons Parameters Replace HOST_NAME with the name of the node. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/host/ HOST_NAME /devices Parameters Replace HOST_NAME with the name of the node. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/host/ HOST_NAME /identify_device Description Identify a device by switching on the device's light for a specified number of seconds. Parameters Replace HOST_NAME with the name of the node. device - The device id, such as, /dev/dm-0 or ABC1234DEF567-1R1234_ABC8DE0Q . duration - The number of seconds the device's LED should flash. Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/host/ HOST_NAME /inventory Description Display the inventory of the host. Parameters Replace HOST_NAME with the name of the node. Queries: refresh - A string value to trigger an asynchronous refresh. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/host/ HOST_NAME /smart Parameters Replace HOST_NAME with the name of the node. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Additional Resources See the Ceph RESTful API chapter in the Red Hat Ceph Storage Developer Guide for more details. A.12. iSCSI The method reference for using the Ceph RESTful API iscsi endpoint to manage iSCSI. GET /api/iscsi/discoveryauth Description View the iSCSI discovery authentication details. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 
403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. PUT /api/iscsi/discoveryauth Description Set the iSCSI discovery authentication. Parameters Queries: user - The required user name string. password - The required password string. mutual_user - The required mutual user name string. mutual_password - The required mutual password string. Example Status Codes 200 OK - Okay. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/iscsi/target Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/iscsi/target Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. DELETE /api/iscsi/target/ TARGET_IQN Parameters Replace TARGET_IQN with a path string. Status Codes 202 Accepted - Operation is still executing. Please check the task queue. 204 No Content - Resource deleted. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/iscsi/target/ TARGET_IQN Parameters Replace TARGET_IQN with a path string. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. PUT /api/iscsi/target/ TARGET_IQN Parameters Replace TARGET_IQN with a path string. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Additional Resources See the Ceph RESTful API chapter in the Red Hat Ceph Storage Developer Guide for more details. A.13. Logs The method reference for using the Ceph RESTful API logs endpoint to display log information. GET /api/logs/all Description View all the log configuration. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. 
Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Additional Resources See the Ceph RESTful API chapter in the Red Hat Ceph Storage Developer Guide for more details. A.14. Ceph Manager modules The method reference for using the Ceph RESTful API mgr/module endpoint to manage the Ceph Manager modules. GET /api/mgr/module Description View the list of managed modules. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/mgr/module/ MODULE_NAME Description Retrieve the values of the persistent configuration settings. Parameters Replace MODULE_NAME with the Ceph Manager module name. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. PUT /api/mgr/module/ MODULE_NAME Description Set the values of the persistent configuration settings. Parameters Replace MODULE_NAME with the Ceph Manager module name. config - The values of the module options. Example Status Codes 200 OK - Okay. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/mgr/module/ MODULE_NAME /disable Description Disable the given Ceph Manager module. Parameters Replace MODULE_NAME with the Ceph Manager module name. Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/mgr/module/ MODULE_NAME /enable Description Enable the given Ceph Manager module. Parameters Replace MODULE_NAME with the Ceph Manager module name. Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/mgr/module/ MODULE_NAME /options Description View the options for the given Ceph Manager module. Parameters Replace MODULE_NAME with the Ceph Manager module name. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 
403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Additional Resources See the Ceph RESTful API chapter in the Red Hat Ceph Storage Developer Guide for more details. A.15. Ceph Monitor The method reference for using the Ceph RESTful API monitor endpoint to display information on the Ceph Monitor. GET /api/monitor Description View Ceph Monitor details. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Additional Resources See the Ceph RESTful API chapter in the Red Hat Ceph Storage Developer Guide for more details. A.16. Ceph OSD The method reference for using the Ceph RESTful API osd endpoint to manage the Ceph OSDs. GET /api/osd Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/osd Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/osd/flags Description View the Ceph OSD flags. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. PUT /api/osd/flags Description Sets the Ceph OSD flags for the entire storage cluster. Parameters The recovery_deletes , sortbitwise , and pglog_hardlimit flags can not be unset. The purged_snapshots flag can not be set. Important You must include these four flags for a successful operation. Example Status Codes 200 OK - Okay. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/osd/flags/individual Description View the individual Ceph OSD flags. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. PUT /api/osd/flags/individual Description Updates the noout , noin , nodown , and noup flags for an individual subset of Ceph OSDs. 
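As a hedged illustration of the flag endpoints described here, reading the cluster-wide and per-OSD flags requires no request body; the host and token below are the same placeholders used earlier.

# Cluster-wide OSD flags
curl -k -X GET 'https://dashboard-host:8443/api/osd/flags' \
  -H 'Accept: application/vnd.ceph.api.v1.0+json' \
  -H 'Authorization: Bearer TOKEN'

# Individual OSD flags such as noout, noin, nodown, and noup
curl -k -X GET 'https://dashboard-host:8443/api/osd/flags/individual' \
  -H 'Accept: application/vnd.ceph.api.v1.0+json' \
  -H 'Authorization: Bearer TOKEN'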
Example Status Codes 200 OK - Okay. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/osd/safe_to_delete Parameters Queries: svc_ids - A required string of the Ceph OSD service identifier. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/osd/safe_to_destroy Description Check to see if the Ceph OSD is safe to destroy. Parameters Queries: ids - A required string of the Ceph OSD service identifier. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. DELETE /api/osd/ SVC_ID Parameters Replace SVC_ID with a string value for the Ceph OSD service identifier. Queries: preserve_id - A string value. force - A string value. Status Codes 202 Accepted - Operation is still executing. Please check the task queue. 204 No Content - Resource deleted. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/osd/ SVC_ID Description Returns collected data about a Ceph OSD. Parameters Replace SVC_ID with a string value for the Ceph OSD service identifier. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. PUT /api/osd/ SVC_ID Parameters Replace SVC_ID with a string value for the Ceph OSD service identifier. Example Status Codes 200 OK - Okay. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/osd/ SVC_ID /destroy Description Marks Ceph OSD as being destroyed. The Ceph OSD must be marked down before being destroyed. This operation keeps the Ceph OSD identifier intact, but removes the Cephx keys, configuration key data, and lockbox keys. Warning This operation renders the data permanently unreadable. Parameters Replace SVC_ID with a string value for the Ceph OSD service identifier. Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. 
Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/osd/ SVC_ID /devices Parameters Replace SVC_ID with a string value for the Ceph OSD service identifier. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/osd/ SVC_ID /histogram Description Returns the Ceph OSD histogram data. Parameters Replace SVC_ID with a string value for the Ceph OSD service identifier. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. PUT /api/osd/ SVC_ID /mark Description Marks a Ceph OSD out , in , down , and lost . Note A Ceph OSD must be marked down before marking it lost . Parameters Replace SVC_ID with a string value for the Ceph OSD service identifier. Example Status Codes 200 OK - Okay. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/osd/ SVC_ID /purge Description Removes the Ceph OSD from the CRUSH map. Note The Ceph OSD must be marked down before removal. Parameters Replace SVC_ID with a string value for the Ceph OSD service identifier. Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/osd/ SVC_ID /reweight Description Temporarily reweights the Ceph OSD. When a Ceph OSD is marked out , the OSD's weight is set to 0 . When the Ceph OSD is marked back in , the OSD's weight is set to 1 . Parameters Replace SVC_ID with a string value for the Ceph OSD service identifier. Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/osd/ SVC_ID /scrub Parameters Replace SVC_ID with a string value for the Ceph OSD service identifier. Queries: deep - A boolean value, either true or false . 
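For the scrub endpoint just described, the deep query parameter selects a deep scrub; the sketch below targets OSD 0 with the same placeholder host and token.

# Ask OSD 0 to start a deep scrub
curl -k -X POST 'https://dashboard-host:8443/api/osd/0/scrub?deep=true' \
  -H 'Accept: application/vnd.ceph.api.v1.0+json' \
  -H 'Authorization: Bearer TOKEN'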
Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/osd/ SVC_ID /smart Parameters Replace SVC_ID with a string value for the Ceph OSD service identifier. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Additional Resources See the Ceph RESTful API chapter in the Red Hat Ceph Storage Developer Guide for more details. A.17. Ceph Object Gateway The method reference for using the Ceph RESTful API rgw endpoint to manage the Ceph Object Gateway. GET /api/rgw/status Description Display the Ceph Object Gateway status. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/rgw/daemon Description Display the Ceph Object Gateway daemons. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/rgw/daemon/ SVC_ID Parameters Replace SVC_ID with the service identifier as a string value. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/rgw/site Parameters Queries: query - A string value. daemon_name - The name of the daemon as a string value. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Bucket Management GET /api/rgw/bucket Parameters Queries: stats - A boolean value for bucket statistics. daemon_name - The name of the daemon as a string value. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/rgw/bucket Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 
400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. DELETE /api/rgw/bucket/ BUCKET Parameters Replace BUCKET with the bucket name as a string value. Queries: purge_objects - A string value. daemon_name - The name of the daemon as a string value. Status Codes 202 Accepted - Operation is still executing. Please check the task queue. 204 No Content - Resource deleted. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/rgw/bucket/ BUCKET Parameters Replace BUCKET with the bucket name as a string value. Queries: daemon_name - The name of the daemon as a string value. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. PUT /api/rgw/bucket/ BUCKET Parameters Replace BUCKET with the bucket name as a string value. Example Status Codes 200 OK - Okay. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. User Management GET /api/rgw/user Description Display the Ceph Object Gateway users. Parameters Queries: daemon_name - The name of the daemon as a string value. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/rgw/user Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/rgw/user/get_emails Parameters Queries: daemon_name - The name of the daemon as a string value. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. DELETE /api/rgw/user/ UID Parameters Replace UID with the user identifier as a string. Queries: daemon_name - The name of the daemon as a string value. 
Status Codes 202 Accepted - Operation is still executing. Please check the task queue. 204 No Content - Resource deleted. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/rgw/user/ UID Parameters Replace UID with the user identifier as a string. Queries: daemon_name - The name of the daemon as a string value. stats - A boolean value for user statistics. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. PUT /api/rgw/user/ UID Parameters Replace UID with the user identifier as a string. Example Status Codes 200 OK - Okay. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. DELETE /api/rgw/user/ UID /capability Parameters Replace UID with the user identifier as a string. Queries: daemon_name - The name of the daemon as a string value. type - Required. A string value. perm - Required. A string value. Status Codes 202 Accepted - Operation is still executing. Please check the task queue. 204 No Content - Resource deleted. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/rgw/user/ UID /capability Parameters Replace UID with the user identifier as a string. Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. DELETE /api/rgw/user/ UID /key Parameters Replace UID with the user identifier as a string. Queries: daemon_name - The name of the daemon as a string value. key_type - A string value. subuser - A string value. access_key - A string value. Status Codes 202 Accepted - Operation is still executing. Please check the task queue. 204 No Content - Resource deleted. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/rgw/user/ UID /key Parameters Replace UID with the user identifier as a string. Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. 
Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/rgw/user/ UID /quota Parameters Replace UID with the user identifier as a string. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. PUT /api/rgw/user/ UID /quota Parameters Replace UID with the user identifier as a string. Example Status Codes 200 OK - Okay. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/rgw/user/ UID /subuser Parameters Replace UID with the user identifier as a string. Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. DELETE /api/rgw/user/ UID /subuser/ SUBUSER Parameters Replace UID with the user identifier as a string. Replace SUBUSER with the sub user name as a string. Queries: purge_keys - Set to false to not purge the keys. This only works for S3 subusers. daemon_name - The name of the daemon as a string value. Status Codes 202 Accepted - Operation is still executing. Please check the task queue. 204 No Content - Resource deleted. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Additional Resources See the Ceph RESTful API chapter in the Red Hat Ceph Storage Developer Guide for more details. A.18. REST APIs for manipulating a role In addition to the radosgw-admin role commands, you can use the REST APIs for manipulating a role. To invoke the REST admin APIs, create a user with admin caps. Example Create a role: Syntax Example Example response Get a role: Syntax Example Example response List a role: Syntax Example request Example response Update the assume role policy document: Syntax Example Update policy attached to a role: Syntax Example List permission policy names attached to a role: Syntax Example Get permission policy attached to a role: Syntax Example Delete policy attached to a role: Syntax Example Delete a role: Note You can delete a role only when it does not have any permission policy attached to it. Syntax Example Additional Resources See the Role management section in the Red Hat Ceph Storage Object Gateway Guide for details. A.19. 
NFS Ganesha The method reference for using the Ceph RESTful API nfs-ganesha endpoint to manage the Ceph NFS gateway. GET /api/nfs-ganesha/daemon Description View information on the NFS Ganesha daemons. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/nfs-ganesha/export Description View all of the NFS Ganesha exports. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/nfs-ganesha/export Description Creates a new NFS Ganesha export. Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. DELETE /api/nfs-ganesha/export/ CLUSTER_ID / EXPORT_ID Description Deletes a NFS Ganesha export. Parameters Replace CLUSTER_ID with the storage cluster identifier string. Replace EXPORT_ID with the export identifier as an integer. Queries: reload_daemons - A boolean value that triggers the reloading of the NFS Ganesha daemons configuration. Status Codes 202 Accepted - Operation is still executing. Please check the task queue. 204 No Content - Resource deleted. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/nfs-ganesha/export/ CLUSTER_ID / EXPORT_ID Description View NFS Ganesha export information. Parameters Replace CLUSTER_ID with the storage cluster identifier string. Replace EXPORT_ID with the export identifier as an integer. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. PUT /api/nfs-ganesha/export/ CLUSTER_ID / EXPORT_ID Description Update the NFS Ganesha export information. Parameters Replace CLUSTER_ID with the storage cluster identifier string. Replace EXPORT_ID with the export identifier as an integer. Example Status Codes 200 OK - Okay. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. 
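The following curl sketch is illustrative only and shows how the export endpoints above can be called. It assumes the dashboard is reachable at https://ceph-mgr.example.com:8443, that TOKEN holds a bearer token previously obtained from the dashboard authentication endpoint, and that the versioned Accept header matches your release; adjust these placeholders for your environment.
# List all NFS Ganesha exports through the dashboard REST API (placeholder host and token).
curl -k -s -X GET "https://ceph-mgr.example.com:8443/api/nfs-ganesha/export" \
     -H "Accept: application/vnd.ceph.api.v1.0+json" \
     -H "Authorization: Bearer ${TOKEN}"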
GET /api/nfs-ganesha/status Description View the status information for the NFS Ganesha management feature. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Additional Resources See the Ceph RESTful API chapter in the Red Hat Ceph Storage Developer Guide for more details. See the Exporting the Namespace to NFS-Ganesha section in the Red Hat Ceph Storage Object Gateway Guide for more information. A.20. Ceph Orchestrator The method reference for using the Ceph RESTful API orchestrator endpoint to display the Ceph Orchestrator status. GET /api/orchestrator/status Description Display the Ceph Orchestrator status. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Additional Resources See the Ceph RESTful API chapter in the Red Hat Ceph Storage Developer Guide for more details. A.21. Pools The method reference for using the Ceph RESTful API pool endpoint to manage the storage pools. GET /api/pool Description Display the pool list. Parameters Queries: attrs - A string value of pool attributes. stats - A boolean value for pool statistics. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/pool Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. DELETE /api/pool/ POOL_NAME Parameters Replace POOL_NAME with the name of the pool. Status Codes 202 Accepted - Operation is still executing. Please check the task queue. 204 No Content - Resource deleted. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/pool/ POOL_NAME Parameters Replace POOL_NAME with the name of the pool. Queries: attrs - A string value of pool attributes. stats - A boolean value for pool statistics. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. 
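As a hedged example of the pool queries described above, the following curl sketch retrieves a single pool together with its statistics by setting the stats query. The host name, port, pool name, and TOKEN variable are placeholders for your environment.
# Retrieve details and statistics for one pool (placeholder pool name "rbd").
curl -k -s -X GET "https://ceph-mgr.example.com:8443/api/pool/rbd?stats=true" \
     -H "Accept: application/vnd.ceph.api.v1.0+json" \
     -H "Authorization: Bearer ${TOKEN}"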
PUT /api/pool/ POOL_NAME Parameters Replace POOL_NAME with the name of the pool. Example Status Codes 200 OK - Okay. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/pool/ POOL_NAME /configuration Parameters Replace POOL_NAME with the name of the pool. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Additional Resources See the Ceph RESTful API chapter in the Red Hat Ceph Storage Developer Guide for more details. A.22. Prometheus The method reference for using the Ceph RESTful API prometheus endpoint to manage Prometheus. GET /api/prometheus Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/prometheus/rules Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/prometheus/silence Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. DELETE /api/prometheus/silence/ S_ID Parameters Replace S_ID with a string value. Status Codes 202 Accepted - Operation is still executing. Please check the task queue. 204 No Content - Resource deleted. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/prometheus/silences Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/prometheus/notifications Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 
500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Additional Resources See the Ceph RESTful API chapter in the Red Hat Ceph Storage Developer Guide for more details. A.23. RADOS block device The method reference for using the Ceph RESTful API block endpoint to manage RADOS block devices (RBD). This reference includes all available RBD feature endpoints, such as: RBD Namespace RBD Snapshots RBD Trash RBD Mirroring RBD Mirroring Summary RBD Mirroring Pool Bootstrap RBD Mirroring Pool Mode RBD Mirroring Pool Peer RBD Images GET /api/block/image Description View the RBD images. Parameters Queries: pool_name - The pool name as a string. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/block/image Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/block/image/clone_format_version Description Returns the RBD clone format version. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/block/image/default_features Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. DELETE /api/block/image/ IMAGE_SPEC Parameters Replace IMAGE_SPEC with the image name as a string value. Status Codes 202 Accepted - Operation is still executing. Please check the task queue. 204 No Content - Resource deleted. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/block/image/ IMAGE_SPEC Parameters Replace IMAGE_SPEC with the image name as a string value. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first.
403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. PUT /api/block/image/ IMAGE_SPEC Parameters Replace IMAGE_SPEC with the image name as a string value. Example Status Codes 200 OK - Okay. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/block/image/ IMAGE_SPEC /copy Parameters Replace IMAGE_SPEC with the image name as a string value. Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/block/image/ IMAGE_SPEC /flatten Parameters Replace IMAGE_SPEC with the image name as a string value. Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/block/image/ IMAGE_SPEC /move_trash Description Move an image to the trash. Images actively in-use by clones can be moved to the trash, and deleted at a later time. Parameters Replace IMAGE_SPEC with the image name as a string value. Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. RBD Mirroring GET /api/block/mirroring/site_name Description Display the RBD mirroring site name. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. PUT /api/block/mirroring/site_name Example Status Codes 200 OK - Okay. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. RBD Mirroring Pool Bootstrap POST /api/block/mirroring/pool/ POOL_NAME /bootstrap/peer Parameters Replace POOL_NAME with the name of the pool as a string. 
Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/block/mirroring/pool/ POOL_NAME /bootstrap/token Parameters Replace POOL_NAME with the name of the pool as a string. Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. RBD Mirroring Pool Mode GET /api/block/mirroring/pool/ POOL_NAME Description Display the RBD mirroring summary. Parameters Replace POOL_NAME with the name of the pool as a string. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. PUT /api/block/mirroring/pool/ POOL_NAME Parameters Replace POOL_NAME with the name of the pool as a string. Example Status Codes 200 OK - Okay. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. RBD Mirroring Pool Peer GET /api/block/mirroring/pool/ POOL_NAME /peer Parameters Replace POOL_NAME with the name of the pool as a string. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/block/mirroring/pool/ POOL_NAME /peer Parameters Replace POOL_NAME with the name of the pool as a string. Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. DELETE /api/block/mirroring/pool/ POOL_NAME /peer/ PEER_UUID Parameters Replace POOL_NAME with the name of the pool as a string. Replace PEER_UUID with the UUID of the peer as a string. Status Codes 202 Accepted - Operation is still executing. Please check the task queue. 204 No Content - Resource deleted. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 
403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/block/mirroring/pool/ POOL_NAME /peer/ PEER_UUID Parameters Replace POOL_NAME with the name of the pool as a string. Replace PEER_UUID with the UUID of the peer as a string. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. PUT /api/block/mirroring/pool/ POOL_NAME /peer/ PEER_UUID Parameters Replace POOL_NAME with the name of the pool as a string. Replace PEER_UUID with the UUID of the peer as a string. Example Status Codes 200 OK - Okay. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. RBD Mirroring Summary GET /api/block/mirroring/summary Description Display the RBD mirroring summary. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. RBD Namespace GET /api/block/pool/ POOL_NAME /namespace Parameters Replace POOL_NAME with the name of the pool as a string. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/block/pool/ POOL_NAME /namespace Parameters Replace POOL_NAME with the name of the pool as a string. Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. DELETE /api/block/pool/ POOL_NAME /namespace/ NAMESPACE Parameters Replace POOL_NAME with the name of the pool as a string. Replace NAMESPACE with the namespace as a string. Status Codes 202 Accepted - Operation is still executing. Please check the task queue. 204 No Content - Resource deleted. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. RBD Snapshots POST /api/block/image/ IMAGE_SPEC /snap Parameters Replace IMAGE_SPEC with the image name as a string value. 
Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. DELETE /api/block/image/ IMAGE_SPEC /snap/ SNAPSHOT_NAME Parameters Replace IMAGE_SPEC with the image name as a string value. Replace SNAPSHOT_NAME with the name of the snapshot as a string value. Status Codes 202 Accepted - Operation is still executing. Please check the task queue. 204 No Content - Resource deleted. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. PUT /api/block/image/ IMAGE_SPEC /snap/ SNAPSHOT_NAME Parameters Replace IMAGE_SPEC with the image name as a string value. Replace SNAPSHOT_NAME with the name of the snapshot as a string value. Example Status Codes 200 OK - Okay. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/block/image/ IMAGE_SPEC /snap/ SNAPSHOT_NAME /clone Description Clones a snapshot to an image. Parameters Replace IMAGE_SPEC with the image name as a string value. Replace SNAPSHOT_NAME with the name of the snapshot as a string value. Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/block/image/ IMAGE_SPEC /snap/ SNAPSHOT_NAME /rollback Parameters Replace IMAGE_SPEC with the image name as a string value. Replace SNAPSHOT_NAME with the name of the snapshot as a string value. Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. RBD Trash GET /api/block/image/trash Description Display all the RBD trash entries, or the RBD trash details by pool name. Parameters Queries: pool_name - The name of the pool as a string value. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. 
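For reference, a minimal curl sketch of the trash listing endpoint above is shown here; it is not the authoritative syntax for this guide. The host name, pool name, and TOKEN variable are placeholders, with the token assumed to come from the dashboard authentication endpoint.
# Display the RBD trash entries for a single pool (placeholder pool name "rbd").
curl -k -s -X GET "https://ceph-mgr.example.com:8443/api/block/image/trash?pool_name=rbd" \
     -H "Accept: application/vnd.ceph.api.v1.0+json" \
     -H "Authorization: Bearer ${TOKEN}"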
POST /api/block/image/trash/purge Description Remove all the expired images from trash. Parameters Queries: pool_name - The name of the pool as a string value. Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. DELETE /api/block/image/trash/ IMAGE_ID_SPEC Description Deletes an image from the trash. If the image deferment time has not expired, you cannot delete it unless you use force . An image that is actively in use by clones, or that has snapshots, cannot be deleted. Parameters Replace IMAGE_ID_SPEC with the image name as a string value. Queries: force - A boolean value to force the deletion of an image from trash. Status Codes 202 Accepted - Operation is still executing. Please check the task queue. 204 No Content - Resource deleted. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/block/image/trash/ IMAGE_ID_SPEC /restore Description Restores an image from the trash. Parameters Replace IMAGE_ID_SPEC with the image name as a string value. Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Additional Resources See the Ceph RESTful API chapter in the Red Hat Ceph Storage Developer Guide for more details. A.24. Performance counters The method reference for using the Ceph RESTful API perf_counters endpoint to display the various Ceph performance counters. This reference includes all available performance counter endpoints, such as: Ceph Metadata Server (MDS) Ceph Manager Ceph Monitor Ceph OSD Ceph Object Gateway Ceph RADOS Block Device (RBD) Mirroring TCMU Runner GET /api/perf_counters Description Displays the performance counters. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Ceph Metadata Server GET /api/perf_counters/mds/ SERVICE_ID Parameters Replace SERVICE_ID with the required service identifier as a string. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace.
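A hedged curl sketch of the MDS performance counter endpoint above follows; the service identifier "a", the host name, and the TOKEN variable are placeholders only, and the versioned Accept header may differ between releases.
# Fetch the performance counters for one MDS daemon (placeholder service identifier "a").
curl -k -s -X GET "https://ceph-mgr.example.com:8443/api/perf_counters/mds/a" \
     -H "Accept: application/vnd.ceph.api.v1.0+json" \
     -H "Authorization: Bearer ${TOKEN}"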
Ceph Manager GET /api/perf_counters/mgr/ SERVICE_ID Parameters Replace SERVICE_ID with the required service identifier as a string. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Ceph Monitor GET /api/perf_counters/mon/ SERVICE_ID Parameters Replace SERVICE_ID with the required service identifier as a string. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Ceph OSD GET /api/perf_counters/osd/ SERVICE_ID Parameters Replace SERVICE_ID with the required service identifier as a string. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Ceph RADOS Block Device (RBD) Mirroring GET /api/perf_counters/rbd-mirror/ SERVICE_ID Parameters Replace SERVICE_ID with the required service identifier as a string. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Ceph Object Gateway GET /api/perf_counters/rgw/ SERVICE_ID Parameters Replace SERVICE_ID with the required service identifier as a string. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. TCMU Runner GET /api/perf_counters/tcmu-runner/ SERVICE_ID Parameters Replace SERVICE_ID with the required service identifier as a string. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Additional Resources See the Ceph RESTful API chapter in the Red Hat Ceph Storage Developer Guide for more details. A.25. Roles The method reference for using the Ceph RESTful API role endpoint to manage the various user roles in Ceph. GET /api/role Description Display the role list. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. 
Please check the response body for the stack trace. POST /api/role Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. DELETE /api/role/ NAME Parameters Replace NAME with the role name as a string. Status Codes 202 Accepted - Operation is still executing. Please check the task queue. 204 No Content - Resource deleted. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/role/ NAME Parameters Replace NAME with the role name as a string. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. PUT /api/role/ NAME Parameters Replace NAME with the role name as a string. Example Status Codes 200 OK - Okay. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/role/ NAME /clone Parameters Replace NAME with the role name as a string. Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Additional Resources See the Ceph RESTful API chapter in the Red Hat Ceph Storage Developer Guide for more details. A.26. Services The method reference for using the Ceph RESTful API service endpoint to manage the various Ceph services. GET /api/service Parameters Queries: service_name - The name of the service as a string. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/service Parameters service_spec - The service specification as a JSON file. service_name - The name of the service. Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 
403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/service/known_types Description Display a list of known service types. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. DELETE /api/service/ SERVICE_NAME Parameters Replace SERVICE_NAME with the name of the service as a string. Status Codes 202 Accepted - Operation is still executing. Please check the task queue. 204 No Content - Resource deleted. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/service/ SERVICE_NAME Parameters Replace SERVICE_NAME with the name of the service as a string. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/service/ SERVICE_NAME /daemons Parameters Replace SERVICE_NAME with the name of the service as a string. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Additional Resources See the Ceph RESTful API chapter in the Red Hat Ceph Storage Developer Guide for more details. A.27. Settings The method reference for using the Ceph RESTful API settings endpoint to manage the various Ceph settings. GET /api/settings Description Display the list of available options Parameters Queries: names - A comma-separated list of option names. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. PUT /api/settings Status Codes 200 OK - Okay. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. DELETE /api/settings/ NAME Parameters Replace NAME with the option name as a string. Status Codes 202 Accepted - Operation is still executing. Please check the task queue. 204 No Content - Resource deleted. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. 
Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/settings/ NAME Description Display the given option. Parameters Replace NAME with the option name as a string. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. PUT /api/settings/ NAME Parameters Replace NAME with the option name as a string. Example Status Codes 200 OK - Okay. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Additional Resources See the Ceph RESTful API chapter in the Red Hat Ceph Storage Developer Guide for more details. A.28. Ceph task The method reference for using the Ceph RESTful API task endpoint to display Ceph tasks. GET /api/task Description Display Ceph tasks. Parameters Queries: name - The name of the task. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Additional Resources See the Ceph RESTful API chapter in the Red Hat Ceph Storage Developer Guide for more details. A.29. Telemetry The method reference for using the Ceph RESTful API telemetry endpoint to manage data for the telemetry Ceph Manager module. PUT /api/telemetry Description Enables or disables the sending of collected data by the telemetry module. Parameters enable - A boolean value. license_name - A string value, such as, sharing-1-0 . Make sure the user is aware of and accepts the license for sharing telemetry data. Example Status Codes 200 OK - Okay. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/telemetry/report Description Display report data on Ceph and devices. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Additional Resources See the Ceph RESTful API chapter in the Red Hat Ceph Storage Developer Guide for more details. See the Activating and deactivating telemetry chapter in the Red Hat Ceph Storage Dashboard Guide for details about managing with the Ceph dashboard. A.30. 
Ceph users The method reference for using the Ceph RESTful API user endpoint to display Ceph user details and to manage Ceph user passwords. GET /api/user Description Display a list of users. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/user Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. DELETE /api/user/ USER_NAME Parameters Replace USER_NAME with the name of the user as a string. Status Codes 202 Accepted - Operation is still executing. Please check the task queue. 204 No Content - Resource deleted. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/user/ USER_NAME Parameters Replace USER_NAME with the name of the user as a string. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. PUT /api/user/ USER_NAME Parameters Replace USER_NAME with the name of the user as a string. Example Status Codes 200 OK - Okay. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/user/ USER_NAME /change_password Parameters Replace USER_NAME with the name of the user as a string. Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/user/validate_password Description Checks the password to see if it meets the password policy. Parameters password - The password to validate. username - Optional. The name of the user. old_password - Optional. The old password. Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 
403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Additional Resources See the Ceph RESTful API chapter in the Red Hat Ceph Storage Developer Guide for more details. | [
"GET /api/summary HTTP/1.1 Host: example.com",
"curl -i -k --location -X POST 'https://192.168.0.44:8443/api/auth' -H 'Accept: application/vnd.ceph.api.v1.0+json' -H 'Content-Type: application/json' --data '{\"password\": \"admin@123\", \"username\": \"admin\"}'",
"POST /api/auth HTTP/1.1 Host: example.com Content-Type: application/json { \"password\": \" STRING \", \"username\": \" STRING \" }",
"POST /api/auth/check?token= STRING HTTP/1.1 Host: example.com Content-Type: application/json { \"token\": \" STRING \" }",
"GET /api/cephfs HTTP/1.1 Host: example.com",
"GET /api/cephfs/ FS_ID HTTP/1.1 Host: example.com",
"GET /api/cephfs/ FS_ID /clients HTTP/1.1 Host: example.com",
"GET /api/cephfs/ FS_ID /get_root_directory HTTP/1.1 Host: example.com",
"GET /api/cephfs/ FS_ID /ls_dir HTTP/1.1 Host: example.com",
"GET /api/cephfs/ FS_ID /mds_counters HTTP/1.1 Host: example.com",
"GET /api/cephfs/ FS_ID /quota?path= STRING HTTP/1.1 Host: example.com",
"PUT /api/cephfs/ FS_ID /quota HTTP/1.1 Host: example.com Content-Type: application/json { \"max_bytes\": \" STRING \", \"max_files\": \" STRING \", \"path\": \" STRING \" }",
"POST /api/cephfs/ FS_ID /snapshot HTTP/1.1 Host: example.com Content-Type: application/json { \"name\": \" STRING \", \"path\": \" STRING \" }",
"POST /api/cephfs/ FS_ID /tree HTTP/1.1 Host: example.com Content-Type: application/json { \"path\": \" STRING \" }",
"GET /api/cluster_conf HTTP/1.1 Host: example.com",
"POST /api/cluster_conf HTTP/1.1 Host: example.com Content-Type: application/json { \"name\": \" STRING \", \"value\": \" STRING \" }",
"PUT /api/cluster_conf HTTP/1.1 Host: example.com Content-Type: application/json { \"options\": \" STRING \" }",
"GET /api/cluster_conf/filter HTTP/1.1 Host: example.com",
"GET /api/cluster_conf/ NAME HTTP/1.1 Host: example.com",
"GET /api/crush_rule HTTP/1.1 Host: example.com",
"POST /api/crush_rule HTTP/1.1 Host: example.com Content-Type: application/json { \"device_class\": \" STRING \", \"failure_domain\": \" STRING \", \"name\": \" STRING \", \"root\": \" STRING \" }",
"GET /api/crush_rule/ NAME HTTP/1.1 Host: example.com",
"GET /api/erasure_code_profile HTTP/1.1 Host: example.com",
"POST /api/erasure_code_profile HTTP/1.1 Host: example.com Content-Type: application/json { \"name\": \" STRING \" }",
"GET /api/erasure_code_profile/ NAME HTTP/1.1 Host: example.com",
"GET /api/feature_toggles HTTP/1.1 Host: example.com",
"GET /api/grafana/url HTTP/1.1 Host: example.com",
"GET /api/grafana/validation/ PARAMS HTTP/1.1 Host: example.com",
"GET /api/health/full HTTP/1.1 Host: example.com",
"GET /api/health/minimal HTTP/1.1 Host: example.com",
"GET /api/host HTTP/1.1 Host: example.com",
"POST /api/host HTTP/1.1 Host: example.com Content-Type: application/json { \"hostname\": \" STRING \", \"status\": \" STRING \" }",
"GET /api/host/ HOST_NAME HTTP/1.1 Host: example.com",
"PUT /api/host/ HOST_NAME HTTP/1.1 Host: example.com Content-Type: application/json { \"force\": true, \"labels\": [ \" STRING \" ], \"maintenance\": true, \"update_labels\": true }",
"GET /api/host/ HOST_NAME /daemons HTTP/1.1 Host: example.com",
"GET /api/host/ HOST_NAME /devices HTTP/1.1 Host: example.com",
"POST /api/host/ HOST_NAME /identify_device HTTP/1.1 Host: example.com Content-Type: application/json { \"device\": \" STRING \", \"duration\": \" STRING \" }",
"GET /api/host/ HOST_NAME /inventory HTTP/1.1 Host: example.com",
"GET /api/host/ HOST_NAME /smart HTTP/1.1 Host: example.com",
"GET /api/iscsi/discoveryauth HTTP/1.1 Host: example.com",
"PUT /api/iscsi/discoveryauth?user= STRING &password= STRING &mutual_user= STRING &mutual_password= STRING HTTP/1.1 Host: example.com Content-Type: application/json { \"mutual_password\": \" STRING \", \"mutual_user\": \" STRING \", \"password\": \" STRING \", \"user\": \" STRING \" }",
"GET /api/iscsi/target HTTP/1.1 Host: example.com",
"POST /api/iscsi/target HTTP/1.1 Host: example.com Content-Type: application/json { \"acl_enabled\": \" STRING \", \"auth\": \" STRING \", \"clients\": \" STRING \", \"disks\": \" STRING \", \"groups\": \" STRING \", \"portals\": \" STRING \", \"target_controls\": \" STRING \", \"target_iqn\": \" STRING \" }",
"GET /api/iscsi/target/ TARGET_IQN HTTP/1.1 Host: example.com",
"PUT /api/iscsi/target/ TARGET_IQN HTTP/1.1 Host: example.com Content-Type: application/json { \"acl_enabled\": \" STRING \", \"auth\": \" STRING \", \"clients\": \" STRING \", \"disks\": \" STRING \", \"groups\": \" STRING \", \"new_target_iqn\": \" STRING \", \"portals\": \" STRING \", \"target_controls\": \" STRING \" }",
"GET /api/logs/all HTTP/1.1 Host: example.com",
"GET /api/mgr/module HTTP/1.1 Host: example.com",
"GET /api/mgr/module/ MODULE_NAME HTTP/1.1 Host: example.com",
"PUT /api/mgr/module/ MODULE_NAME HTTP/1.1 Host: example.com Content-Type: application/json { \"config\": \" STRING \" }",
"GET /api/mgr/module/ MODULE_NAME /options HTTP/1.1 Host: example.com",
"GET /api/monitor HTTP/1.1 Host: example.com",
"GET /api/osd HTTP/1.1 Host: example.com",
"POST /api/osd HTTP/1.1 Host: example.com Content-Type: application/json { \"data\": \" STRING \", \"method\": \" STRING \", \"tracking_id\": \" STRING \" }",
"GET /api/osd/flags HTTP/1.1 Host: example.com",
"PUT /api/osd/flags HTTP/1.1 Host: example.com Content-Type: application/json { \"flags\": [ \" STRING \" ] }",
"GET /api/osd/flags/individual HTTP/1.1 Host: example.com",
"PUT /api/osd/flags/individual HTTP/1.1 Host: example.com Content-Type: application/json { \"flags\": { \"nodown\": true, \"noin\": true, \"noout\": true, \"noup\": true }, \"ids\": [ 1 ] }",
"GET /api/osd/safe_to_delete?svc_ids= STRING HTTP/1.1 Host: example.com",
"GET /api/osd/safe_to_destroy?ids= STRING HTTP/1.1 Host: example.com",
"GET /api/osd/ SVC_ID HTTP/1.1 Host: example.com",
"PUT /api/osd/ SVC_ID HTTP/1.1 Host: example.com Content-Type: application/json { \"device_class\": \" STRING \" }",
"GET /api/osd/ SVC_ID /devices HTTP/1.1 Host: example.com",
"GET /api/osd/ SVC_ID /histogram HTTP/1.1 Host: example.com",
"PUT /api/osd/ SVC_ID /mark HTTP/1.1 Host: example.com Content-Type: application/json { \"action\": \" STRING \" }",
"POST /api/osd/ SVC_ID /reweight HTTP/1.1 Host: example.com Content-Type: application/json { \"weight\": \" STRING \" }",
"POST /api/osd/ SVC_ID /scrub HTTP/1.1 Host: example.com Content-Type: application/json { \"deep\": true }",
"GET /api/osd/ SVC_ID /smart HTTP/1.1 Host: example.com",
"GET /api/rgw/status HTTP/1.1 Host: example.com",
"GET /api/rgw/daemon HTTP/1.1 Host: example.com",
"GET /api/rgw/daemon/ SVC_ID HTTP/1.1 Host: example.com",
"GET /api/rgw/site HTTP/1.1 Host: example.com",
"GET /api/rgw/bucket HTTP/1.1 Host: example.com",
"POST /api/rgw/bucket HTTP/1.1 Host: example.com Content-Type: application/json { \"bucket\": \" STRING \", \"daemon_name\": \" STRING \", \"lock_enabled\": \"false\", \"lock_mode\": \" STRING \", \"lock_retention_period_days\": \" STRING \", \"lock_retention_period_years\": \" STRING \", \"placement_target\": \" STRING \", \"uid\": \" STRING \", \"zonegroup\": \" STRING \" }",
"GET /api/rgw/bucket/ BUCKET HTTP/1.1 Host: example.com",
"PUT /api/rgw/bucket/ BUCKET HTTP/1.1 Host: example.com Content-Type: application/json { \"bucket_id\": \" STRING \", \"daemon_name\": \" STRING \", \"lock_mode\": \" STRING \", \"lock_retention_period_days\": \" STRING \", \"lock_retention_period_years\": \" STRING \", \"mfa_delete\": \" STRING \", \"mfa_token_pin\": \" STRING \", \"mfa_token_serial\": \" STRING \", \"uid\": \" STRING \", \"versioning_state\": \" STRING \" }",
"GET /api/rgw/user HTTP/1.1 Host: example.com",
"POST /api/rgw/user HTTP/1.1 Host: example.com Content-Type: application/json { \"access_key\": \" STRING \", \"daemon_name\": \" STRING \", \"display_name\": \" STRING \", \"email\": \" STRING \", \"generate_key\": \" STRING \", \"max_buckets\": \" STRING \", \"secret_key\": \" STRING \", \"suspended\": \" STRING \", \"uid\": \" STRING \" }",
"GET /api/rgw/user/get_emails HTTP/1.1 Host: example.com",
"GET /api/rgw/user/ UID HTTP/1.1 Host: example.com",
"PUT /api/rgw/user/ UID HTTP/1.1 Host: example.com Content-Type: application/json { \"daemon_name\": \" STRING \", \"display_name\": \" STRING \", \"email\": \" STRING \", \"max_buckets\": \" STRING \", \"suspended\": \" STRING \" }",
"POST /api/rgw/user/ UID /capability HTTP/1.1 Host: example.com Content-Type: application/json { \"daemon_name\": \" STRING \", \"perm\": \" STRING \", \"type\": \" STRING \" }",
"POST /api/rgw/user/ UID /key HTTP/1.1 Host: example.com Content-Type: application/json { \"access_key\": \" STRING \", \"daemon_name\": \" STRING \", \"generate_key\": \"true\", \"key_type\": \"s3\", \"secret_key\": \" STRING \", \"subuser\": \" STRING \" }",
"GET /api/rgw/user/ UID /quota HTTP/1.1 Host: example.com",
"PUT /api/rgw/user/ UID /quota HTTP/1.1 Host: example.com Content-Type: application/json { \"daemon_name\": \" STRING \", \"enabled\": \" STRING \", \"max_objects\": \" STRING \", \"max_size_kb\": 1, \"quota_type\": \" STRING \" }",
"POST /api/rgw/user/ UID /subuser HTTP/1.1 Host: example.com Content-Type: application/json { \"access\": \" STRING \", \"access_key\": \" STRING \", \"daemon_name\": \" STRING \", \"generate_secret\": \"true\", \"key_type\": \"s3\", \"secret_key\": \" STRING \", \"subuser\": \" STRING \" }",
"radosgw-admin --uid TESTER --display-name \"TestUser\" --access_key TESTER --secret test123 user create radosgw-admin caps add --uid=\"TESTER\" --caps=\"roles=*\"",
"POST \"<hostname>?Action=CreateRole&RoleName= ROLE_NAME &Path= PATH_TO_FILE &AssumeRolePolicyDocument= TRUST_RELATIONSHIP_POLICY_DOCUMENT \"",
"POST \"<hostname>?Action=CreateRole&RoleName=S3Access&Path=/application_abc/component_xyz/&AssumeRolePolicyDocument={\"Version\":\"2022-06-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"AWS\":[\"arn:aws:iam:::user/TESTER\"]},\"Action\":[\"sts:AssumeRole\"]}]}\"",
"<role> <id>8f41f4e0-7094-4dc0-ac20-074a881ccbc5</id> <name>S3Access</name> <path>/application_abc/component_xyz/</path> <arn>arn:aws:iam:::role/application_abc/component_xyz/S3Access</arn> <create_date>2022-06-23T07:43:42.811Z</create_date> <max_session_duration>3600</max_session_duration> <assume_role_policy_document>{\"Version\":\"2022-06-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"AWS\":[\"arn:aws:iam:::user/TESTER\"]},\"Action\":[\"sts:AssumeRole\"]}]}</assume_role_policy_document> </role>",
"POST \"<hostname>?Action=GetRole&RoleName= ROLE_NAME \"",
"POST \"<hostname>?Action=GetRole&RoleName=S3Access\"",
"<role> <id>8f41f4e0-7094-4dc0-ac20-074a881ccbc5</id> <name>S3Access</name> <path>/application_abc/component_xyz/</path> <arn>arn:aws:iam:::role/application_abc/component_xyz/S3Access</arn> <create_date>2022-06-23T07:43:42.811Z</create_date> <max_session_duration>3600</max_session_duration> <assume_role_policy_document>{\"Version\":\"2022-06-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"AWS\":[\"arn:aws:iam:::user/TESTER\"]},\"Action\":[\"sts:AssumeRole\"]}]}</assume_role_policy_document> </role>",
"POST \"<hostname>?Action=GetRole&RoleName= ROLE_NAME &PathPrefix= PATH_PREFIX \"",
"POST \"<hostname>?Action=ListRoles&RoleName=S3Access&PathPrefix=/application\"",
"<role> <id>8f41f4e0-7094-4dc0-ac20-074a881ccbc5</id> <name>S3Access</name> <path>/application_abc/component_xyz/</path> <arn>arn:aws:iam:::role/application_abc/component_xyz/S3Access</arn> <create_date>2022-06-23T07:43:42.811Z</create_date> <max_session_duration>3600</max_session_duration> <assume_role_policy_document>{\"Version\":\"2022-06-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"AWS\":[\"arn:aws:iam:::user/TESTER\"]},\"Action\":[\"sts:AssumeRole\"]}]}</assume_role_policy_document> </role>",
"POST \"<hostname>?Action=UpdateAssumeRolePolicy&RoleName= ROLE_NAME &PolicyDocument= TRUST_RELATIONSHIP_POLICY_DOCUMENT \"",
"POST \"<hostname>?Action=UpdateAssumeRolePolicy&RoleName=S3Access&PolicyDocument={\"Version\":\"2022-06-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"AWS\":[\"arn:aws:iam:::user/TESTER2\"]},\"Action\":[\"sts:AssumeRole\"]}]}\"",
"POST \"<hostname>?Action=PutRolePolicy&RoleName= ROLE_NAME &PolicyName= POLICY_NAME &PolicyDocument= TRUST_RELATIONSHIP_POLICY_DOCUMENT \"",
"POST \"<hostname>?Action=PutRolePolicy&RoleName=S3Access&PolicyName=Policy1&PolicyDocument={\"Version\":\"2022-06-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Action\":[\"s3:CreateBucket\"],\"Resource\":\"arn:aws:s3:::example_bucket\"}]}\"",
"POST \"<hostname>?Action=ListRolePolicies&RoleName= ROLE_NAME \"",
"POST \"<hostname>?Action=ListRolePolicies&RoleName=S3Access\" <PolicyNames> <member>Policy1</member> </PolicyNames>",
"POST \"<hostname>?Action=GetRolePolicy&RoleName= ROLE_NAME &PolicyName= POLICY_NAME \"",
"POST \"<hostname>?Action=GetRolePolicy&RoleName=S3Access&PolicyName=Policy1\" <GetRolePolicyResult> <PolicyName>Policy1</PolicyName> <RoleName>S3Access</RoleName> <Permission_policy>{\"Version\":\"2022-06-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Action\":[\"s3:CreateBucket\"],\"Resource\":\"arn:aws:s3:::example_bucket\"}]}</Permission_policy> </GetRolePolicyResult>",
"POST \"hostname>?Action=DeleteRolePolicy&RoleName= ROLE_NAME &PolicyName= POLICY_NAME \"",
"POST \"<hostname>?Action=DeleteRolePolicy&RoleName=S3Access&PolicyName=Policy1\"",
"POST \"<hostname>?Action=DeleteRole&RoleName= ROLE_NAME \"",
"POST \"<hostname>?Action=DeleteRole&RoleName=S3Access\"",
"GET /api/nfs-ganesha/daemon HTTP/1.1 Host: example.com",
"GET /api/nfs-ganesha/export HTTP/1.1 Host: example.com",
"POST /api/nfs-ganesha/export HTTP/1.1 Host: example.com Content-Type: application/json { \"access_type\": \" STRING \", \"clients\": [ { \"access_type\": \" STRING \", \"addresses\": [ \" STRING \" ], \"squash\": \" STRING \" } ], \"cluster_id\": \" STRING \", \"daemons\": [ \" STRING \" ], \"fsal\": { \"filesystem\": \" STRING \", \"name\": \" STRING \", \"rgw_user_id\": \" STRING \", \"sec_label_xattr\": \" STRING \", \"user_id\": \" STRING \" }, \"path\": \" STRING \", \"protocols\": [ 1 ], \"pseudo\": \" STRING \", \"reload_daemons\": true, \"security_label\": \" STRING \", \"squash\": \" STRING \", \"tag\": \" STRING \", \"transports\": [ \" STRING \" ] }",
"GET /api/nfs-ganesha/export/ CLUSTER_ID / EXPORT_ID HTTP/1.1 Host: example.com",
"PUT /api/nfs-ganesha/export/ CLUSTER_ID / EXPORT_ID HTTP/1.1 Host: example.com Content-Type: application/json { \"access_type\": \" STRING \", \"clients\": [ { \"access_type\": \" STRING \", \"addresses\": [ \" STRING \" ], \"squash\": \" STRING \" } ], \"daemons\": [ \" STRING \" ], \"fsal\": { \"filesystem\": \" STRING \", \"name\": \" STRING \", \"rgw_user_id\": \" STRING \", \"sec_label_xattr\": \" STRING \", \"user_id\": \" STRING \" }, \"path\": \" STRING \", \"protocols\": [ 1 ], \"pseudo\": \" STRING \", \"reload_daemons\": true, \"security_label\": \" STRING \", \"squash\": \" STRING \", \"tag\": \" STRING \", \"transports\": [ \" STRING \" ] }",
"GET /api/nfs-ganesha/status HTTP/1.1 Host: example.com",
"GET /api/orchestrator/status HTTP/1.1 Host: example.com",
"GET /api/pool HTTP/1.1 Host: example.com",
"POST /api/pool HTTP/1.1 Host: example.com Content-Type: application/json { \"application_metadata\": \" STRING \", \"configuration\": \" STRING \", \"erasure_code_profile\": \" STRING \", \"flags\": \" STRING \", \"pg_num\": 1, \"pool\": \" STRING \", \"pool_type\": \" STRING \", \"rule_name\": \" STRING \" }",
"GET /api/pool/ POOL_NAME HTTP/1.1 Host: example.com",
"PUT /api/pool/ POOL_NAME HTTP/1.1 Host: example.com Content-Type: application/json { \"application_metadata\": \" STRING \", \"configuration\": \" STRING \", \"flags\": \" STRING \" }",
"GET /api/pool/ POOL_NAME /configuration HTTP/1.1 Host: example.com",
"GET /api/prometheus/rules HTTP/1.1 Host: example.com",
"GET /api/prometheus/rules HTTP/1.1 Host: example.com",
"GET /api/prometheus/silences HTTP/1.1 Host: example.com",
"GET /api/prometheus/notifications HTTP/1.1 Host: example.com",
"GET /api/block/image HTTP/1.1 Host: example.com",
"POST /api/block/image HTTP/1.1 Host: example.com Content-Type: application/json { \"configuration\": \" STRING \", \"data_pool\": \" STRING \", \"features\": \" STRING \", \"name\": \" STRING \", \"namespace\": \" STRING \", \"obj_size\": 1, \"pool_name\": \" STRING \", \"size\": 1, \"stripe_count\": 1, \"stripe_unit\": \" STRING \" }",
"GET /api/block/image/clone_format_version HTTP/1.1 Host: example.com",
"GET /api/block/image/default_features HTTP/1.1 Host: example.com",
"GET /api/block/image/default_features HTTP/1.1 Host: example.com",
"GET /api/block/image/ IMAGE_SPEC HTTP/1.1 Host: example.com",
"PUT /api/block/image/ IMAGE_SPEC HTTP/1.1 Host: example.com Content-Type: application/json { \"configuration\": \" STRING \", \"features\": \" STRING \", \"name\": \" STRING \", \"size\": 1 }",
"POST /api/block/image/ IMAGE_SPEC /copy HTTP/1.1 Host: example.com Content-Type: application/json { \"configuration\": \" STRING \", \"data_pool\": \" STRING \", \"dest_image_name\": \" STRING \", \"dest_namespace\": \" STRING \", \"dest_pool_name\": \" STRING \", \"features\": \" STRING \", \"obj_size\": 1, \"snapshot_name\": \" STRING \", \"stripe_count\": 1, \"stripe_unit\": \" STRING \" }",
"POST /api/block/image/ IMAGE_SPEC /move_trash HTTP/1.1 Host: example.com Content-Type: application/json { \"delay\": 1 }",
"GET /api/block/mirroring/site_name HTTP/1.1 Host: example.com",
"PUT /api/block/mirroring/site_name HTTP/1.1 Host: example.com Content-Type: application/json { \"site_name\": \" STRING \" }",
"POST /api/block/mirroring/pool/ POOL_NAME /bootstrap/peer HTTP/1.1 Host: example.com Content-Type: application/json { \"direction\": \" STRING \", \"token\": \" STRING \" }",
"GET /api/block/mirroring/pool/ POOL_NAME HTTP/1.1 Host: example.com",
"PUT /api/block/mirroring/pool/ POOL_NAME HTTP/1.1 Host: example.com Content-Type: application/json { \"mirror_mode\": \" STRING \" }",
"GET /api/block/mirroring/pool/ POOL_NAME /peer HTTP/1.1 Host: example.com",
"POST /api/block/mirroring/pool/ POOL_NAME /peer HTTP/1.1 Host: example.com Content-Type: application/json { \"client_id\": \" STRING \", \"cluster_name\": \" STRING \", \"key\": \" STRING \", \"mon_host\": \" STRING \" }",
"GET /api/block/mirroring/pool/ POOL_NAME /peer/ PEER_UUID HTTP/1.1 Host: example.com",
"PUT /api/block/mirroring/pool/ POOL_NAME /peer/ PEER_UUID HTTP/1.1 Host: example.com Content-Type: application/json { \"client_id\": \" STRING \", \"cluster_name\": \" STRING \", \"key\": \" STRING \", \"mon_host\": \" STRING \" }",
"GET /api/block/mirroring/summary HTTP/1.1 Host: example.com",
"GET /api/block/pool/ POOL_NAME /namespace HTTP/1.1 Host: example.com",
"POST /api/block/pool/ POOL_NAME /namespace HTTP/1.1 Host: example.com Content-Type: application/json { \"namespace\": \" STRING \" }",
"POST /api/block/image/ IMAGE_SPEC /snap HTTP/1.1 Host: example.com Content-Type: application/json { \"snapshot_name\": \" STRING \" }",
"PUT /api/block/image/ IMAGE_SPEC /snap/ SNAPSHOT_NAME HTTP/1.1 Host: example.com Content-Type: application/json { \"is_protected\": true, \"new_snap_name\": \" STRING \" }",
"POST /api/block/image/ IMAGE_SPEC /snap/ SNAPSHOT_NAME /clone HTTP/1.1 Host: example.com Content-Type: application/json { \"child_image_name\": \" STRING \", \"child_namespace\": \" STRING \", \"child_pool_name\": \" STRING \", \"configuration\": \" STRING \", \"data_pool\": \" STRING \", \"features\": \" STRING \", \"obj_size\": 1, \"stripe_count\": 1, \"stripe_unit\": \" STRING \" }",
"GET /api/block/image/trash HTTP/1.1 Host: example.com",
"POST /api/block/image/trash/purge HTTP/1.1 Host: example.com Content-Type: application/json { \"pool_name\": \" STRING \" }",
"POST /api/block/image/trash/ IMAGE_ID_SPEC /restore HTTP/1.1 Host: example.com Content-Type: application/json { \"new_image_name\": \" STRING \" }",
"GET /api/perf_counters HTTP/1.1 Host: example.com",
"GET /api/perf_counters/mds/ SERVICE_ID HTTP/1.1 Host: example.com",
"GET /api/perf_counters/mgr/ SERVICE_ID HTTP/1.1 Host: example.com",
"GET /api/perf_counters/mon/ SERVICE_ID HTTP/1.1 Host: example.com",
"GET /api/perf_counters/osd/ SERVICE_ID HTTP/1.1 Host: example.com",
"GET /api/perf_counters/rbd-mirror/ SERVICE_ID HTTP/1.1 Host: example.com",
"GET /api/perf_counters/rgw/ SERVICE_ID HTTP/1.1 Host: example.com",
"GET /api/perf_counters/tcmu-runner/ SERVICE_ID HTTP/1.1 Host: example.com",
"GET /api/role HTTP/1.1 Host: example.com",
"POST /api/role HTTP/1.1 Host: example.com Content-Type: application/json { \"description\": \" STRING \", \"name\": \" STRING \", \"scopes_permissions\": \" STRING \" }",
"GET /api/role/ NAME HTTP/1.1 Host: example.com",
"PUT /api/role/ NAME HTTP/1.1 Host: example.com Content-Type: application/json { \"description\": \" STRING \", \"scopes_permissions\": \" STRING \" }",
"POST /api/role/ NAME /clone HTTP/1.1 Host: example.com Content-Type: application/json { \"new_name\": \" STRING \" }",
"GET /api/service HTTP/1.1 Host: example.com",
"POST /api/service HTTP/1.1 Host: example.com Content-Type: application/json { \"service_name\": \" STRING \", \"service_spec\": \" STRING \" }",
"GET /api/service/known_types HTTP/1.1 Host: example.com",
"GET /api/service/ SERVICE_NAME HTTP/1.1 Host: example.com",
"GET /api/service/ SERVICE_NAME /daemons HTTP/1.1 Host: example.com",
"GET /api/settings HTTP/1.1 Host: example.com",
"GET /api/settings/ NAME HTTP/1.1 Host: example.com",
"PUT /api/settings/ NAME HTTP/1.1 Host: example.com Content-Type: application/json { \"value\": \" STRING \" }",
"GET /api/task HTTP/1.1 Host: example.com",
"PUT /api/telemetry HTTP/1.1 Host: example.com Content-Type: application/json { \"enable\": true, \"license_name\": \" STRING \" }",
"GET /api/telemetry/report HTTP/1.1 Host: example.com",
"GET /api/user HTTP/1.1 Host: example.com",
"POST /api/user HTTP/1.1 Host: example.com Content-Type: application/json { \"email\": \" STRING \", \"enabled\": true, \"name\": \" STRING \", \"password\": \" STRING \", \"pwdExpirationDate\": \" STRING \", \"pwdUpdateRequired\": true, \"roles\": \" STRING \", \"username\": \" STRING \" }",
"GET /api/user/ USER_NAME HTTP/1.1 Host: example.com",
"PUT /api/user/ USER_NAME HTTP/1.1 Host: example.com Content-Type: application/json { \"email\": \" STRING \", \"enabled\": \" STRING \", \"name\": \" STRING \", \"password\": \" STRING \", \"pwdExpirationDate\": \" STRING \", \"pwdUpdateRequired\": true, \"roles\": \" STRING \" }",
"POST /api/user/ USER_NAME /change_password HTTP/1.1 Host: example.com Content-Type: application/json { \"new_password\": \" STRING \", \"old_password\": \" STRING \" }",
"POST /api/user/validate_password HTTP/1.1 Host: example.com Content-Type: application/json { \"old_password\": \" STRING \", \"password\": \" STRING \", \"username\": \" STRING \" }"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/developer_guide/the-ceph-restful-api-specifications |
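The Example placeholders in the appendix above correspond to the request strings collected in the commands column. As a minimal end-to-end sketch of calling the settings endpoint from section A.27 — the host, port, credentials, and the queried option name are assumptions taken from or modelled on the examples above, and jq is only used to extract the token from the auth response:
# Authenticate against the dashboard API and capture the session token
# (endpoint and credentials are example values).
TOKEN=$(curl -s -k -X POST 'https://192.168.0.44:8443/api/auth' \
  -H 'Accept: application/vnd.ceph.api.v1.0+json' \
  -H 'Content-Type: application/json' \
  --data '{"username": "admin", "password": "admin@123"}' | jq -r '.token')

# GET /api/settings with the optional comma-separated "names" query described
# in section A.27 (the option name is an assumed example).
curl -s -k -X GET 'https://192.168.0.44:8443/api/settings?names=GRAFANA_API_URL' \
  -H 'Accept: application/vnd.ceph.api.v1.0+json' \
  -H "Authorization: Bearer ${TOKEN}"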
Chapter 20. Glossary | Chapter 20. Glossary This glossary defines common terms that are used in the logging documentation. Annotation You can use annotations to attach metadata to objects. Red Hat OpenShift Logging Operator The Red Hat OpenShift Logging Operator provides a set of APIs to control the collection and forwarding of application, infrastructure, and audit logs. Custom resource (CR) A CR is an extension of the Kubernetes API. To configure the logging and log forwarding, you can customize the ClusterLogging and the ClusterLogForwarder custom resources. Event router The event router is a pod that watches OpenShift Container Platform events. It collects logs by using the logging. Fluentd Fluentd is a log collector that resides on each OpenShift Container Platform node. It gathers application, infrastructure, and audit logs and forwards them to different outputs. Garbage collection Garbage collection is the process of cleaning up cluster resources, such as terminated containers and images that are not referenced by any running pods. Elasticsearch Elasticsearch is a distributed search and analytics engine. OpenShift Container Platform uses Elasticsearch as a default log store for the logging. OpenShift Elasticsearch Operator The OpenShift Elasticsearch Operator is used to run an Elasticsearch cluster on OpenShift Container Platform. The OpenShift Elasticsearch Operator provides self-service for the Elasticsearch cluster operations and is used by the logging. Indexing Indexing is a data structure technique that is used to quickly locate and access data. Indexing optimizes the performance by minimizing the amount of disk access required when a query is processed. JSON logging The Log Forwarding API enables you to parse JSON logs into a structured object and forward them to either the logging managed Elasticsearch or any other third-party system supported by the Log Forwarding API. Kibana Kibana is a browser-based console interface to query, discover, and visualize your Elasticsearch data through histograms, line graphs, and pie charts. Kubernetes API server Kubernetes API server validates and configures data for the API objects. Labels Labels are key-value pairs that you can use to organize and select subsets of objects, such as a pod. Logging With the logging, you can aggregate application, infrastructure, and audit logs throughout your cluster. You can also store them to a default log store, forward them to third party systems, and query and visualize the stored logs in the default log store. Logging collector A logging collector collects logs from the cluster, formats them, and forwards them to the log store or third party systems. Log store A log store is used to store aggregated logs. You can use an internal log store or forward logs to external log stores. Log visualizer Log visualizer is the user interface (UI) component you can use to view information such as logs, graphs, charts, and other metrics. Node A node is a worker machine in the OpenShift Container Platform cluster. A node is either a virtual machine (VM) or a physical machine. Operators Operators are the preferred method of packaging, deploying, and managing a Kubernetes application in an OpenShift Container Platform cluster. An Operator takes human operational knowledge and encodes it into software that is packaged and shared with customers. Pod A pod is the smallest logical unit in Kubernetes. A pod consists of one or more containers and runs on a worker node. 
Role-based access control (RBAC) RBAC is a key security control to ensure that cluster users and workloads have access only to resources required to execute their roles. Shards Elasticsearch organizes log data from Fluentd into datastores, or indices, then subdivides each index into multiple pieces called shards. Taint Taints ensure that pods are scheduled onto appropriate nodes. You can apply one or more taints on a node. Toleration You can apply tolerations to pods. Tolerations allow the scheduler to schedule pods with matching taints. Web console A user interface (UI) to manage OpenShift Container Platform. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/logging/openshift-logging-common-terms |
Chapter 1. Logging in to Identity Management from the command line | Chapter 1. Logging in to Identity Management from the command line Identity Management (IdM) uses the Kerberos protocol to support single sign-on. Single sign-on means that the user enters the correct user name and password only once, and then accesses IdM services without the system prompting for the credentials again. Important In IdM, the System Security Services Daemon (SSSD) automatically obtains a ticket-granting ticket (TGT) for a user after the user successfully logs in to the desktop environment on an IdM client machine with the corresponding Kerberos principal name. This means that after logging in, the user is not required to use the kinit utility to access IdM resources. If you have cleared your Kerberos credential cache or your Kerberos TGT has expired, you need to request a Kerberos ticket manually to access IdM resources. The following sections present basic user operations when using Kerberos in IdM. 1.1. Using kinit to log in to IdM manually Follow this procedure to use the kinit utility to authenticate to an Identity Management (IdM) environment manually. The kinit utility obtains and caches a Kerberos ticket-granting ticket (TGT) on behalf of an IdM user. Note Only use this procedure if you have destroyed your initial Kerberos TGT or if it has expired. As an IdM user, when logging onto your local machine you are also automatically logging in to IdM. This means that after logging in, you are not required to use the kinit utility to access IdM resources. Procedure To log in to IdM Under the user name of the user who is currently logged in on the local system, use kinit without specifying a user name. For example, if you are logged in as example_user on the local system: If the user name of the local user does not match any user entry in IdM, the authentication attempt fails: Using a Kerberos principal that does not correspond to your local user name, pass the required user name to the kinit utility. For example, to log in as the admin user: Note Requesting user tickets using kinit -kt KDB: [email protected] is disabled. For more information, see the Why kinit -kt KDB: [email protected] no longer work after CVE-2024-3183 solution. Verification To verify that the login was successful, use the klist utility to display the cached TGT. In the following example, the cache contains a ticket for the example_user principal, which means that on this particular host, only example_user is currently allowed to access IdM services: 1.2. Destroying a user's active Kerberos ticket Follow this procedure to clear the credentials cache that contains the user's active Kerberos ticket. Procedure To destroy your Kerberos ticket: Verification To check that the Kerberos ticket has been destroyed: 1.3. Configuring an external system for Kerberos authentication Follow this procedure to configure an external system so that Identity Management (IdM) users can log in to IdM from the external system using their Kerberos credentials. Enabling Kerberos authentication on external systems is especially useful when your infrastructure includes multiple realms or overlapping domains. It is also useful if the system has not been enrolled into any IdM domain through ipa-client-install . To enable Kerberos authentication to IdM from a system that is not a member of the IdM domain, define an IdM-specific Kerberos configuration file on the external system. Prerequisites The krb5-workstation package is installed on the external system. 
To find out whether the package is installed, use the following CLI command: Procedure Copy the /etc/krb5.conf file from the IdM server to the external system. For example: Warning Do not overwrite the existing krb5.conf file on the external system. On the external system, set the terminal session to use the copied IdM Kerberos configuration file: The KRB5_CONFIG variable exists only temporarily until you log out. To prevent this loss, export the variable with a different file name. Copy the Kerberos configuration snippets from the /etc/krb5.conf.d/ directory to the external system. Users on the external system can now use the kinit utility to authenticate against the IdM server. 1.4. Additional resources krb5.conf(5) , kinit(1) , klist(1) , and kdestroy(1) man pages on your system | [
"[example_user@server ~]USD kinit Password for [email protected]: [example_user@server ~]USD",
"[example_user@server ~]USD kinit kinit: Client '[email protected]' not found in Kerberos database while getting initial credentials",
"[example_user@server ~]USD kinit admin Password for [email protected]: [example_user@server ~]USD",
"klist Ticket cache: KEYRING:persistent:0:0 Default principal: [email protected] Valid starting Expires Service principal 11/10/2019 08:35:45 11/10/2019 18:35:45 krbtgt/[email protected]",
"[example_user@server ~]USD kdestroy",
"[example_user@server ~]USD klist klist: Credentials cache keyring 'persistent:0:0' not found",
"dnf list installed krb5-workstation Installed Packages krb5-workstation.x86_64 1.16.1-19.el8 @BaseOS",
"scp /etc/krb5.conf root@ externalsystem.example.com :/etc/krb5_ipa.conf",
"export KRB5_CONFIG=/etc/krb5_ipa.conf"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/accessing_identity_management_services/logging-in-to-ipa-from-the-command-line_accessing-idm-services |
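Putting section 1.3 together on the external system — the copied configuration file name and the admin principal are the examples used in the chapter above, and the .bashrc step is an optional convenience that the chapter only hints at:
# Point the Kerberos client tools at the copied IdM configuration for this session.
export KRB5_CONFIG=/etc/krb5_ipa.conf

# Obtain a TGT as an IdM user and confirm that it is cached.
kinit admin
klist

# Optionally make the variable persistent across logins.
echo 'export KRB5_CONFIG=/etc/krb5_ipa.conf' >> ~/.bashrc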
Deploy Red Hat Quay - High Availability | Deploy Red Hat Quay - High Availability Red Hat Quay 3 Deploy Red Hat Quay HA Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_quay/3/html/deploy_red_hat_quay_-_high_availability/index |
Chapter 12. Optimizing routing | Chapter 12. Optimizing routing The OpenShift Container Platform HAProxy router scales to optimize performance. 12.1. Baseline Ingress Controller (router) performance The OpenShift Container Platform Ingress Controller, or router, is the Ingress point for all external traffic destined for OpenShift Container Platform services. When evaluating the performance of a single HAProxy router in terms of HTTP requests handled per second, the performance varies depending on many factors. In particular: HTTP keep-alive/close mode Route type TLS session resumption client support Number of concurrent connections per target route Number of target routes Back end server page size Underlying infrastructure (network/SDN solution, CPU, and so on) While performance in your specific environment will vary, Red Hat lab tests were run on a public cloud instance of size 4 vCPU/16GB RAM. A single HAProxy router handling 100 routes terminated by backends serving 1kB static pages is able to handle the following number of transactions per second. In HTTP keep-alive mode scenarios:

Encryption    LoadBalancerService    HostNetwork
none          21515                  29622
edge          16743                  22913
passthrough   36786                  53295
re-encrypt    21583                  25198

In HTTP close (no keep-alive) scenarios:

Encryption    LoadBalancerService    HostNetwork
none          5719                   8273
edge          2729                   4069
passthrough   4121                   5344
re-encrypt    2320                   2941

Default Ingress Controller configuration with ROUTER_THREADS=4 was used and two different endpoint publishing strategies (LoadBalancerService/HostNetwork) were tested. TLS session resumption was used for encrypted routes. With HTTP keep-alive, a single HAProxy router is capable of saturating a 1 Gbit NIC at page sizes as small as 8 kB. When running on bare metal with modern processors, you can expect roughly twice the performance of the public cloud instance above. This overhead is introduced by the virtualization layer in place on public clouds and holds mostly true for private cloud-based virtualization as well. The following table is a guide to how many applications to use behind the router:

Number of applications    Application type
5-10                      static file/web server or caching proxy
100-1000                  applications generating dynamic content

In general, HAProxy can support routes for 5 to 1000 applications, depending on the technology in use. Ingress Controller performance might be limited by the capabilities and performance of the applications behind it, such as language or static versus dynamic content. Ingress, or router, sharding should be used to serve more routes towards applications and help horizontally scale the routing tier. For more information on Ingress sharding, see Configuring Ingress Controller sharding by using route labels and Configuring Ingress Controller sharding by using namespace labels. 12.2. Ingress Controller (router) performance optimizations OpenShift Container Platform no longer supports modifying Ingress Controller deployments by setting environment variables such as ROUTER_THREADS, ROUTER_DEFAULT_TUNNEL_TIMEOUT, ROUTER_DEFAULT_CLIENT_TIMEOUT, ROUTER_DEFAULT_SERVER_TIMEOUT, and RELOAD_INTERVAL. You can modify the Ingress Controller deployment, but if the Ingress Operator is enabled, the configuration is overwritten. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/scalability_and_performance/routing-optimization
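As a rough sketch of the sharding approach referenced in the section above — the shard name, domain, replica count, and the type=sharded label are placeholder assumptions, not values from this chapter:
# Create an additional Ingress Controller (router shard) that only admits
# routes carrying the label type=sharded.
oc apply -f - <<'EOF'
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: sharded
  namespace: openshift-ingress-operator
spec:
  domain: sharded.apps.example.com
  replicas: 2
  routeSelector:
    matchLabels:
      type: sharded
EOF

# Move an existing route onto the shard by labelling it accordingly.
oc label route <route_name> -n <namespace> type=sharded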
function::execname | function::execname Name function::execname - Returns the execname of a target process (or group of processes) Synopsis Arguments None Description Returns the execname of a target process (or group of processes). | [
"execname:string()"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-execname |
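A small usage sketch of execname — the syscall.open probe point and its filename context variable come from the RHEL 7 syscall tapset, and the process name used in the filter is an arbitrary example:
# Print which process (by executable name and PID) opens which file.
stap -e 'probe syscall.open {
  if (execname() == "httpd")
    printf("%s (pid %d) opened %s\n", execname(), pid(), filename)
}'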
Chapter 3. Prerequisite | Chapter 3. Prerequisite For the solution to work, the following requirements must be met. All nodes must have the same: number of CPUs and RAM software configuration RHEL release firewall settings SAP HANA release (SAP HANA 2.0 SPS04 or later) The pacemaker packages are only installed on the cluster nodes and must use the same version of resource-agents-sap-hana (0.162.1 or later). To be able to support SAP HANA Multitarget System Replication , refer to Add SAP HANA Multitarget System Replication autoregister support . Also, set the following: use register_secondaries_on_takeover=true use log_mode=normal The initial setup is based on the installation guide, Automating SAP HANA Scale-Up System Replication using the RHEL HA Add-On . The system replication configuration of all SAP HANA instances is based on SAP requirements. For more information, refer to the guidelines from SAP based on the SAP HANA Administration Guide . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html/configuring_sap_hana_scale-up_multitarget_system_replication_for_disaster_recovery/asmb_preconditions_configuring-hana-scale-up-multitarget-system-replication-disaster-recovery |
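A quick way to verify the two system replication settings and the resource agent version listed above — the SID (RH1), and therefore the global.ini path, are assumptions that depend on your installation:
# Confirm log_mode and register_secondaries_on_takeover on every SAP HANA node.
grep -iE 'log_mode|register_secondaries_on_takeover' \
  /hana/shared/RH1/global/hdb/custom/config/global.ini

# Confirm that every cluster node runs the same resource-agents-sap-hana
# version (0.162.1 or later).
rpm -q resource-agents-sap-hana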
probe::sunrpc.clnt.create_client | probe::sunrpc.clnt.create_client Name probe::sunrpc.clnt.create_client - Create an RPC client Synopsis sunrpc.clnt.create_client Values servername the server machine name prot the IP protocol number authflavor the authentication flavor port the port number progname the RPC program name vers the RPC program version number prog the RPC program number | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-sunrpc-clnt-create-client |
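A usage sketch that prints the values documented above each time an RPC client is created; run it as root and leave it running while clients are being created:
stap -e 'probe sunrpc.clnt.create_client {
  printf("%s: server=%s prog=%d (%s) vers=%d proto=%d port=%d authflavor=%d\n",
         execname(), servername, prog, progname, vers, prot, port, authflavor)
}'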
8.4.2. The KSM Tuning Service | 8.4.2. The KSM Tuning Service The ksmtuned service fine-tunes the kernel same-page merging (KSM) configuration by looping and adjusting ksm . In addition, the ksmtuned service is notified by libvirt when a guest virtual machine is created or destroyed. The ksmtuned service has no options. The ksmtuned service can be tuned with the retune parameter, which instructs ksmtuned to run tuning functions manually. The /etc/ksmtuned.conf file is the configuration file for the ksmtuned service. The file output below is the default ksmtuned.conf file: Within the /etc/ksmtuned.conf file, npages sets how many pages ksm will scan before the ksmd daemon becomes inactive. This value will also be set in the /sys/kernel/mm/ksm/pages_to_scan file. The KSM_THRES_CONST value represents the amount of available memory used as a threshold to activate ksm . ksmd is activated if either of the following occurs: The amount of free memory drops below the threshold, set in KSM_THRES_CONST . The amount of committed memory plus the threshold, KSM_THRES_CONST , exceeds the total amount of memory. | [
"systemctl start ksmtuned Starting ksmtuned: [ OK ]",
"Configuration file for ksmtuned. How long ksmtuned should sleep between tuning adjustments KSM_MONITOR_INTERVAL=60 Millisecond sleep between ksm scans for 16Gb server. Smaller servers sleep more, bigger sleep less. KSM_SLEEP_MSEC=10 KSM_NPAGES_BOOST - is added to the `npages` value, when `free memory` is less than `thres`. KSM_NPAGES_BOOST=300 KSM_NPAGES_DECAY - is the value given is subtracted to the `npages` value, when `free memory` is greater than `thres`. KSM_NPAGES_DECAY=-50 KSM_NPAGES_MIN - is the lower limit for the `npages` value. KSM_NPAGES_MIN=64 KSM_NPAGES_MAX - is the upper limit for the `npages` value. KSM_NPAGES_MAX=1250 KSM_THRES_COEF - is the RAM percentage to be calculated in parameter `thres`. KSM_THRES_COEF=20 KSM_THRES_CONST - If this is a low memory system, and the `thres` value is less than `KSM_THRES_CONST`, then reset `thres` value to `KSM_THRES_CONST` value. KSM_THRES_CONST=2048 uncomment the following to enable ksmtuned debug information LOGFILE=/var/log/ksmtuned DEBUG=1"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_tuning_and_optimization_guide/sect-ksm-the_ksm_tuning_service |
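To see what the current thresholds translate into at runtime, the kernel's KSM counters can be read directly from sysfs; the final command assumes that the ksmtuned service script accepts the retune parameter described above:
# Show the page counters that ksmtuned adjusts and that ksmd reports.
grep -H '' /sys/kernel/mm/ksm/pages_to_scan \
           /sys/kernel/mm/ksm/pages_shared \
           /sys/kernel/mm/ksm/pages_sharing \
           /sys/kernel/mm/ksm/full_scans

# Re-run the tuning functions manually after editing /etc/ksmtuned.conf.
service ksmtuned retune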
Chapter 3. Configuring and deploying a multi-cell environment with routed networks | Chapter 3. Configuring and deploying a multi-cell environment with routed networks Important The content in this section is available in this release as a Technology Preview , and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information, see Technology Preview . To configure your Red Hat OpenStack (RHOSP) deployment to handle multiple cells with routed networks, you must perform the following tasks: Prepare the control plane for cell network routing on the overcloud stack. Extract parameter information from the control plane of the overcloud stack. Configure the cell network routing on the cell stacks. Create cell roles files for each stack. You can use the default Compute role as a base for the Compute nodes in a cell, and the dedicated CellController role as a base for the cell controller node. You can also create custom roles for use in your multi-cell environment. For more information on creating custom roles, see Composable services and custom roles . Designate a host for each custom role you create. Note This procedure is for an environment with a single control plane network. If your environment has multiple control plane networks, such as a spine leaf network environment, then you must also designate a host for each role in each leaf network so that you can tag nodes into each leaf. For more information, see Designating a role for leaf nodes . Configure each cell. Deploy each cell stack. 3.1. Prerequisites You have configured your undercloud for routed networks. For more information, see Configuring routed spine-leaf in the undercloud . 3.2. Preparing the control plane and default cell for cell network routing You must configure routes on the overcloud stack for the overcloud stack to communicate with the cells. To achieve this, create a network data file that defines all networks and subnets in the main stack, and use this file to deploy both the overcloud stack and the cell stacks. Procedure Log in to the undercloud as the stack user. Source the stackrc file: Create a new directory for the common stack configuration: Copy the default network_data_subnets_routed.yaml file to your common directory to add a composable network for your overcloud stack: For more information on composable networks, see Composable networks in the Director installation and usage guide. Update the configuration in /common/network_data_routed_multi_cell.yaml for your network, and update the cell subnet names for easy identification, for example, change internal_api_leaf1 to internal_api_cell1 . Ensure that the interfaces in the NIC template for each role include <network_name>InterfaceRoutes , for example: Add the network_data_routed_multi_cell.yaml file to the overcloud stack with your other environment files and deploy the overcloud: 3.3. Extracting parameter information from the overcloud stack control plane Extract parameter information from the first cell, named default , in the basic overcloud stack. Procedure Log in to the undercloud as the stack user. Source the stackrc file: Export the cell configuration and password information from the default cell in the overcloud stack to a new common environment file for the multi-cell deployment: This command exports the EndpointMap , HostsEntry , AllNodesConfig , GlobalConfig parameters, and the password information, to the common environment file. 
Tip If the environment file already exists, enter the command with the --force-overwrite or -f option. 3.4. Creating cell roles files for routed networks When each stack uses a different network, create a cell roles file for each cell stack that includes a custom cell role. Note You must create a flavor for each custom role. For more information, see Designating hosts for cell roles . Procedure Generate a new roles data file that includes the CellController role, along with other roles you need for the cell stack. The following example generates the roles data file cell1_roles_data.yaml , which includes the roles CellController and Compute : Add the HostnameFormatDefault to each role definition in your new cell roles file: Add the Networking service (neutron) DHCP and Metadata agents to the ComputeCell1 and CellControllerCell1 roles, if they are not already present: Add the subnets you configured in network_data_routed_multi_cell.yaml to the ComputeCell1 and CellControllerCell1 roles: 3.5. Designating hosts for cell roles To designate a bare-metal node for a cell role, you must configure the bare-metal node with a resource class to use to tag the node for the cell role. Perform the following procedure to create a bare-metal resource class for the cellcontrollercell1 role. Repeat this procedure for each custom role, by substituting the cell controller names with the name of your custom role. Note The following procedure applies to new overcloud nodes that have not yet been provisioned. To assign a resource class to an existing overcloud node that has already been provisioned, scale down the overcloud to unprovision the node, then scale up the overcloud to reprovision the node with the new resource class assignment. For more information, see Scaling overcloud nodes . Procedure Register the bare-metal node for the cellcontrollercell1 role by adding it to your node definition template: node.json or node.yaml . For more information, see Registering nodes for the overcloud in the Installing and managing Red Hat OpenStack Platform with director guide. Inspect the node hardware: For more information, see Creating an inventory of the bare-metal node hardware in the Installing and managing Red Hat OpenStack Platform with director guide. Retrieve a list of your nodes to identify their UUIDs: Tag each bare-metal node that you want to designate as a cell controller with a custom cell controller resource class: Replace <node> with the name or UUID of the bare-metal node. Add the cellcontrollercell1 role to your node definition file, overcloud-baremetal-deploy.yaml , and define any predictive node placements, resource classes, network topologies, or other attributes that you want to assign to your nodes: Replace <role_topology_file> with the name of the network topology file to use for the cellcontrollercell1 role, for example, cell1_controller_net_top.j2 . You can reuse an existing network topology or create a new custom network interface template for the role or cell. For more information, see Custom network interface templates in the Installing and managing Red Hat OpenStack Platform with director guide. To use the default network definition settings, do not include network_config in the role definition. For more information about the properties that you can use to configure node attributes in your node definition file, see Bare-metal node provisioning attributes . For an example node definition file, see Example node definition file . 
Provision the new nodes for your role: Optional: Replace <stack> with the name of the stack for which the bare-metal nodes are provisioned. Defaults to overcloud . Optional: Include the --network-config optional argument to provide the network definitions to the cli-overcloud-node-network-config.yaml Ansible playbook. If you have not defined the network definitions in the node definition file by using the network_config property, then the default network definitions are used. Replace <deployment_file> with the name of the heat environment file to generate for inclusion in the deployment command, for example /home/stack/templates/overcloud-baremetal-deployed.yaml . Monitor the provisioning progress in a separate terminal. When provisioning is successful, the node state changes from available to active : If you ran the provisioning command without the --network-config option, then configure the <Role>NetworkConfigTemplate parameters in your network-environment.yaml file to point to your NIC template files: Replace <role_topology_file> with the name of the file that contains the network topology of the cellcontrollercell1 role, for example, cell1_controller_net_top.j2 . Set to controller.j2 to use the default network topology. 3.6. Configuring and deploying each cell stack with routed networks Perform the following procedure to configure one cell stack, cell1 . Repeat the procedure for each additional cell stack you want to deploy until all your cell stacks are deployed. Procedure Create a new environment file for the additional cell in the cell directory for cell-specific parameters, for example, /home/stack/cell1/cell1.yaml . Add the following parameters to the environment file: To run the Compute metadata API in each cell instead of in the global Controller, add the following parameter to your cell environment file: Add the virtual IP address (VIP) information for the cell to your cell environment file: This creates virtual IP addresses on the subnet associated with the L2 network segment that the cell Controller nodes are connected to. Add the environment files to the stack with your other environment files and deploy the cell stack: 3.7. Adding a new cell subnet after deployment To add a new cell subnet to your overcloud stack after you have deployed your multi-cell environment, you must update the value of NetworkDeploymentActions to include 'UPDATE' . Procedure Add the following configuration to an environment file for the overcloud stack to update the network configuration with the new cell subnet: Add the configuration for the new cell subnet to /common/network_data_routed_multi_cell.yaml . Deploy the overcloud stack: Optional: Reset NetworkDeploymentActions to the default for the deployment: 3.8. steps Creating and managing the cell within the Compute service | [
"[stack@director ~]USD source ~/stackrc",
"(undercloud)USD mkdir common",
"(undercloud)USD cp /usr/share/openstack-tripleo-heat-templates/network_data_subnets_routed.yaml ~/common/network_data_routed_multi_cell.yaml",
"- type: vlan vlan_id: get_param: InternalApiNetworkVlanID addresses: - ip_netmask: get_param: InternalApiIpSubnet routes: get_param: InternalApiInterfaceRoutes",
"(undercloud)USD openstack overcloud deploy --templates --stack overcloud -e [your environment files] -n /home/stack/common/network_data_routed_multi_cell.yaml -e /home/stack/templates/overcloud-baremetal-deployed.yaml -e /home/stack/templates/overcloud-networks-deployed.yaml -e /home/stack/templates/overcloud-vip-deployed.yaml",
"[stack@director ~]USD source ~/stackrc",
"(undercloud)USD openstack overcloud cell export --control-plane-stack overcloud -f --output-file common/default_cell_export.yaml --working-dir /home/stack/overcloud-deploy/overcloud/",
"(undercloud)USD openstack overcloud roles generate --roles-path /usr/share/openstack-tripleo-heat-templates/roles -o cell1/cell1_roles_data.yaml Compute:ComputeCell1 CellController:CellControllerCell1",
"- name: ComputeCell1 HostnameFormatDefault: '%stackname%-compute-cell1-%index%' ServicesDefault: networks: - name: CellControllerCell1 HostnameFormatDefault: '%stackname%-cellcontrol-cell1-%index%' ServicesDefault: networks:",
"- name: ComputeCell1 HostnameFormatDefault: '%stackname%-compute-cell1-%index%' ServicesDefault: - OS::TripleO::Services::NeutronDhcpAgent - OS::TripleO::Services::NeutronMetadataAgent networks: - name: CellControllerCell1 HostnameFormatDefault: '%stackname%-cellcontrol-cell1-%index%' ServicesDefault: - OS::TripleO::Services::NeutronDhcpAgent - OS::TripleO::Services::NeutronMetadataAgent networks:",
"- name: ComputeCell1 networks: InternalApi: subnet: internal_api_subnet_cell1 Tenant: subnet: tenant_subnet_cell1 Storage: subnet: storage_subnet_cell1 - name: CellControllerCell1 networks: External: subnet: external_subnet InternalApi: subnet: internal_api_subnet_cell1 Storage: subnet: storage_subnet_cell1 StorageMgmt: subnet: storage_mgmt_subnet_cell1 Tenant: subnet: tenant_subnet_cell1",
"(undercloud)USD openstack overcloud node introspect --all-manageable --provide",
"(undercloud)USD openstack baremetal node list",
"(undercloud)USD openstack baremetal node set --resource-class baremetal.CELL-CONTROLLER <node>",
"- name: cellcontrollercell1 count: 1 defaults: resource_class: baremetal.CELL1-CONTROLLER network_config: template: /home/stack/templates/nic-config/<role_topology_file> instances: - hostname: cell1-cellcontroller-%index% name: cell1controller",
"(undercloud)USD openstack overcloud node provision [--stack <stack>] [--network-config \\] --output <deployment_file> /home/stack/templates/overcloud-baremetal-deploy.yaml",
"(undercloud)USD watch openstack baremetal node list",
"parameter_defaults: ComputeNetworkConfigTemplate: /home/stack/templates/nic-configs/compute.j2 CellControllerCell1NetworkConfigTemplate: /home/stack/templates/nic-configs/<role_topology_file> ControllerNetworkConfigTemplate: /home/stack/templates/nic-configs/controller.j2",
"resource_registry: OS::TripleO::CellControllerCell1::Net::SoftwareConfig: /home/stack/templates/nic-configs/cellcontroller.yaml OS::TripleO::ComputeCell1::Net::SoftwareConfig: /home/stack/templates/nic-configs/compute.yaml parameter_defaults: # Specify that this is an additional cell NovaAdditionalCell: True # Enable local metadata API for each cell NovaLocalMetadataPerCell: True #Disable network creation in order to use the `network_data.yaml` file from the overcloud stack, # and create ports for the nodes in the separate stacks on the existing networks. ManageNetworks: false # Specify that this is an additional cell NovaAdditionalCell: True # The DNS names for the VIPs for the cell CloudDomain: redhat.local CloudName: cell1.redhat.local CloudNameInternal: cell1.internalapi.redhat.local CloudNameStorage: cell1.storage.redhat.local CloudNameStorageManagement: cell1.storagemgmt.redhat.local CloudNameCtlplane: cell1.ctlplane.redhat.local",
"parameter_defaults: NovaLocalMetadataPerCell: True",
"parameter_defaults: VipSubnetMap: InternalApi: internal_api_cell1 Storage: storage_cell1 StorageMgmt: storage_mgmt_cell1 External: external_subnet",
"(undercloud)USD openstack overcloud deploy --templates --stack cell1 -e [your environment files] -e /home/stack/templates/overcloud-baremetal-deployed.yaml -e /home/stack/templates/overcloud-networks-deployed.yaml -e /home/stack/templates/overcloud-vip-deployed.yaml -r /home/stack/cell1/cell1_roles_data.yaml -n /home/stack/common/network_data_spine_leaf.yaml -e /home/stack/common/default_cell_export.yaml -e /home/stack/cell1/cell1.yaml",
"parameter_defaults: NetworkDeploymentActions: ['CREATE','UPDATE']",
"(undercloud)USD openstack overcloud deploy --templates --stack overcloud -n /home/stack/common/network_data_routed_multi_cell.yaml -e [your environment files]",
"parameter_defaults: NetworkDeploymentActions: ['CREATE']"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/scaling_deployments_with_compute_cells/assembly_configuring-and-deploying-a-multi-cell-environment-with-routed-networks_cellsv2 |
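For reference, the following is a minimal sketch of the subnet addition described in section 3.7: a new leaf subnet added under an existing network in /common/network_data_routed_multi_cell.yaml. Only the new subnet is shown; the network's existing subnets are omitted. The subnet name, VLAN ID, and address ranges are illustrative assumptions for a hypothetical cell2, so adapt them to your addressing plan and verify the attribute names against the network_data_subnets_routed.yaml template that you copied earlier.

- name: InternalApi
  name_lower: internal_api
  subnets:
    internal_api_subnet_cell2:
      vlan: 22
      ip_subnet: 172.17.2.0/24
      allocation_pools:
        - start: 172.17.2.10
          end: 172.17.2.250
      gateway_ip: 172.17.2.254

After adding an equivalent subnet entry for each network that the new cell requires, redeploy the overcloud stack with NetworkDeploymentActions set to include 'UPDATE', as shown in the command listing above.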
Chapter 9. Using the config tool to reconfigure Red Hat Quay on OpenShift Container Platform | Chapter 9. Using the config tool to reconfigure Red Hat Quay on OpenShift Container Platform As of Red Hat Quay 3.10, the configuration tool has been removed from OpenShift Container Platform deployments, meaning that users cannot configure or reconfigure Red Hat Quay directly from the OpenShift Container Platform console. Additionally, the quay-config-editor pod no longer deploys, users cannot check the status of the config editor route, and the Config Editor Endpoint is no longer generated on the Red Hat Quay Operator Details page. As a workaround, you can deploy the configuration tool locally and create your own configuration bundle. This includes entering the database and storage credentials used for your Red Hat Quay on OpenShift Container Platform deployment, generating a config.yaml file, and using it to deploy Red Hat Quay on OpenShift Container Platform via the command-line interface. To deploy the configuration tool locally, see Getting started with Red Hat Quay and follow the instructions up to "Configuring Red Hat Quay"; a minimal example invocation is sketched after this entry. Advanced configuration settings, such as using custom SSL certificates, can be found on the same page. Next steps Red Hat Quay features | null | https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/deploying_the_red_hat_quay_operator_on_openshift_container_platform/operator-config-ui
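For reference, the following is a minimal sketch of running the configuration tool locally with Podman, as described in the workaround above. The image tag (v3.10.3) and the trailing secret argument, which sets the config tool password, are illustrative assumptions rather than values from this chapter; confirm the exact image and syntax in the Getting started with Red Hat Quay guide for your release.

sudo podman run --rm -it --name quay_config -p 80:8080 -p 443:8443 registry.redhat.io/quay/quay-rhel8:v3.10.3 config secret

The tool serves a browser-based editor on the published ports, where you enter the database and storage credentials for your deployment and download the generated config.yaml bundle for use with the command-line deployment.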
Chapter 9. MTV performance recommendations | Chapter 9. MTV performance recommendations The purpose of this section is to share recommendations for efficient and effective migration of virtual machines (VMs) using Migration Toolkit for Virtualization (MTV), based on findings observed through testing. The data provided here was collected from testing in Red Hat Labs and is provided for reference only. Overall, these numbers should be considered to show the best-case scenarios. The observed performance of migration can differ from these results and depends on several factors. 9.1. Ensure fast storage and network speeds Ensure fast storage and network speeds, both for VMware and Red Hat OpenShift (OCP) environments. To perform fast migrations, VMware must have fast read access to datastores. Networking between VMware ESXi hosts should be fast; ensure a 10 GbE network connection and avoid network bottlenecks. Extend the VMware network to the OCP worker interface network environment. Ensure that the VMware network offers high throughput (10 Gigabit Ethernet) and rapid networking so that the reception rate aligns with the read rate of the ESXi datastore. Be aware that the migration process consumes significant bandwidth on the migration network. If other services use that network, migrations can affect those services, and those services can reduce migration rates. For example, 200 to 325 MiB/s was the average network transfer rate from the vmnic for each ESXi host associated with transferring data to the OCP interface. 9.2. Ensure fast datastore read speeds for efficient and performant migrations. Datastore read rates impact the total transfer times, so it is essential to ensure fast reads are possible from the ESXi datastore to the ESXi host. Example in numbers: 200 to 300 MiB/s was the average read rate for both vSphere and ESXi endpoints for a single ESXi server. When multiple ESXi servers are used, higher datastore read rates are possible. 9.3. Endpoint types MTV 2.6 allows for the following vSphere provider options: an ESXi endpoint (inventory and disk transfers from ESXi), introduced in MTV 2.6; a vCenter Server endpoint with no networks for the ESXi host (inventory and disk transfers from vCenter); and a vCenter endpoint with ESXi networks available (inventory from vCenter, disk transfers from ESXi). When transferring many VMs that are registered to multiple ESXi hosts, using the vCenter endpoint and ESXi network is suggested. Note As of vSphere 7.0, ESXi hosts can label which network to use for NBD transport. This is accomplished by tagging the desired virtual network interface card (NIC) with the appropriate vSphereBackupNFC label. When this is done, MTV will be able to utilize the ESXi interface for network transfer to OpenShift as long as the worker and ESXi host interfaces are reachable. This is especially useful when migration users may not have access to the ESXi credentials yet would like to be able to control which ESXi interface is used for migration. For more details, see: (MTV-1230) You can use the following ESXi command, which designates interface vmk2 for NBD backup: esxcli network ip interface tag add -t vSphereBackupNFC -i vmk2 9.4. Set the ESXi host BIOS profile and ESXi Host Power Management for High Performance Where possible, ensure that hosts used to perform migrations are set with BIOS profiles related to maximum performance. For hosts that use Host Power Management controlled within vSphere, check that High Performance is set.
Testing showed that when transferring more than 10 VMs with both BIOS and host power management set accordingly, migrations showed an increase of 15 MiB/s in the average datastore read rate. 9.5. Avoid additional network load on VMware networks You can reduce the network load on VMware networks by selecting the migration network when using the ESXi endpoint. When you add a virtualization provider, MTV enables you to select a specific network that is accessible on the ESXi hosts for migrating virtual machines to OCP. Selecting this migration network for the ESXi host in the MTV UI ensures that the transfer is performed using the selected network as the ESXi endpoint. Ensure that the selected network has connectivity to the OCP interface, has adequate bandwidth for migrations, and that the network interface is not saturated. In environments with fast networks, such as 10 GbE networks, migration network impacts can be expected to match the rate of ESXi datastore reads. 9.6. Control maximum concurrent disk migrations per ESXi host. Set the MAX_VM_INFLIGHT MTV variable to control the maximum number of concurrent VM transfers allowed for each ESXi host. MTV allows for concurrency to be controlled using this variable; by default, it is set to 20. When setting MAX_VM_INFLIGHT, consider how many concurrent VM transfers are required for each ESXi host. It is also important to consider the type of migration to be performed concurrently. Warm migrations are migrations of a running VM that are carried out over a scheduled period. Warm migrations use snapshots to compare and migrate only the differences between snapshots of the disk. The migration of the differences between snapshots happens over specific intervals before a final cut-over of the running VM to OpenShift occurs. In MTV 2.6, MAX_VM_INFLIGHT reserves one transfer slot per VM, regardless of current migration activity for a specific snapshot or the number of disks that belong to a single VM. The total set by MAX_VM_INFLIGHT indicates how many concurrent VM transfers are allowed per ESXi host; a hedged sketch of adjusting this setting follows the command listing at the end of this chapter. Examples MAX_VM_INFLIGHT = 20 and 2 ESXi hosts defined in the provider mean that each host can transfer 20 VMs. 9.7. Migrations are completed faster when migrating multiple VMs concurrently When multiple VMs from a specific ESXi host are to be migrated, starting concurrent migrations for multiple VMs leads to faster migration times. Testing demonstrated that migrating 10 VMs (each containing 35 GiB of data, with a total size of 50 GiB) from a single host is significantly faster than migrating the same number of VMs sequentially, one after another. It is possible to increase concurrent migration to more than 10 virtual machines from a single host, but it does not show a significant improvement. Examples 1 single-disk VM took 6 minutes, with a migration rate of 100 MiB/s. 10 single-disk VMs took 22 minutes, with a migration rate of 272 MiB/s. 20 single-disk VMs took 42 minutes, with a migration rate of 284 MiB/s. Note From the aforementioned examples, it is evident that the migration of 10 virtual machines simultaneously is three times faster than the migration of identical virtual machines in a sequential manner. The migration rate was almost the same when moving 10 or 20 virtual machines simultaneously. 9.8. Migrations complete faster using multiple hosts. Using multiple hosts with registered VMs equally distributed among the ESXi hosts used for migrations leads to faster migration times.
Testing showed that when transferring more than 10 single-disk VMs, each containing 35 GiB of data out of a 50 GiB total, using an additional host can reduce migration time. Examples 80 single-disk VMs, containing 35 GiB of data each, using a single host took 2 hours and 43 minutes, with a migration rate of 294 MiB/s. 80 single-disk VMs, containing 35 GiB of data each, using 8 ESXi hosts took 41 minutes, with a migration rate of 1,173 MiB/s. Note From the aforementioned examples, it is evident that migrating 80 VMs from 8 ESXi hosts, 10 from each host, concurrently is four times faster than running the same VMs from a single ESXi host. Migrating a larger number of VMs from more than 8 ESXi hosts concurrently could potentially show increased performance. However, this was not tested and is therefore not recommended. 9.9. Multiple migration plans compared to a single large migration plan The maximum number of disks that can be referenced by a single migration plan is 500. For more details, see (MTV-1203). When attempting to migrate many VMs in a single migration plan, it can take some time for all migrations to start. By breaking up one migration plan into several migration plans, it is possible to start them at the same time. Comparing migrations of: 500 VMs using 8 ESXi hosts in 1 plan, max_vm_inflight=100, took 5 hours and 10 minutes. 800 VMs using 8 ESXi hosts with 8 plans, max_vm_inflight=100, took 57 minutes. Testing showed that by breaking one single large plan into multiple moderately sized plans, for example, 100 VMs per plan, the total migration time can be reduced. 9.10. Maximum values tested Maximum number of ESXi hosts tested: 8 Maximum number of VMs in a single migration plan: 500 Maximum number of VMs migrated in a single test: 5000 Maximum number of migration plans performed concurrently: 40 Maximum single disk size migrated: 6 TB disks, which contained 3 TB of data Maximum number of disks on a single VM migrated: 50 Highest observed single datastore read rate from a single ESXi server: 312 MiB/second Highest observed multi-datastore read rate using eight ESXi servers and two datastores: 1,242 MiB/second Highest observed virtual NIC transfer rate to an OpenShift worker: 327 MiB/second Maximum migration transfer rate of a single disk: 162 MiB/second (rate observed when transferring a nonconcurrent migration of 1.5 TB of utilized data) Maximum cold migration transfer rate of multiple VMs (single disk) from a single ESXi host: 294 MiB/s (concurrent migration of 30 VMs, 35/50 GiB used, from a single ESXi host) Maximum cold migration transfer rate of multiple VMs (single disk) from multiple ESXi hosts: 1,173 MiB/s (concurrent migration of 80 VMs, 35/50 GiB used, from 8 ESXi servers, 10 VMs from each ESXi host) For additional details on performance, see MTV performance addendum | [
"esxcli network ip interface tag add -t vSphereBackupNFC -i vmk2"
] | https://docs.redhat.com/en/documentation/migration_toolkit_for_virtualization/2.6/html/installing_and_using_the_migration_toolkit_for_virtualization/mtv-performance-recommendation_mtv |
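For reference, and as noted in section 9.6, the following is a hedged sketch of one common way to adjust MAX_VM_INFLIGHT: patching the ForkliftController custom resource that the MTV Operator manages. The resource name forklift-controller, the openshift-mtv namespace, and the controller_max_vm_inflight field name are assumptions based on a typical MTV installation rather than values stated in this chapter; confirm them against your deployment before applying the change.

oc patch forkliftcontroller/forklift-controller -n openshift-mtv --type merge -p '{"spec": {"controller_max_vm_inflight": 40}}'

After the patch, the Operator reconciles the controller deployment, and new migration plans use the updated concurrency limit.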
Chapter 15. IngressController [operator.openshift.io/v1] | Chapter 15. IngressController [operator.openshift.io/v1] Description IngressController describes a managed ingress controller for the cluster. The controller can service OpenShift Route and Kubernetes Ingress resources. When an IngressController is created, a new ingress controller deployment is created to allow external traffic to reach the services that expose Ingress or Route resources. Updating this resource may lead to disruption for public facing network connections as a new ingress controller revision may be rolled out. https://kubernetes.io/docs/concepts/services-networking/ingress-controllers Whenever possible, sensible defaults for the platform are used. See each field for more details. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 15.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec is the specification of the desired behavior of the IngressController. status object status is the most recently observed status of the IngressController. 15.1.1. .spec Description spec is the specification of the desired behavior of the IngressController. Type object Property Type Description clientTLS object clientTLS specifies settings for requesting and verifying client certificates, which can be used to enable mutual TLS for edge-terminated and reencrypt routes. defaultCertificate object defaultCertificate is a reference to a secret containing the default certificate served by the ingress controller. When Routes don't specify their own certificate, defaultCertificate is used. The secret must contain the following keys and data: tls.crt: certificate file contents tls.key: key file contents If unset, a wildcard certificate is automatically generated and used. The certificate is valid for the ingress controller domain (and subdomains) and the generated certificate's CA will be automatically integrated with the cluster's trust store. If a wildcard certificate is used and shared by multiple HTTP/2 enabled routes (which implies ALPN) then clients (i.e., notably browsers) are at liberty to reuse open connections. This means a client can reuse a connection to another route and that is likely to fail. This behaviour is generally known as connection coalescing. The in-use certificate (whether generated or user-specified) will be automatically integrated with OpenShift's built-in OAuth server. domain string domain is a DNS name serviced by the ingress controller and is used to configure multiple features: * For the LoadBalancerService endpoint publishing strategy, domain is used to configure DNS records. See endpointPublishingStrategy. 
* When using a generated default certificate, the certificate will be valid for domain and its subdomains. See defaultCertificate. * The value is published to individual Route statuses so that end-users know where to target external DNS records. domain must be unique among all IngressControllers, and cannot be updated. If empty, defaults to ingress.config.openshift.io/cluster .spec.domain. endpointPublishingStrategy object endpointPublishingStrategy is used to publish the ingress controller endpoints to other networks, enable load balancer integrations, etc. If unset, the default is based on infrastructure.config.openshift.io/cluster .status.platform: AWS: LoadBalancerService (with External scope) Azure: LoadBalancerService (with External scope) GCP: LoadBalancerService (with External scope) IBMCloud: LoadBalancerService (with External scope) AlibabaCloud: LoadBalancerService (with External scope) Libvirt: HostNetwork Any other platform types (including None) default to HostNetwork. endpointPublishingStrategy cannot be updated. httpCompression object httpCompression defines a policy for HTTP traffic compression. By default, there is no HTTP compression. httpEmptyRequestsPolicy string httpEmptyRequestsPolicy describes how HTTP connections should be handled if the connection times out before a request is received. Allowed values for this field are "Respond" and "Ignore". If the field is set to "Respond", the ingress controller sends an HTTP 400 or 408 response, logs the connection (if access logging is enabled), and counts the connection in the appropriate metrics. If the field is set to "Ignore", the ingress controller closes the connection without sending a response, logging the connection, or incrementing metrics. The default value is "Respond". Typically, these connections come from load balancers' health probes or Web browsers' speculative connections ("preconnect") and can be safely ignored. However, these requests may also be caused by network errors, and so setting this field to "Ignore" may impede detection and diagnosis of problems. In addition, these requests may be caused by port scans, in which case logging empty requests may aid in detecting intrusion attempts. httpErrorCodePages object httpErrorCodePages specifies a configmap with custom error pages. The administrator must create this configmap in the openshift-config namespace. This configmap should have keys in the format "error-page-<error code>.http", where <error code> is an HTTP error code. For example, "error-page-503.http" defines an error page for HTTP 503 responses. Currently only error pages for 503 and 404 responses can be customized. Each value in the configmap should be the full response, including HTTP headers. Eg- https://raw.githubusercontent.com/openshift/router/fadab45747a9b30cc3f0a4b41ad2871f95827a93/images/router/haproxy/conf/error-page-503.http If this field is empty, the ingress controller uses the default error pages. httpHeaders object httpHeaders defines policy for HTTP headers. If this field is empty, the default values are used. logging object logging defines parameters for what should be logged where. If this field is empty, operational logs are enabled but access logs are disabled. namespaceSelector object namespaceSelector is used to filter the set of namespaces serviced by the ingress controller. This is useful for implementing shards. If unset, the default is no filtering. nodePlacement object nodePlacement enables explicit control over the scheduling of the ingress controller. 
If unset, defaults are used. See NodePlacement for more details. replicas integer replicas is the desired number of ingress controller replicas. If unset, the default depends on the value of the defaultPlacement field in the cluster config.openshift.io/v1/ingresses status. The value of replicas is set based on the value of a chosen field in the Infrastructure CR. If defaultPlacement is set to ControlPlane, the chosen field will be controlPlaneTopology. If it is set to Workers the chosen field will be infrastructureTopology. Replicas will then be set to 1 or 2 based whether the chosen field's value is SingleReplica or HighlyAvailable, respectively. These defaults are subject to change. routeAdmission object routeAdmission defines a policy for handling new route claims (for example, to allow or deny claims across namespaces). If empty, defaults will be applied. See specific routeAdmission fields for details about their defaults. routeSelector object routeSelector is used to filter the set of Routes serviced by the ingress controller. This is useful for implementing shards. If unset, the default is no filtering. tlsSecurityProfile object tlsSecurityProfile specifies settings for TLS connections for ingresscontrollers. If unset, the default is based on the apiservers.config.openshift.io/cluster resource. Note that when using the Old, Intermediate, and Modern profile types, the effective profile configuration is subject to change between releases. For example, given a specification to use the Intermediate profile deployed on release X.Y.Z, an upgrade to release X.Y.Z+1 may cause a new profile configuration to be applied to the ingress controller, resulting in a rollout. tuningOptions object tuningOptions defines parameters for adjusting the performance of ingress controller pods. All fields are optional and will use their respective defaults if not set. See specific tuningOptions fields for more details. Setting fields within tuningOptions is generally not recommended. The default values are suitable for most configurations. unsupportedConfigOverrides `` unsupportedConfigOverrides allows specifying unsupported configuration options. Its use is unsupported. 15.1.2. .spec.clientTLS Description clientTLS specifies settings for requesting and verifying client certificates, which can be used to enable mutual TLS for edge-terminated and reencrypt routes. Type object Required clientCA clientCertificatePolicy Property Type Description allowedSubjectPatterns array (string) allowedSubjectPatterns specifies a list of regular expressions that should be matched against the distinguished name on a valid client certificate to filter requests. The regular expressions must use PCRE syntax. If this list is empty, no filtering is performed. If the list is nonempty, then at least one pattern must match a client certificate's distinguished name or else the ingress controller rejects the certificate and denies the connection. clientCA object clientCA specifies a configmap containing the PEM-encoded CA certificate bundle that should be used to verify a client's certificate. The administrator must create this configmap in the openshift-config namespace. clientCertificatePolicy string clientCertificatePolicy specifies whether the ingress controller requires clients to provide certificates. This field accepts the values "Required" or "Optional". 
Note that the ingress controller only checks client certificates for edge-terminated and reencrypt TLS routes; it cannot check certificates for cleartext HTTP or passthrough TLS routes. 15.1.3. .spec.clientTLS.clientCA Description clientCA specifies a configmap containing the PEM-encoded CA certificate bundle that should be used to verify a client's certificate. The administrator must create this configmap in the openshift-config namespace. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 15.1.4. .spec.defaultCertificate Description defaultCertificate is a reference to a secret containing the default certificate served by the ingress controller. When Routes don't specify their own certificate, defaultCertificate is used. The secret must contain the following keys and data: tls.crt: certificate file contents tls.key: key file contents If unset, a wildcard certificate is automatically generated and used. The certificate is valid for the ingress controller domain (and subdomains) and the generated certificate's CA will be automatically integrated with the cluster's trust store. If a wildcard certificate is used and shared by multiple HTTP/2 enabled routes (which implies ALPN) then clients (i.e., notably browsers) are at liberty to reuse open connections. This means a client can reuse a connection to another route and that is likely to fail. This behaviour is generally known as connection coalescing. The in-use certificate (whether generated or user-specified) will be automatically integrated with OpenShift's built-in OAuth server. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 15.1.5. .spec.endpointPublishingStrategy Description endpointPublishingStrategy is used to publish the ingress controller endpoints to other networks, enable load balancer integrations, etc. If unset, the default is based on infrastructure.config.openshift.io/cluster .status.platform: AWS: LoadBalancerService (with External scope) Azure: LoadBalancerService (with External scope) GCP: LoadBalancerService (with External scope) IBMCloud: LoadBalancerService (with External scope) AlibabaCloud: LoadBalancerService (with External scope) Libvirt: HostNetwork Any other platform types (including None) default to HostNetwork. endpointPublishingStrategy cannot be updated. Type object Required type Property Type Description hostNetwork object hostNetwork holds parameters for the HostNetwork endpoint publishing strategy. Present only if type is HostNetwork. loadBalancer object loadBalancer holds parameters for the load balancer. Present only if type is LoadBalancerService. nodePort object nodePort holds parameters for the NodePortService endpoint publishing strategy. Present only if type is NodePortService. private object private holds parameters for the Private endpoint publishing strategy. Present only if type is Private. type string type is the publishing strategy to use. Valid values are: * LoadBalancerService Publishes the ingress controller using a Kubernetes LoadBalancer Service. In this configuration, the ingress controller deployment uses container networking. A LoadBalancer Service is created to publish the deployment. 
See: https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer If domain is set, a wildcard DNS record will be managed to point at the LoadBalancer Service's external name. DNS records are managed only in DNS zones defined by dns.config.openshift.io/cluster .spec.publicZone and .spec.privateZone. Wildcard DNS management is currently supported only on the AWS, Azure, and GCP platforms. * HostNetwork Publishes the ingress controller on node ports where the ingress controller is deployed. In this configuration, the ingress controller deployment uses host networking, bound to node ports 80 and 443. The user is responsible for configuring an external load balancer to publish the ingress controller via the node ports. * Private Does not publish the ingress controller. In this configuration, the ingress controller deployment uses container networking, and is not explicitly published. The user must manually publish the ingress controller. * NodePortService Publishes the ingress controller using a Kubernetes NodePort Service. In this configuration, the ingress controller deployment uses container networking. A NodePort Service is created to publish the deployment. The specific node ports are dynamically allocated by OpenShift; however, to support static port allocations, user changes to the node port field of the managed NodePort Service will be preserved. 15.1.6. .spec.endpointPublishingStrategy.hostNetwork Description hostNetwork holds parameters for the HostNetwork endpoint publishing strategy. Present only if type is HostNetwork. Type object Property Type Description httpPort integer httpPort is the port on the host which should be used to listen for HTTP requests. This field should be set when port 80 is already in use. The value should not coincide with the NodePort range of the cluster. When the value is 0 or is not specified it defaults to 80. httpsPort integer httpsPort is the port on the host which should be used to listen for HTTPS requests. This field should be set when port 443 is already in use. The value should not coincide with the NodePort range of the cluster. When the value is 0 or is not specified it defaults to 443. protocol string protocol specifies whether the IngressController expects incoming connections to use plain TCP or whether the IngressController expects PROXY protocol. PROXY protocol can be used with load balancers that support it to communicate the source addresses of client connections when forwarding those connections to the IngressController. Using PROXY protocol enables the IngressController to report those source addresses instead of reporting the load balancer's address in HTTP headers and logs. Note that enabling PROXY protocol on the IngressController will cause connections to fail if you are not using a load balancer that uses PROXY protocol to forward connections to the IngressController. See http://www.haproxy.org/download/2.2/doc/proxy-protocol.txt for information about PROXY protocol. The following values are valid for this field: * The empty string. * "TCP". * "PROXY". The empty string specifies the default, which is TCP without PROXY protocol. Note that the default is subject to change. statsPort integer statsPort is the port on the host where the stats from the router are published. The value should not coincide with the NodePort range of the cluster. If an external load balancer is configured to forward connections to this IngressController, the load balancer should use this port for health checks.
The load balancer can send HTTP probes on this port on a given node, with the path /healthz/ready to determine if the ingress controller is ready to receive traffic on the node. For proper operation the load balancer must not forward traffic to a node until the health check reports ready. The load balancer should also stop forwarding requests within a maximum of 45 seconds after /healthz/ready starts reporting not-ready. Probing every 5 to 10 seconds, with a 5-second timeout and with a threshold of two successful or failed requests to become healthy or unhealthy respectively, are well-tested values. When the value is 0 or is not specified it defaults to 1936. 15.1.7. .spec.endpointPublishingStrategy.loadBalancer Description loadBalancer holds parameters for the load balancer. Present only if type is LoadBalancerService. Type object Required dnsManagementPolicy scope Property Type Description allowedSourceRanges `` allowedSourceRanges specifies an allowlist of IP address ranges to which access to the load balancer should be restricted. Each range must be specified using CIDR notation (e.g. "10.0.0.0/8" or "fd00::/8"). If no range is specified, "0.0.0.0/0" for IPv4 and "::/0" for IPv6 are used by default, which allows all source addresses. To facilitate migration from earlier versions of OpenShift that did not have the allowedSourceRanges field, you may set the service.beta.kubernetes.io/load-balancer-source-ranges annotation on the "router-<ingresscontroller name>" service in the "openshift-ingress" namespace, and this annotation will take effect if allowedSourceRanges is empty on OpenShift 4.12. dnsManagementPolicy string dnsManagementPolicy indicates if the lifecycle of the wildcard DNS record associated with the load balancer service will be managed by the ingress operator. It defaults to Managed. Valid values are: Managed and Unmanaged. providerParameters object providerParameters holds desired load balancer information specific to the underlying infrastructure provider. If empty, defaults will be applied. See specific providerParameters fields for details about their defaults. scope string scope indicates the scope at which the load balancer is exposed. Possible values are "External" and "Internal". 15.1.8. .spec.endpointPublishingStrategy.loadBalancer.providerParameters Description providerParameters holds desired load balancer information specific to the underlying infrastructure provider. If empty, defaults will be applied. See specific providerParameters fields for details about their defaults. Type object Required type Property Type Description aws object aws provides configuration settings that are specific to AWS load balancers. If empty, defaults will be applied. See specific aws fields for details about their defaults. gcp object gcp provides configuration settings that are specific to GCP load balancers. If empty, defaults will be applied. See specific gcp fields for details about their defaults. ibm object ibm provides configuration settings that are specific to IBM Cloud load balancers. If empty, defaults will be applied. See specific ibm fields for details about their defaults. type string type is the underlying infrastructure provider for the load balancer. Allowed values are "AWS", "Azure", "BareMetal", "GCP", "IBM", "Nutanix", "OpenStack", and "VSphere". 15.1.9. .spec.endpointPublishingStrategy.loadBalancer.providerParameters.aws Description aws provides configuration settings that are specific to AWS load balancers. If empty, defaults will be applied. 
See specific aws fields for details about their defaults. Type object Required type Property Type Description classicLoadBalancer object classicLoadBalancerParameters holds configuration parameters for an AWS classic load balancer. Present only if type is Classic. networkLoadBalancer object networkLoadBalancerParameters holds configuration parameters for an AWS network load balancer. Present only if type is NLB. type string type is the type of AWS load balancer to instantiate for an ingresscontroller. Valid values are: * "Classic": A Classic Load Balancer that makes routing decisions at either the transport layer (TCP/SSL) or the application layer (HTTP/HTTPS). See the following for additional details: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/load-balancer-types.html#clb * "NLB": A Network Load Balancer that makes routing decisions at the transport layer (TCP/SSL). See the following for additional details: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/load-balancer-types.html#nlb 15.1.10. .spec.endpointPublishingStrategy.loadBalancer.providerParameters.aws.classicLoadBalancer Description classicLoadBalancerParameters holds configuration parameters for an AWS classic load balancer. Present only if type is Classic. Type object Property Type Description connectionIdleTimeout string connectionIdleTimeout specifies the maximum time period that a connection may be idle before the load balancer closes the connection. The value must be parseable as a time duration value; see https://pkg.go.dev/time#ParseDuration . A nil or zero value means no opinion, in which case a default value is used. The default value for this field is 60s. This default is subject to change. 15.1.11. .spec.endpointPublishingStrategy.loadBalancer.providerParameters.aws.networkLoadBalancer Description networkLoadBalancerParameters holds configuration parameters for an AWS network load balancer. Present only if type is NLB. Type object 15.1.12. .spec.endpointPublishingStrategy.loadBalancer.providerParameters.gcp Description gcp provides configuration settings that are specific to GCP load balancers. If empty, defaults will be applied. See specific gcp fields for details about their defaults. Type object Property Type Description clientAccess string clientAccess describes how client access is restricted for internal load balancers. Valid values are: * "Global": Specifying an internal load balancer with Global client access allows clients from any region within the VPC to communicate with the load balancer. https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing#global_access * "Local": Specifying an internal load balancer with Local client access means only clients within the same region (and VPC) as the GCP load balancer can communicate with the load balancer. Note that this is the default behavior. https://cloud.google.com/load-balancing/docs/internal#client_access 15.1.13. .spec.endpointPublishingStrategy.loadBalancer.providerParameters.ibm Description ibm provides configuration settings that are specific to IBM Cloud load balancers. If empty, defaults will be applied. See specific ibm fields for details about their defaults. Type object Property Type Description protocol string protocol specifies whether the load balancer uses PROXY protocol to forward connections to the IngressController. 
See "service.kubernetes.io/ibm-load-balancer-cloud-provider-enable-features: "proxy-protocol"" at https://cloud.ibm.com/docs/containers?topic=containers-vpc-lbaas PROXY protocol can be used with load balancers that support it to communicate the source addresses of client connections when forwarding those connections to the IngressController. Using PROXY protocol enables the IngressController to report those source addresses instead of reporting the load balancer's address in HTTP headers and logs. Note that enabling PROXY protocol on the IngressController will cause connections to fail if you are not using a load balancer that uses PROXY protocol to forward connections to the IngressController. See http://www.haproxy.org/download/2.2/doc/proxy-protocol.txt for information about PROXY protocol. Valid values for protocol are TCP, PROXY and omitted. When omitted, this means no opinion and the platform is left to choose a reasonable default, which is subject to change over time. The current default is TCP, without the proxy protocol enabled. 15.1.14. .spec.endpointPublishingStrategy.nodePort Description nodePort holds parameters for the NodePortService endpoint publishing strategy. Present only if type is NodePortService. Type object Property Type Description protocol string protocol specifies whether the IngressController expects incoming connections to use plain TCP or whether the IngressController expects PROXY protocol. PROXY protocol can be used with load balancers that support it to communicate the source addresses of client connections when forwarding those connections to the IngressController. Using PROXY protocol enables the IngressController to report those source addresses instead of reporting the load balancer's address in HTTP headers and logs. Note that enabling PROXY protocol on the IngressController will cause connections to fail if you are not using a load balancer that uses PROXY protocol to forward connections to the IngressController. See http://www.haproxy.org/download/2.2/doc/proxy-protocol.txt for information about PROXY protocol. The following values are valid for this field: * The empty string. * "TCP". * "PROXY". The empty string specifies the default, which is TCP without PROXY protocol. Note that the default is subject to change. 15.1.15. .spec.endpointPublishingStrategy.private Description private holds parameters for the Private endpoint publishing strategy. Present only if type is Private. Type object Property Type Description protocol string protocol specifies whether the IngressController expects incoming connections to use plain TCP or whether the IngressController expects PROXY protocol. PROXY protocol can be used with load balancers that support it to communicate the source addresses of client connections when forwarding those connections to the IngressController. Using PROXY protocol enables the IngressController to report those source addresses instead of reporting the load balancer's address in HTTP headers and logs. Note that enabling PROXY protocol on the IngressController will cause connections to fail if you are not using a load balancer that uses PROXY protocol to forward connections to the IngressController. See http://www.haproxy.org/download/2.2/doc/proxy-protocol.txt for information about PROXY protocol. The following values are valid for this field: * The empty string. * "TCP". * "PROXY". The empty string specifies the default, which is TCP without PROXY protocol. Note that the default is subject to change. 15.1.16. 
.spec.httpCompression Description httpCompression defines a policy for HTTP traffic compression. By default, there is no HTTP compression. Type object Property Type Description mimeTypes array (string) mimeTypes is a list of MIME types that should have compression applied. This list can be empty, in which case the ingress controller does not apply compression. Note: Not all MIME types benefit from compression, but HAProxy will still use resources to try to compress if instructed to. Generally speaking, text (html, css, js, etc.) formats benefit from compression, but formats that are already compressed (image, audio, video, etc.) benefit little in exchange for the time and cpu spent on compressing again. See https://joehonton.medium.com/the-gzip-penalty-d31bd697f1a2 15.1.17. .spec.httpErrorCodePages Description httpErrorCodePages specifies a configmap with custom error pages. The administrator must create this configmap in the openshift-config namespace. This configmap should have keys in the format "error-page-<error code>.http", where <error code> is an HTTP error code. For example, "error-page-503.http" defines an error page for HTTP 503 responses. Currently only error pages for 503 and 404 responses can be customized. Each value in the configmap should be the full response, including HTTP headers. Eg- https://raw.githubusercontent.com/openshift/router/fadab45747a9b30cc3f0a4b41ad2871f95827a93/images/router/haproxy/conf/error-page-503.http If this field is empty, the ingress controller uses the default error pages. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 15.1.18. .spec.httpHeaders Description httpHeaders defines policy for HTTP headers. If this field is empty, the default values are used. Type object Property Type Description actions object actions specifies options for modifying headers and their values. Note that this option only applies to cleartext HTTP connections and to secure HTTP connections for which the ingress controller terminates encryption (that is, edge-terminated or reencrypt connections). Headers cannot be modified for TLS passthrough connections. Setting the HSTS ( Strict-Transport-Security ) header is not supported via actions. Strict-Transport-Security may only be configured using the "haproxy.router.openshift.io/hsts_header" route annotation, and only in accordance with the policy specified in Ingress.Spec.RequiredHSTSPolicies. Any actions defined here are applied after any actions related to the following other fields: cache-control, spec.clientTLS, spec.httpHeaders.forwardedHeaderPolicy, spec.httpHeaders.uniqueId, and spec.httpHeaders.headerNameCaseAdjustments. In case of HTTP request headers, the actions specified in spec.httpHeaders.actions on the Route will be executed after the actions specified in the IngressController's spec.httpHeaders.actions field. In case of HTTP response headers, the actions specified in spec.httpHeaders.actions on the IngressController will be executed after the actions specified in the Route's spec.httpHeaders.actions field. Headers set using this API cannot be captured for use in access logs. The following header names are reserved and may not be modified via this API: Strict-Transport-Security, Proxy, Host, Cookie, Set-Cookie. Note that the total size of all net added headers after interpolating dynamic values must not exceed the value of spec.tuningOptions.headerBufferMaxRewriteBytes on the IngressController. 
Please refer to the documentation for that API field for more details. forwardedHeaderPolicy string forwardedHeaderPolicy specifies when and how the IngressController sets the Forwarded, X-Forwarded-For, X-Forwarded-Host, X-Forwarded-Port, X-Forwarded-Proto, and X-Forwarded-Proto-Version HTTP headers. The value may be one of the following: * "Append", which specifies that the IngressController appends the headers, preserving existing headers. * "Replace", which specifies that the IngressController sets the headers, replacing any existing Forwarded or X-Forwarded-* headers. * "IfNone", which specifies that the IngressController sets the headers if they are not already set. * "Never", which specifies that the IngressController never sets the headers, preserving any existing headers. By default, the policy is "Append". headerNameCaseAdjustments `` headerNameCaseAdjustments specifies case adjustments that can be applied to HTTP header names. Each adjustment is specified as an HTTP header name with the desired capitalization. For example, specifying "X-Forwarded-For" indicates that the "x-forwarded-for" HTTP header should be adjusted to have the specified capitalization. These adjustments are only applied to cleartext, edge-terminated, and re-encrypt routes, and only when using HTTP/1. For request headers, these adjustments are applied only for routes that have the haproxy.router.openshift.io/h1-adjust-case=true annotation. For response headers, these adjustments are applied to all HTTP responses. If this field is empty, no request headers are adjusted. uniqueId object uniqueId describes configuration for a custom HTTP header that the ingress controller should inject into incoming HTTP requests. Typically, this header is configured to have a value that is unique to the HTTP request. The header can be used by applications or included in access logs to facilitate tracing individual HTTP requests. If this field is empty, no such header is injected into requests. 15.1.19. .spec.httpHeaders.actions Description actions specifies options for modifying headers and their values. Note that this option only applies to cleartext HTTP connections and to secure HTTP connections for which the ingress controller terminates encryption (that is, edge-terminated or reencrypt connections). Headers cannot be modified for TLS passthrough connections. Setting the HSTS ( Strict-Transport-Security ) header is not supported via actions. Strict-Transport-Security may only be configured using the "haproxy.router.openshift.io/hsts_header" route annotation, and only in accordance with the policy specified in Ingress.Spec.RequiredHSTSPolicies. Any actions defined here are applied after any actions related to the following other fields: cache-control, spec.clientTLS, spec.httpHeaders.forwardedHeaderPolicy, spec.httpHeaders.uniqueId, and spec.httpHeaders.headerNameCaseAdjustments. In case of HTTP request headers, the actions specified in spec.httpHeaders.actions on the Route will be executed after the actions specified in the IngressController's spec.httpHeaders.actions field. In case of HTTP response headers, the actions specified in spec.httpHeaders.actions on the IngressController will be executed after the actions specified in the Route's spec.httpHeaders.actions field. Headers set using this API cannot be captured for use in access logs. The following header names are reserved and may not be modified via this API: Strict-Transport-Security, Proxy, Host, Cookie, Set-Cookie. 
Note that the total size of all net added headers after interpolating dynamic values must not exceed the value of spec.tuningOptions.headerBufferMaxRewriteBytes on the IngressController. Please refer to the documentation for that API field for more details. Type object Property Type Description request array request is a list of HTTP request headers to modify. Actions defined here will modify the request headers of all requests passing through an ingress controller. These actions are applied to all Routes i.e. for all connections handled by the ingress controller defined within a cluster. IngressController actions for request headers will be executed before Route actions. Currently, actions may define to either Set or Delete headers values. Actions are applied in sequence as defined in this list. A maximum of 20 request header actions may be configured. Sample fetchers allowed are "req.hdr" and "ssl_c_der". Converters allowed are "lower" and "base64". Example header values: "%[req.hdr(X-target),lower]", "%{+Q}[ssl_c_der,base64]". request[] object IngressControllerHTTPHeader specifies configuration for setting or deleting an HTTP header. response array response is a list of HTTP response headers to modify. Actions defined here will modify the response headers of all requests passing through an ingress controller. These actions are applied to all Routes i.e. for all connections handled by the ingress controller defined within a cluster. IngressController actions for response headers will be executed after Route actions. Currently, actions may define to either Set or Delete headers values. Actions are applied in sequence as defined in this list. A maximum of 20 response header actions may be configured. Sample fetchers allowed are "res.hdr" and "ssl_c_der". Converters allowed are "lower" and "base64". Example header values: "%[res.hdr(X-target),lower]", "%{+Q}[ssl_c_der,base64]". response[] object IngressControllerHTTPHeader specifies configuration for setting or deleting an HTTP header. 15.1.20. .spec.httpHeaders.actions.request Description request is a list of HTTP request headers to modify. Actions defined here will modify the request headers of all requests passing through an ingress controller. These actions are applied to all Routes i.e. for all connections handled by the ingress controller defined within a cluster. IngressController actions for request headers will be executed before Route actions. Currently, actions may define to either Set or Delete headers values. Actions are applied in sequence as defined in this list. A maximum of 20 request header actions may be configured. Sample fetchers allowed are "req.hdr" and "ssl_c_der". Converters allowed are "lower" and "base64". Example header values: "%[req.hdr(X-target),lower]", "%{+Q}[ssl_c_der,base64]". Type array 15.1.21. .spec.httpHeaders.actions.request[] Description IngressControllerHTTPHeader specifies configuration for setting or deleting an HTTP header. Type object Required action name Property Type Description action object action specifies actions to perform on headers, such as setting or deleting headers. name string name specifies the name of a header on which to perform an action. Its value must be a valid HTTP header name as defined in RFC 2616 section 4.2. The name must consist only of alphanumeric and the following special characters, "-!#USD%&'*+.^_`". The following header names are reserved and may not be modified via this API: Strict-Transport-Security, Proxy, Host, Cookie, Set-Cookie. 
It must be no more than 255 characters in length. Header name must be unique. 15.1.22. .spec.httpHeaders.actions.request[].action Description action specifies actions to perform on headers, such as setting or deleting headers. Type object Required type Property Type Description set object set specifies how the HTTP header should be set. This field is required when type is Set and forbidden otherwise. type string type defines the type of the action to be applied on the header. Possible values are Set or Delete. Set allows you to set HTTP request and response headers. Delete allows you to delete HTTP request and response headers. 15.1.23. .spec.httpHeaders.actions.request[].action.set Description set specifies how the HTTP header should be set. This field is required when type is Set and forbidden otherwise. Type object Required value Property Type Description value string value specifies a header value. Dynamic values can be added. The value will be interpreted as an HAProxy format string as defined in http://cbonte.github.io/haproxy-dconv/2.6/configuration.html#8.2.6 and may use HAProxy's %[] syntax and otherwise must be a valid HTTP header value as defined in https://datatracker.ietf.org/doc/html/rfc7230#section-3.2 . The value of this field must be no more than 16384 characters in length. Note that the total size of all net added headers after interpolating dynamic values must not exceed the value of spec.tuningOptions.headerBufferMaxRewriteBytes on the IngressController. 15.1.24. .spec.httpHeaders.actions.response Description response is a list of HTTP response headers to modify. Actions defined here will modify the response headers of all requests passing through an ingress controller. These actions are applied to all Routes i.e. for all connections handled by the ingress controller defined within a cluster. IngressController actions for response headers will be executed after Route actions. Currently, actions may define to either Set or Delete headers values. Actions are applied in sequence as defined in this list. A maximum of 20 response header actions may be configured. Sample fetchers allowed are "res.hdr" and "ssl_c_der". Converters allowed are "lower" and "base64". Example header values: "%[res.hdr(X-target),lower]", "%{+Q}[ssl_c_der,base64]". Type array 15.1.25. .spec.httpHeaders.actions.response[] Description IngressControllerHTTPHeader specifies configuration for setting or deleting an HTTP header. Type object Required action name Property Type Description action object action specifies actions to perform on headers, such as setting or deleting headers. name string name specifies the name of a header on which to perform an action. Its value must be a valid HTTP header name as defined in RFC 2616 section 4.2. The name must consist only of alphanumeric and the following special characters, "-!#USD%&'*+.^_`". The following header names are reserved and may not be modified via this API: Strict-Transport-Security, Proxy, Host, Cookie, Set-Cookie. It must be no more than 255 characters in length. Header name must be unique. 15.1.26. .spec.httpHeaders.actions.response[].action Description action specifies actions to perform on headers, such as setting or deleting headers. Type object Required type Property Type Description set object set specifies how the HTTP header should be set. This field is required when type is Set and forbidden otherwise. type string type defines the type of the action to be applied on the header. Possible values are Set or Delete. 
Set allows you to set HTTP request and response headers. Delete allows you to delete HTTP request and response headers. 15.1.27. .spec.httpHeaders.actions.response[].action.set Description set specifies how the HTTP header should be set. This field is required when type is Set and forbidden otherwise. Type object Required value Property Type Description value string value specifies a header value. Dynamic values can be added. The value will be interpreted as an HAProxy format string as defined in http://cbonte.github.io/haproxy-dconv/2.6/configuration.html#8.2.6 and may use HAProxy's %[] syntax and otherwise must be a valid HTTP header value as defined in https://datatracker.ietf.org/doc/html/rfc7230#section-3.2 . The value of this field must be no more than 16384 characters in length. Note that the total size of all net added headers after interpolating dynamic values must not exceed the value of spec.tuningOptions.headerBufferMaxRewriteBytes on the IngressController. 15.1.28. .spec.httpHeaders.uniqueId Description uniqueId describes configuration for a custom HTTP header that the ingress controller should inject into incoming HTTP requests. Typically, this header is configured to have a value that is unique to the HTTP request. The header can be used by applications or included in access logs to facilitate tracing individual HTTP requests. If this field is empty, no such header is injected into requests. Type object Property Type Description format string format specifies the format for the injected HTTP header's value. This field has no effect unless name is specified. For the HAProxy-based ingress controller implementation, this format uses the same syntax as the HTTP log format. If the field is empty, the default value is "%{+X}o\\ %ci:%cp_%fi:%fp_%Ts_%rt:%pid"; see the corresponding HAProxy documentation: http://cbonte.github.io/haproxy-dconv/2.0/configuration.html#8.2.3 name string name specifies the name of the HTTP header (for example, "unique-id") that the ingress controller should inject into HTTP requests. The field's value must be a valid HTTP header name as defined in RFC 2616 section 4.2. If the field is empty, no header is injected. 15.1.29. .spec.logging Description logging defines parameters for what should be logged where. If this field is empty, operational logs are enabled but access logs are disabled. Type object Property Type Description access object access describes how the client requests should be logged. If this field is empty, access logging is disabled. 15.1.30. .spec.logging.access Description access describes how the client requests should be logged. If this field is empty, access logging is disabled. Type object Required destination Property Type Description destination object destination is where access logs go. httpCaptureCookies `` httpCaptureCookies specifies HTTP cookies that should be captured in access logs. If this field is empty, no cookies are captured. httpCaptureHeaders object httpCaptureHeaders defines HTTP headers that should be captured in access logs. If this field is empty, no headers are captured. Note that this option only applies to cleartext HTTP connections and to secure HTTP connections for which the ingress controller terminates encryption (that is, edge-terminated or reencrypt connections). Headers cannot be captured for TLS passthrough connections. httpLogFormat string httpLogFormat specifies the format of the log message for an HTTP request. If this field is empty, log messages use the implementation's default HTTP log format. 
For HAProxy's default HTTP log format, see the HAProxy documentation: http://cbonte.github.io/haproxy-dconv/2.0/configuration.html#8.2.3 Note that this format only applies to cleartext HTTP connections and to secure HTTP connections for which the ingress controller terminates encryption (that is, edge-terminated or reencrypt connections). It does not affect the log format for TLS passthrough connections. logEmptyRequests string logEmptyRequests specifies how connections on which no request is received should be logged. Typically, these empty requests come from load balancers' health probes or Web browsers' speculative connections ("preconnect"), in which case logging these requests may be undesirable. However, these requests may also be caused by network errors, in which case logging empty requests may be useful for diagnosing the errors. In addition, these requests may be caused by port scans, in which case logging empty requests may aid in detecting intrusion attempts. Allowed values for this field are "Log" and "Ignore". The default value is "Log". 15.1.31. .spec.logging.access.destination Description destination is where access logs go. Type object Required type Property Type Description container object container holds parameters for the Container logging destination. Present only if type is Container. syslog object syslog holds parameters for a syslog endpoint. Present only if type is Syslog. type string type is the type of destination for logs. It must be one of the following: * Container The ingress operator configures the sidecar container named "logs" on the ingress controller pod and configures the ingress controller to write logs to the sidecar. The logs are then available as container logs. The expectation is that the administrator configures a custom logging solution that reads logs from this sidecar. Note that using container logs means that logs may be dropped if the rate of logs exceeds the container runtime's or the custom logging solution's capacity. * Syslog Logs are sent to a syslog endpoint. The administrator must specify an endpoint that can receive syslog messages. The expectation is that the administrator has configured a custom syslog instance. 15.1.32. .spec.logging.access.destination.container Description container holds parameters for the Container logging destination. Present only if type is Container. Type object Property Type Description maxLength integer maxLength is the maximum length of the log message. Valid values are integers in the range 480 to 8192, inclusive. When omitted, the default value is 1024. 15.1.33. .spec.logging.access.destination.syslog Description syslog holds parameters for a syslog endpoint. Present only if type is Syslog. Type object Required address port Property Type Description address string address is the IP address of the syslog endpoint that receives log messages. facility string facility specifies the syslog facility of log messages. If this field is empty, the facility is "local1". maxLength integer maxLength is the maximum length of the log message. Valid values are integers in the range 480 to 4096, inclusive. When omitted, the default value is 1024. port integer port is the UDP port number of the syslog endpoint that receives log messages. 15.1.34. .spec.logging.access.httpCaptureHeaders Description httpCaptureHeaders defines HTTP headers that should be captured in access logs. If this field is empty, no headers are captured. 
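As a sketch of the access logging fields described above, the following spec fragment sends access logs to a syslog receiver and captures one request header. The syslog address (a documentation range address), port, and the captured header are illustrative assumptions, not defaults, and the header capture entry format ({name, maxLength}) is assumed from the broader API.

spec:
  logging:
    access:
      destination:
        type: Syslog
        syslog:
          address: 192.0.2.10
          port: 10514
          facility: local1
          maxLength: 1024
      httpCaptureHeaders:
        request:
        - name: Host
          maxLength: 128
      logEmptyRequests: Ignore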
Note that this option only applies to cleartext HTTP connections and to secure HTTP connections for which the ingress controller terminates encryption (that is, edge-terminated or reencrypt connections). Headers cannot be captured for TLS passthrough connections. Type object Property Type Description request `` request specifies which HTTP request headers to capture. If this field is empty, no request headers are captured. response `` response specifies which HTTP response headers to capture. If this field is empty, no response headers are captured. 15.1.35. .spec.namespaceSelector Description namespaceSelector is used to filter the set of namespaces serviced by the ingress controller. This is useful for implementing shards. If unset, the default is no filtering. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 15.1.36. .spec.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 15.1.37. .spec.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 15.1.38. .spec.nodePlacement Description nodePlacement enables explicit control over the scheduling of the ingress controller. If unset, defaults are used. See NodePlacement for more details. Type object Property Type Description nodeSelector object nodeSelector is the node selector applied to ingress controller deployments. If set, the specified selector is used and replaces the default. If unset, the default depends on the value of the defaultPlacement field in the cluster config.openshift.io/v1/ingresses status. When defaultPlacement is Workers, the default is: kubernetes.io/os: linux node-role.kubernetes.io/worker: '' When defaultPlacement is ControlPlane, the default is: kubernetes.io/os: linux node-role.kubernetes.io/master: '' These defaults are subject to change. Note that using nodeSelector.matchExpressions is not supported. Only nodeSelector.matchLabels may be used. This is a limitation of the Kubernetes API: the pod spec does not allow complex expressions for node selectors. tolerations array tolerations is a list of tolerations applied to ingress controller deployments. The default is an empty list. 
See https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ tolerations[] object The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. 15.1.39. .spec.nodePlacement.nodeSelector Description nodeSelector is the node selector applied to ingress controller deployments. If set, the specified selector is used and replaces the default. If unset, the default depends on the value of the defaultPlacement field in the cluster config.openshift.io/v1/ingresses status. When defaultPlacement is Workers, the default is: kubernetes.io/os: linux node-role.kubernetes.io/worker: '' When defaultPlacement is ControlPlane, the default is: kubernetes.io/os: linux node-role.kubernetes.io/master: '' These defaults are subject to change. Note that using nodeSelector.matchExpressions is not supported. Only nodeSelector.matchLabels may be used. This is a limitation of the Kubernetes API: the pod spec does not allow complex expressions for node selectors. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 15.1.40. .spec.nodePlacement.nodeSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 15.1.41. .spec.nodePlacement.nodeSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 15.1.42. .spec.nodePlacement.tolerations Description tolerations is a list of tolerations applied to ingress controller deployments. The default is an empty list. See https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ Type array 15.1.43. .spec.nodePlacement.tolerations[] Description The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. Type object Property Type Description effect string Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. key string Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. operator string Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. 
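A minimal nodePlacement sketch based on the fields above could pin ingress controller pods to infrastructure nodes. The infra node label and taint key are assumptions about how the cluster is labeled and tainted.

spec:
  nodePlacement:
    nodeSelector:
      matchLabels:
        kubernetes.io/os: linux
        node-role.kubernetes.io/infra: ""
    tolerations:
    - key: node-role.kubernetes.io/infra
      operator: Exists
      effect: NoSchedule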
Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. tolerationSeconds integer TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. value string Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. 15.1.44. .spec.routeAdmission Description routeAdmission defines a policy for handling new route claims (for example, to allow or deny claims across namespaces). If empty, defaults will be applied. See specific routeAdmission fields for details about their defaults. Type object Property Type Description namespaceOwnership string namespaceOwnership describes how host name claims across namespaces should be handled. Value must be one of: - Strict: Do not allow routes in different namespaces to claim the same host. - InterNamespaceAllowed: Allow routes to claim different paths of the same host name across namespaces. If empty, the default is Strict. wildcardPolicy string wildcardPolicy describes how routes with wildcard policies should be handled for the ingress controller. WildcardPolicy controls use of routes [1] exposed by the ingress controller based on the route's wildcard policy. [1] https://github.com/openshift/api/blob/master/route/v1/types.go Note: Updating WildcardPolicy from WildcardsAllowed to WildcardsDisallowed will cause admitted routes with a wildcard policy of Subdomain to stop working. These routes must be updated to a wildcard policy of None to be readmitted by the ingress controller. WildcardPolicy supports WildcardsAllowed and WildcardsDisallowed values. If empty, defaults to "WildcardsDisallowed". 15.1.45. .spec.routeSelector Description routeSelector is used to filter the set of Routes serviced by the ingress controller. This is useful for implementing shards. If unset, the default is no filtering. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 15.1.46. .spec.routeSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 15.1.47. .spec.routeSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. 
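For example, an ingress controller shard that admits only routes labeled type=sharded, and that allows different namespaces to claim paths of the same host, could combine routeAdmission and routeSelector as sketched below; the label key and value are illustrative assumptions.

spec:
  routeAdmission:
    namespaceOwnership: InterNamespaceAllowed
    wildcardPolicy: WildcardsDisallowed
  routeSelector:
    matchExpressions:
    - key: type
      operator: In
      values:
      - sharded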
This array is replaced during a strategic merge patch. 15.1.48. .spec.tlsSecurityProfile Description tlsSecurityProfile specifies settings for TLS connections for ingresscontrollers. If unset, the default is based on the apiservers.config.openshift.io/cluster resource. Note that when using the Old, Intermediate, and Modern profile types, the effective profile configuration is subject to change between releases. For example, given a specification to use the Intermediate profile deployed on release X.Y.Z, an upgrade to release X.Y.Z+1 may cause a new profile configuration to be applied to the ingress controller, resulting in a rollout. Type object Property Type Description custom `` custom is a user-defined TLS security profile. Be extremely careful using a custom profile as invalid configurations can be catastrophic. An example custom profile looks like this: ciphers: - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: TLSv1.1 intermediate `` intermediate is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Intermediate_compatibility_.28recommended.29 and looks like this (yaml): ciphers: - TLS_AES_128_GCM_SHA256 - TLS_AES_256_GCM_SHA384 - TLS_CHACHA20_POLY1305_SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES256-GCM-SHA384 - ECDHE-RSA-AES256-GCM-SHA384 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - DHE-RSA-AES128-GCM-SHA256 - DHE-RSA-AES256-GCM-SHA384 minTLSVersion: TLSv1.2 modern `` modern is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Modern_compatibility and looks like this (yaml): ciphers: - TLS_AES_128_GCM_SHA256 - TLS_AES_256_GCM_SHA384 - TLS_CHACHA20_POLY1305_SHA256 minTLSVersion: TLSv1.3 NOTE: Currently unsupported. old `` old is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Old_backward_compatibility and looks like this (yaml): ciphers: - TLS_AES_128_GCM_SHA256 - TLS_AES_256_GCM_SHA384 - TLS_CHACHA20_POLY1305_SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES256-GCM-SHA384 - ECDHE-RSA-AES256-GCM-SHA384 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - DHE-RSA-AES128-GCM-SHA256 - DHE-RSA-AES256-GCM-SHA384 - DHE-RSA-CHACHA20-POLY1305 - ECDHE-ECDSA-AES128-SHA256 - ECDHE-RSA-AES128-SHA256 - ECDHE-ECDSA-AES128-SHA - ECDHE-RSA-AES128-SHA - ECDHE-ECDSA-AES256-SHA384 - ECDHE-RSA-AES256-SHA384 - ECDHE-ECDSA-AES256-SHA - ECDHE-RSA-AES256-SHA - DHE-RSA-AES128-SHA256 - DHE-RSA-AES256-SHA256 - AES128-GCM-SHA256 - AES256-GCM-SHA384 - AES128-SHA256 - AES256-SHA256 - AES128-SHA - AES256-SHA - DES-CBC3-SHA minTLSVersion: TLSv1.0 type string type is one of Old, Intermediate, Modern or Custom. Custom provides the ability to specify individual TLS security profile parameters. Old, Intermediate and Modern are TLS security profiles based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Recommended_configurations The profiles are intent based, so they may change over time as new ciphers are developed and existing ciphers are found to be insecure. Depending on precisely which ciphers are available to a process, the list may be reduced. Note that the Modern profile is currently not supported because it is not yet well adopted by common software libraries. 15.1.49. .spec.tuningOptions Description tuningOptions defines parameters for adjusting the performance of ingress controller pods. 
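As a sketch of the tlsSecurityProfile settings above, a Custom profile could be declared as follows. The cipher list and minimum TLS version are illustrative choices, and the VersionTLS12 enum value is an assumption based on the note about allowed minTLSVersion values later in this chapter.

spec:
  tlsSecurityProfile:
    type: Custom
    custom:
      ciphers:
      - ECDHE-ECDSA-CHACHA20-POLY1305
      - ECDHE-RSA-CHACHA20-POLY1305
      - ECDHE-RSA-AES128-GCM-SHA256
      - ECDHE-ECDSA-AES128-GCM-SHA256
      minTLSVersion: VersionTLS12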
All fields are optional and will use their respective defaults if not set. See specific tuningOptions fields for more details. Setting fields within tuningOptions is generally not recommended. The default values are suitable for most configurations. Type object Property Type Description clientFinTimeout string clientFinTimeout defines how long a connection will be held open while waiting for the client response to the server/backend closing the connection. If unset, the default timeout is 1s clientTimeout string clientTimeout defines how long a connection will be held open while waiting for a client response. If unset, the default timeout is 30s connectTimeout string ConnectTimeout defines the maximum time to wait for a connection attempt to a server/backend to succeed. This field expects an unsigned duration string of decimal numbers, each with optional fraction and a unit suffix, e.g. "300ms", "1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs" U+00B5 or "μs" U+03BC), "ms", "s", "m", "h". When omitted, this means the user has no opinion and the platform is left to choose a reasonable default. This default is subject to change over time. The current default is 5s. headerBufferBytes integer headerBufferBytes describes how much memory should be reserved (in bytes) for IngressController connection sessions. Note that this value must be at least 16384 if HTTP/2 is enabled for the IngressController ( https://tools.ietf.org/html/rfc7540 ). If this field is empty, the IngressController will use a default value of 32768 bytes. Setting this field is generally not recommended as headerBufferBytes values that are too small may break the IngressController and headerBufferBytes values that are too large could cause the IngressController to use significantly more memory than necessary. headerBufferMaxRewriteBytes integer headerBufferMaxRewriteBytes describes how much memory should be reserved (in bytes) from headerBufferBytes for HTTP header rewriting and appending for IngressController connection sessions. Note that incoming HTTP requests will be limited to (headerBufferBytes - headerBufferMaxRewriteBytes) bytes, meaning headerBufferBytes must be greater than headerBufferMaxRewriteBytes. If this field is empty, the IngressController will use a default value of 8192 bytes. Setting this field is generally not recommended as headerBufferMaxRewriteBytes values that are too small may break the IngressController and headerBufferMaxRewriteBytes values that are too large could cause the IngressController to use significantly more memory than necessary. healthCheckInterval string healthCheckInterval defines how long the router waits between two consecutive health checks on its configured backends. This value is applied globally as a default for all routes, but may be overridden per-route by the route annotation "router.openshift.io/haproxy.health.check.interval". Expects an unsigned duration string of decimal numbers, each with optional fraction and a unit suffix, e.g. "300ms", "1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs" U+00B5 or "μs" U+03BC), "ms", "s", "m", "h". Setting this to less than 5s can cause excess traffic due to too frequent TCP health checks and accompanying SYN packet storms. Alternatively, setting this too high can result in increased latency, due to backend servers that are no longer available, but haven't yet been detected as such. An empty or zero healthCheckInterval means no opinion and IngressController chooses a default, which is subject to change over time.
Currently the default healthCheckInterval value is 5s. Currently the minimum allowed value is 1s and the maximum allowed value is 2147483647ms (24.85 days). Both are subject to change over time. maxConnections integer maxConnections defines the maximum number of simultaneous connections that can be established per HAProxy process. Increasing this value allows each ingress controller pod to handle more connections but at the cost of additional system resources being consumed. Permitted values are: empty, 0, -1, and the range 2000-2000000. If this field is empty or 0, the IngressController will use the default value of 50000, but the default is subject to change in future releases. If the value is -1 then HAProxy will dynamically compute a maximum value based on the available ulimits in the running container. Selecting -1 (i.e., auto) will result in a large value being computed (~520000 on OpenShift >=4.10 clusters) and therefore each HAProxy process will incur significant memory usage compared to the current default of 50000. Setting a value that is greater than the current operating system limit will prevent the HAProxy process from starting. If you choose a discrete value (e.g., 750000) and the router pod is migrated to a new node, there's no guarantee that that new node has identical ulimits configured. In such a scenario the pod would fail to start. If you have nodes with different ulimits configured (e.g., different tuned profiles) and you choose a discrete value then the guidance is to use -1 and let the value be computed dynamically at runtime. You can monitor memory usage for router containers with the following metric: 'container_memory_working_set_bytes{container="router",namespace="openshift-ingress"}'. You can monitor memory usage of individual HAProxy processes in router containers with the following metric: 'container_memory_working_set_bytes{container="router",namespace="openshift-ingress"}/container_processes{container="router",namespace="openshift-ingress"}'. reloadInterval string reloadInterval defines the minimum interval at which the router is allowed to reload to accept new changes. Increasing this value can prevent the accumulation of HAProxy processes, depending on the scenario. Increasing this interval can also lessen load imbalance on a backend's servers when using the roundrobin balancing algorithm. Alternatively, decreasing this value may decrease latency since updates to HAProxy's configuration can take effect more quickly. The value must be a time duration value; see https://pkg.go.dev/time#ParseDuration . Currently, the minimum value allowed is 1s, and the maximum allowed value is 120s. Minimum and maximum allowed values may change in future versions of OpenShift. Note that if a duration outside of these bounds is provided, the value of reloadInterval will be capped/floored and not rejected (e.g. a duration of over 120s will be capped to 120s; the IngressController will not reject and replace this disallowed value with the default). A zero value for reloadInterval tells the IngressController to choose the default, which is currently 5s and subject to change without notice. This field expects an unsigned duration string of decimal numbers, each with optional fraction and a unit suffix, e.g. "300ms", "1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs" U+00B5 or "μs" U+03BC), "ms", "s", "m", "h". Note: Setting a value significantly larger than the default of 5s can cause latency in observing updates to routes and their endpoints.
HAProxy's configuration will be reloaded less frequently, and newly created routes will not be served until the subsequent reload. serverFinTimeout string serverFinTimeout defines how long a connection will be held open while waiting for the server/backend response to the client closing the connection. If unset, the default timeout is 1s serverTimeout string serverTimeout defines how long a connection will be held open while waiting for a server/backend response. If unset, the default timeout is 30s threadCount integer threadCount defines the number of threads created per HAProxy process. Creating more threads allows each ingress controller pod to handle more connections, at the cost of more system resources being used. HAProxy currently supports up to 64 threads. If this field is empty, the IngressController will use the default value. The current default is 4 threads, but this may change in future releases. Setting this field is generally not recommended. Increasing the number of HAProxy threads allows ingress controller pods to utilize more CPU time under load, potentially starving other pods if set too high. Reducing the number of threads may cause the ingress controller to perform poorly. tlsInspectDelay string tlsInspectDelay defines how long the router can hold data to find a matching route. Setting this too short can cause the router to fall back to the default certificate for edge-terminated or reencrypt routes even when a better matching certificate could be used. If unset, the default inspect delay is 5s tunnelTimeout string tunnelTimeout defines how long a tunnel connection (including websockets) will be held open while the tunnel is idle. If unset, the default timeout is 1h 15.1.50. .status Description status is the most recently observed status of the IngressController. Type object Property Type Description availableReplicas integer availableReplicas is number of observed available replicas according to the ingress controller deployment. conditions array conditions is a list of conditions and their status. Available means the ingress controller deployment is available and servicing route and ingress resources (i.e, .status.availableReplicas equals .spec.replicas) There are additional conditions which indicate the status of other ingress controller features and capabilities. * LoadBalancerManaged - True if the following conditions are met: * The endpoint publishing strategy requires a service load balancer. - False if any of those conditions are unsatisfied. * LoadBalancerReady - True if the following conditions are met: * A load balancer is managed. * The load balancer is ready. - False if any of those conditions are unsatisfied. * DNSManaged - True if the following conditions are met: * The endpoint publishing strategy and platform support DNS. * The ingress controller domain is set. * dns.config.openshift.io/cluster configures DNS zones. - False if any of those conditions are unsatisfied. * DNSReady - True if the following conditions are met: * DNS is managed. * DNS records have been successfully created. - False if any of those conditions are unsatisfied. conditions[] object OperatorCondition is just the standard condition fields. domain string domain is the actual domain in use. endpointPublishingStrategy object endpointPublishingStrategy is the actual strategy in use. namespaceSelector object namespaceSelector is the actual namespaceSelector in use. observedGeneration integer observedGeneration is the most recent generation observed. 
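Drawing on the tuningOptions fields described above, a spec fragment might look like the following. The values are arbitrary illustrations that fall within the documented ranges; they are not recommendations.

spec:
  tuningOptions:
    clientTimeout: 45s
    serverTimeout: 45s
    connectTimeout: 10s
    healthCheckInterval: 10s
    reloadInterval: 15s
    threadCount: 8
    maxConnections: 100000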
routeSelector object routeSelector is the actual routeSelector in use. selector string selector is a label selector, in string format, for ingress controller pods corresponding to the IngressController. The number of matching pods should equal the value of availableReplicas. tlsProfile object tlsProfile is the TLS connection configuration that is in effect. 15.1.51. .status.conditions Description conditions is a list of conditions and their status. Available means the ingress controller deployment is available and servicing route and ingress resources (i.e, .status.availableReplicas equals .spec.replicas) There are additional conditions which indicate the status of other ingress controller features and capabilities. * LoadBalancerManaged - True if the following conditions are met: * The endpoint publishing strategy requires a service load balancer. - False if any of those conditions are unsatisfied. * LoadBalancerReady - True if the following conditions are met: * A load balancer is managed. * The load balancer is ready. - False if any of those conditions are unsatisfied. * DNSManaged - True if the following conditions are met: * The endpoint publishing strategy and platform support DNS. * The ingress controller domain is set. * dns.config.openshift.io/cluster configures DNS zones. - False if any of those conditions are unsatisfied. * DNSReady - True if the following conditions are met: * DNS is managed. * DNS records have been successfully created. - False if any of those conditions are unsatisfied. Type array 15.1.52. .status.conditions[] Description OperatorCondition is just the standard condition fields. Type object Property Type Description lastTransitionTime string message string reason string status string type string 15.1.53. .status.endpointPublishingStrategy Description endpointPublishingStrategy is the actual strategy in use. Type object Required type Property Type Description hostNetwork object hostNetwork holds parameters for the HostNetwork endpoint publishing strategy. Present only if type is HostNetwork. loadBalancer object loadBalancer holds parameters for the load balancer. Present only if type is LoadBalancerService. nodePort object nodePort holds parameters for the NodePortService endpoint publishing strategy. Present only if type is NodePortService. private object private holds parameters for the Private endpoint publishing strategy. Present only if type is Private. type string type is the publishing strategy to use. Valid values are: * LoadBalancerService Publishes the ingress controller using a Kubernetes LoadBalancer Service. In this configuration, the ingress controller deployment uses container networking. A LoadBalancer Service is created to publish the deployment. See: https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer If domain is set, a wildcard DNS record will be managed to point at the LoadBalancer Service's external name. DNS records are managed only in DNS zones defined by dns.config.openshift.io/cluster .spec.publicZone and .spec.privateZone. Wildcard DNS management is currently supported only on the AWS, Azure, and GCP platforms. * HostNetwork Publishes the ingress controller on node ports where the ingress controller is deployed. In this configuration, the ingress controller deployment uses host networking, bound to node ports 80 and 443. The user is responsible for configuring an external load balancer to publish the ingress controller via the node ports. * Private Does not publish the ingress controller. 
In this configuration, the ingress controller deployment uses container networking, and is not explicitly published. The user must manually publish the ingress controller. * NodePortService Publishes the ingress controller using a Kubernetes NodePort Service. In this configuration, the ingress controller deployment uses container networking. A NodePort Service is created to publish the deployment. The specific node ports are dynamically allocated by OpenShift; however, to support static port allocations, user changes to the node port field of the managed NodePort Service will be preserved. 15.1.54. .status.endpointPublishingStrategy.hostNetwork Description hostNetwork holds parameters for the HostNetwork endpoint publishing strategy. Present only if type is HostNetwork. Type object Property Type Description httpPort integer httpPort is the port on the host which should be used to listen for HTTP requests. This field should be set when port 80 is already in use. The value should not coincide with the NodePort range of the cluster. When the value is 0 or is not specified it defaults to 80. httpsPort integer httpsPort is the port on the host which should be used to listen for HTTPS requests. This field should be set when port 443 is already in use. The value should not coincide with the NodePort range of the cluster. When the value is 0 or is not specified it defaults to 443. protocol string protocol specifies whether the IngressController expects incoming connections to use plain TCP or whether the IngressController expects PROXY protocol. PROXY protocol can be used with load balancers that support it to communicate the source addresses of client connections when forwarding those connections to the IngressController. Using PROXY protocol enables the IngressController to report those source addresses instead of reporting the load balancer's address in HTTP headers and logs. Note that enabling PROXY protocol on the IngressController will cause connections to fail if you are not using a load balancer that uses PROXY protocol to forward connections to the IngressController. See http://www.haproxy.org/download/2.2/doc/proxy-protocol.txt for information about PROXY protocol. The following values are valid for this field: * The empty string. * "TCP". * "PROXY". The empty string specifies the default, which is TCP without PROXY protocol. Note that the default is subject to change. statsPort integer statsPort is the port on the host where the stats from the router are published. The value should not coincide with the NodePort range of the cluster. If an external load balancer is configured to forward connections to this IngressController, the load balancer should use this port for health checks. The load balancer can send HTTP probes on this port on a given node, with the path /healthz/ready to determine if the ingress controller is ready to receive traffic on the node. For proper operation the load balancer must not forward traffic to a node until the health check reports ready. The load balancer should also stop forwarding requests within a maximum of 45 seconds after /healthz/ready starts reporting not-ready. Probing every 5 to 10 seconds, with a 5-second timeout and with a threshold of two successful or failed requests to become healthy or unhealthy respectively, are well-tested values. When the value is 0 or is not specified it defaults to 1936. 15.1.55. .status.endpointPublishingStrategy.loadBalancer Description loadBalancer holds parameters for the load balancer.
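The hostNetwork strategy described above is set through spec.endpointPublishingStrategy, which uses the same structure as this status field. A sketch with the default ports and PROXY protocol enabled follows; it assumes the external load balancer actually forwards connections with PROXY protocol.

spec:
  endpointPublishingStrategy:
    type: HostNetwork
    hostNetwork:
      httpPort: 80
      httpsPort: 443
      statsPort: 1936
      protocol: PROXY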
Present only if type is LoadBalancerService. Type object Required dnsManagementPolicy scope Property Type Description allowedSourceRanges `` allowedSourceRanges specifies an allowlist of IP address ranges to which access to the load balancer should be restricted. Each range must be specified using CIDR notation (e.g. "10.0.0.0/8" or "fd00::/8"). If no range is specified, "0.0.0.0/0" for IPv4 and "::/0" for IPv6 are used by default, which allows all source addresses. To facilitate migration from earlier versions of OpenShift that did not have the allowedSourceRanges field, you may set the service.beta.kubernetes.io/load-balancer-source-ranges annotation on the "router-<ingresscontroller name>" service in the "openshift-ingress" namespace, and this annotation will take effect if allowedSourceRanges is empty on OpenShift 4.12. dnsManagementPolicy string dnsManagementPolicy indicates if the lifecycle of the wildcard DNS record associated with the load balancer service will be managed by the ingress operator. It defaults to Managed. Valid values are: Managed and Unmanaged. providerParameters object providerParameters holds desired load balancer information specific to the underlying infrastructure provider. If empty, defaults will be applied. See specific providerParameters fields for details about their defaults. scope string scope indicates the scope at which the load balancer is exposed. Possible values are "External" and "Internal". 15.1.56. .status.endpointPublishingStrategy.loadBalancer.providerParameters Description providerParameters holds desired load balancer information specific to the underlying infrastructure provider. If empty, defaults will be applied. See specific providerParameters fields for details about their defaults. Type object Required type Property Type Description aws object aws provides configuration settings that are specific to AWS load balancers. If empty, defaults will be applied. See specific aws fields for details about their defaults. gcp object gcp provides configuration settings that are specific to GCP load balancers. If empty, defaults will be applied. See specific gcp fields for details about their defaults. ibm object ibm provides configuration settings that are specific to IBM Cloud load balancers. If empty, defaults will be applied. See specific ibm fields for details about their defaults. type string type is the underlying infrastructure provider for the load balancer. Allowed values are "AWS", "Azure", "BareMetal", "GCP", "IBM", "Nutanix", "OpenStack", and "VSphere". 15.1.57. .status.endpointPublishingStrategy.loadBalancer.providerParameters.aws Description aws provides configuration settings that are specific to AWS load balancers. If empty, defaults will be applied. See specific aws fields for details about their defaults. Type object Required type Property Type Description classicLoadBalancer object classicLoadBalancerParameters holds configuration parameters for an AWS classic load balancer. Present only if type is Classic. networkLoadBalancer object networkLoadBalancerParameters holds configuration parameters for an AWS network load balancer. Present only if type is NLB. type string type is the type of AWS load balancer to instantiate for an ingresscontroller. Valid values are: * "Classic": A Classic Load Balancer that makes routing decisions at either the transport layer (TCP/SSL) or the application layer (HTTP/HTTPS). 
See the following for additional details: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/load-balancer-types.html#clb * "NLB": A Network Load Balancer that makes routing decisions at the transport layer (TCP/SSL). See the following for additional details: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/load-balancer-types.html#nlb 15.1.58. .status.endpointPublishingStrategy.loadBalancer.providerParameters.aws.classicLoadBalancer Description classicLoadBalancerParameters holds configuration parameters for an AWS classic load balancer. Present only if type is Classic. Type object Property Type Description connectionIdleTimeout string connectionIdleTimeout specifies the maximum time period that a connection may be idle before the load balancer closes the connection. The value must be parseable as a time duration value; see https://pkg.go.dev/time#ParseDuration . A nil or zero value means no opinion, in which case a default value is used. The default value for this field is 60s. This default is subject to change. 15.1.59. .status.endpointPublishingStrategy.loadBalancer.providerParameters.aws.networkLoadBalancer Description networkLoadBalancerParameters holds configuration parameters for an AWS network load balancer. Present only if type is NLB. Type object 15.1.60. .status.endpointPublishingStrategy.loadBalancer.providerParameters.gcp Description gcp provides configuration settings that are specific to GCP load balancers. If empty, defaults will be applied. See specific gcp fields for details about their defaults. Type object Property Type Description clientAccess string clientAccess describes how client access is restricted for internal load balancers. Valid values are: * "Global": Specifying an internal load balancer with Global client access allows clients from any region within the VPC to communicate with the load balancer. https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing#global_access * "Local": Specifying an internal load balancer with Local client access means only clients within the same region (and VPC) as the GCP load balancer can communicate with the load balancer. Note that this is the default behavior. https://cloud.google.com/load-balancing/docs/internal#client_access 15.1.61. .status.endpointPublishingStrategy.loadBalancer.providerParameters.ibm Description ibm provides configuration settings that are specific to IBM Cloud load balancers. If empty, defaults will be applied. See specific ibm fields for details about their defaults. Type object Property Type Description protocol string protocol specifies whether the load balancer uses PROXY protocol to forward connections to the IngressController. See "service.kubernetes.io/ibm-load-balancer-cloud-provider-enable-features: "proxy-protocol"" at https://cloud.ibm.com/docs/containers?topic=containers-vpc-lbaas PROXY protocol can be used with load balancers that support it to communicate the source addresses of client connections when forwarding those connections to the IngressController. Using PROXY protocol enables the IngressController to report those source addresses instead of reporting the load balancer's address in HTTP headers and logs. Note that enabling PROXY protocol on the IngressController will cause connections to fail if you are not using a load balancer that uses PROXY protocol to forward connections to the IngressController. See http://www.haproxy.org/download/2.2/doc/proxy-protocol.txt for information about PROXY protocol. 
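Combining the loadBalancer and providerParameters fields above, an AWS Network Load Balancer with a restricted source range could be requested through spec.endpointPublishingStrategy as sketched below; the CIDR range is an illustrative assumption.

spec:
  endpointPublishingStrategy:
    type: LoadBalancerService
    loadBalancer:
      scope: External
      dnsManagementPolicy: Managed
      allowedSourceRanges:
      - 10.0.0.0/8
      providerParameters:
        type: AWS
        aws:
          type: NLB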
Valid values for protocol are TCP, PROXY and omitted. When omitted, this means no opinion and the platform is left to choose a reasonable default, which is subject to change over time. The current default is TCP, without the proxy protocol enabled. 15.1.62. .status.endpointPublishingStrategy.nodePort Description nodePort holds parameters for the NodePortService endpoint publishing strategy. Present only if type is NodePortService. Type object Property Type Description protocol string protocol specifies whether the IngressController expects incoming connections to use plain TCP or whether the IngressController expects PROXY protocol. PROXY protocol can be used with load balancers that support it to communicate the source addresses of client connections when forwarding those connections to the IngressController. Using PROXY protocol enables the IngressController to report those source addresses instead of reporting the load balancer's address in HTTP headers and logs. Note that enabling PROXY protocol on the IngressController will cause connections to fail if you are not using a load balancer that uses PROXY protocol to forward connections to the IngressController. See http://www.haproxy.org/download/2.2/doc/proxy-protocol.txt for information about PROXY protocol. The following values are valid for this field: * The empty string. * "TCP". * "PROXY". The empty string specifies the default, which is TCP without PROXY protocol. Note that the default is subject to change. 15.1.63. .status.endpointPublishingStrategy.private Description private holds parameters for the Private endpoint publishing strategy. Present only if type is Private. Type object Property Type Description protocol string protocol specifies whether the IngressController expects incoming connections to use plain TCP or whether the IngressController expects PROXY protocol. PROXY protocol can be used with load balancers that support it to communicate the source addresses of client connections when forwarding those connections to the IngressController. Using PROXY protocol enables the IngressController to report those source addresses instead of reporting the load balancer's address in HTTP headers and logs. Note that enabling PROXY protocol on the IngressController will cause connections to fail if you are not using a load balancer that uses PROXY protocol to forward connections to the IngressController. See http://www.haproxy.org/download/2.2/doc/proxy-protocol.txt for information about PROXY protocol. The following values are valid for this field: * The empty string. * "TCP". * "PROXY". The empty string specifies the default, which is TCP without PROXY protocol. Note that the default is subject to change. 15.1.64. .status.namespaceSelector Description namespaceSelector is the actual namespaceSelector in use. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 15.1.65. .status.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. 
Type array 15.1.66. .status.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 15.1.67. .status.routeSelector Description routeSelector is the actual routeSelector in use. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 15.1.68. .status.routeSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 15.1.69. .status.routeSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 15.1.70. .status.tlsProfile Description tlsProfile is the TLS connection configuration that is in effect. Type object Property Type Description ciphers array (string) ciphers is used to specify the cipher algorithms that are negotiated during the TLS handshake. Operators may remove entries their operands do not support. For example, to use DES-CBC3-SHA (yaml): ciphers: - DES-CBC3-SHA minTLSVersion string minTLSVersion is used to specify the minimal version of the TLS protocol that is negotiated during the TLS handshake. For example, to use TLS versions 1.1, 1.2 and 1.3 (yaml): minTLSVersion: TLSv1.1 NOTE: currently the highest minTLSVersion allowed is VersionTLS12 15.2. 
API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1/ingresscontrollers GET : list objects of kind IngressController /apis/operator.openshift.io/v1/namespaces/{namespace}/ingresscontrollers DELETE : delete collection of IngressController GET : list objects of kind IngressController POST : create an IngressController /apis/operator.openshift.io/v1/namespaces/{namespace}/ingresscontrollers/{name} DELETE : delete an IngressController GET : read the specified IngressController PATCH : partially update the specified IngressController PUT : replace the specified IngressController /apis/operator.openshift.io/v1/namespaces/{namespace}/ingresscontrollers/{name}/scale GET : read scale of the specified IngressController PATCH : partially update scale of the specified IngressController PUT : replace scale of the specified IngressController /apis/operator.openshift.io/v1/namespaces/{namespace}/ingresscontrollers/{name}/status GET : read status of the specified IngressController PATCH : partially update status of the specified IngressController PUT : replace status of the specified IngressController 15.2.1. /apis/operator.openshift.io/v1/ingresscontrollers Table 15.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. 
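As a usage sketch for the list endpoint and the limit parameter described here, the raw API path can be queried with the oc client. The openshift-ingress-operator namespace is where the default IngressController normally lives and is an assumption about your cluster.

$ oc get ingresscontrollers -n openshift-ingress-operator

$ oc get --raw "/apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers?limit=10"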
Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind IngressController Table 15.2. 
HTTP responses HTTP code Reponse body 200 - OK IngressControllerList schema 401 - Unauthorized Empty 15.2.2. /apis/operator.openshift.io/v1/namespaces/{namespace}/ingresscontrollers Table 15.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 15.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of IngressController Table 15.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. 
This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 15.6. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind IngressController Table 15.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. 
Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 15.8. HTTP responses HTTP code Reponse body 200 - OK IngressControllerList schema 401 - Unauthorized Empty HTTP method POST Description create an IngressController Table 15.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 15.10. Body parameters Parameter Type Description body IngressController schema Table 15.11. HTTP responses HTTP code Reponse body 200 - OK IngressController schema 201 - Created IngressController schema 202 - Accepted IngressController schema 401 - Unauthorized Empty 15.2.3. /apis/operator.openshift.io/v1/namespaces/{namespace}/ingresscontrollers/{name} Table 15.12. Global path parameters Parameter Type Description name string name of the IngressController namespace string object name and auth scope, such as for teams and projects Table 15.13. 
Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an IngressController Table 15.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 15.15. Body parameters Parameter Type Description body DeleteOptions schema Table 15.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified IngressController Table 15.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 15.18. HTTP responses HTTP code Reponse body 200 - OK IngressController schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified IngressController Table 15.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 15.20. Body parameters Parameter Type Description body Patch schema Table 15.21. HTTP responses HTTP code Reponse body 200 - OK IngressController schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified IngressController Table 15.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 15.23. Body parameters Parameter Type Description body IngressController schema Table 15.24. HTTP responses HTTP code Reponse body 200 - OK IngressController schema 201 - Created IngressController schema 401 - Unauthorized Empty 15.2.4. /apis/operator.openshift.io/v1/namespaces/{namespace}/ingresscontrollers/{name}/scale Table 15.25. Global path parameters Parameter Type Description name string name of the IngressController namespace string object name and auth scope, such as for teams and projects Table 15.26. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read scale of the specified IngressController Table 15.27. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset Table 15.28. HTTP responses HTTP code Reponse body 200 - OK Scale schema 401 - Unauthorized Empty HTTP method PATCH Description partially update scale of the specified IngressController Table 15.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 15.30. Body parameters Parameter Type Description body Patch schema Table 15.31. HTTP responses HTTP code Reponse body 200 - OK Scale schema 401 - Unauthorized Empty HTTP method PUT Description replace scale of the specified IngressController Table 15.32. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 15.33. Body parameters Parameter Type Description body Scale schema Table 15.34. HTTP responses HTTP code Reponse body 200 - OK Scale schema 201 - Created Scale schema 401 - Unauthorized Empty 15.2.5. /apis/operator.openshift.io/v1/namespaces/{namespace}/ingresscontrollers/{name}/status Table 15.35. Global path parameters Parameter Type Description name string name of the IngressController namespace string object name and auth scope, such as for teams and projects Table 15.36. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified IngressController Table 15.37. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 15.38. HTTP responses HTTP code Reponse body 200 - OK IngressController schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified IngressController Table 15.39. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 15.40. Body parameters Parameter Type Description body Patch schema Table 15.41. 
HTTP responses HTTP code Response body 200 - OK IngressController schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified IngressController Table 15.42. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 15.43. Body parameters Parameter Type Description body IngressController schema Table 15.44. HTTP responses HTTP code Response body 200 - OK IngressController schema 201 - Created IngressController schema 401 - Unauthorized Empty
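The endpoints in this API reference can also be exercised from the command line. The commands below are a minimal sketch rather than part of the reference itself: they assume the default IngressController that the Ingress Operator creates in the openshift-ingress-operator namespace, and the replica count of 3 is an arbitrary example value.

oc get --raw /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers

oc get --raw /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default/scale

oc patch ingresscontroller/default -n openshift-ingress-operator --type=merge --patch '{"spec":{"replicas":3}}'

The first two commands issue GET requests against the list and scale endpoints described above, and the last command sends a merge patch to the PATCH endpoint for the named IngressController.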
B.18. dracut B.18.1. RHEA-2011:0141 - dracut enhancement update Updated dracut packages that add an enhancement are now available for Red Hat Enterprise Linux 6. The dracut packages provide an event-driven initramfs generator infrastructure based around udev. The initramfs is loaded together with the kernel at boot time and initializes the system, so it can read and boot from the root partition. Enhancement BZ# 661298 The dracut packages have been updated to support the new kernel boot option, "rdinsmodpost=[module]", which allows a user to specify a kernel module to be loaded after all device drivers are loaded automatically. Users of dracut are advised to upgrade to these updated packages, which add this enhancement.
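As an illustration of the new option only, the parameter is appended to the kernel line of the boot loader configuration, for example /boot/grub/grub.conf on a Red Hat Enterprise Linux 6 system. The kernel version, root device, and module name shown here are hypothetical and only demonstrate the syntax:

kernel /vmlinuz-2.6.32-71.el6.x86_64 ro root=/dev/mapper/vg_root-lv_root rhgb quiet rdinsmodpost=megaraid_sas

With this line, dracut loads the megaraid_sas module only after all device drivers have been loaded automatically during early boot.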
Chapter 24. OpenShift SDN network plugin 24.1. About the OpenShift SDN network plugin Part of Red Hat OpenShift Networking, OpenShift SDN is a network plugin that uses a software-defined networking (SDN) approach to provide a unified cluster network that enables communication between pods across the OpenShift Container Platform cluster. This pod network is established and maintained by OpenShift SDN, which configures an overlay network using Open vSwitch (OVS). 24.1.1. OpenShift SDN network isolation modes OpenShift SDN provides three SDN modes for configuring the pod network: Network policy mode allows project administrators to configure their own isolation policies using NetworkPolicy objects. Network policy is the default mode in OpenShift Container Platform 4.12. Multitenant mode provides project-level isolation for pods and services. Pods from different projects cannot send packets to or receive packets from pods and services of a different project. You can disable isolation for a project, allowing it to send network traffic to all pods and services in the entire cluster and receive network traffic from those pods and services. Subnet mode provides a flat pod network where every pod can communicate with every other pod and service. The network policy mode provides the same functionality as subnet mode. 24.1.2. Supported network plugin feature matrix Red Hat OpenShift Networking offers two options for the network plugin: OpenShift SDN and OVN-Kubernetes. The following table summarizes the current feature support for both network plugins: Table 24.1. Default CNI network plugin feature comparison Feature OpenShift SDN OVN-Kubernetes Egress IPs Supported Supported Egress firewall Supported Supported [1] Egress router Supported Supported [2] Hybrid networking Not supported Supported IPsec encryption for intra-cluster communication Not supported Supported IPv4 single-stack Supported Supported IPv6 single-stack Not supported Supported [3] IPv4/IPv6 dual-stack Not supported Supported [4] IPv6/IPv4 dual-stack Not supported Supported [5] Kubernetes network policy Supported Supported Kubernetes network policy logs Not supported Supported Hardware offloading Not supported Supported Multicast Supported Supported Egress firewall is also known as egress network policy in OpenShift SDN. This is not the same as network policy egress. Egress router for OVN-Kubernetes supports only redirect mode. IPv6 single-stack networking on a bare-metal platform. IPv4/IPv6 dual-stack networking on bare-metal, IBM Power(R), and IBM Z(R) platforms. IPv6/IPv4 dual-stack networking on bare-metal and IBM Power(R) platforms. 24.2. Migrating to the OpenShift SDN network plugin As a cluster administrator, you can migrate to the OpenShift SDN network plugin from the OVN-Kubernetes network plugin. To learn more about OpenShift SDN, read About the OpenShift SDN network plugin . 24.2.1. How the migration process works The following table summarizes the migration process by segmenting between the user-initiated steps in the process and the actions that the migration performs in response. Table 24.2. Migrating to OpenShift SDN from OVN-Kubernetes User-initiated steps Migration activity Set the migration field of the Network.operator.openshift.io custom resource (CR) named cluster to OpenShiftSDN . Make sure the migration field is null before setting it to a value.
Cluster Network Operator (CNO) Updates the status of the Network.config.openshift.io CR named cluster accordingly. Machine Config Operator (MCO) Rolls out an update to the systemd configuration necessary for OpenShift SDN; the MCO updates a single machine per pool at a time by default, causing the total time the migration takes to increase with the size of the cluster. Update the networkType field of the Network.config.openshift.io CR. CNO Performs the following actions: Destroys the OVN-Kubernetes control plane pods. Deploys the OpenShift SDN control plane pods. Updates the Multus objects to reflect the new network plugin. Reboot each node in the cluster. Cluster As nodes reboot, the cluster assigns IP addresses to pods on the OpenShift SDN cluster network. 24.2.2. Migrating to the OpenShift SDN network plugin Cluster administrators can roll back to the OpenShift SDN Container Network Interface (CNI) network plugin by using the offline migration method. During the migration you must manually reboot every node in your cluster. With the offline migration method, there is some downtime, during which your cluster is unreachable. Prerequisites Install the OpenShift CLI ( oc ). Access to the cluster as a user with the cluster-admin role. A cluster installed on infrastructure configured with the OVN-Kubernetes network plugin. A recent backup of the etcd database is available. A reboot can be triggered manually for each node. The cluster is in a known good state, without any errors. Procedure Stop all of the machine configuration pools managed by the Machine Config Operator (MCO): Stop the master configuration pool by entering the following command in your CLI: USD oc patch MachineConfigPool master --type='merge' --patch \ '{ "spec": { "paused": true } }' Stop the worker machine configuration pool by entering the following command in your CLI: USD oc patch MachineConfigPool worker --type='merge' --patch \ '{ "spec":{ "paused": true } }' To prepare for the migration, set the migration field to null by entering the following command in your CLI: USD oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{ "spec": { "migration": null } }' Check that the migration status is empty for the Network.config.openshift.io object by entering the following command in your CLI. Empty command output indicates that the object is not in a migration operation. USD oc get Network.config cluster -o jsonpath='{.status.migration}' Apply the patch to the Network.operator.openshift.io object to set the network plugin back to OpenShift SDN by entering the following command in your CLI: USD oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{ "spec": { "migration": { "networkType": "OpenShiftSDN" } } }' Important If you applied the patch to the Network.config.openshift.io object before the patch operation finalizes on the Network.operator.openshift.io object, the Cluster Network Operator (CNO) enters into a degradation state and this causes a slight delay until the CNO recovers from the degraded state. 
Confirm that the migration status of the network plugin for the Network.config.openshift.io cluster object is OpenShiftSDN by entering the following command in your CLI: USD oc get Network.config cluster -o jsonpath='{.status.migration.networkType}' Apply the patch to the Network.config.openshift.io object to set the network plugin back to OpenShift SDN by entering the following command in your CLI: USD oc patch Network.config.openshift.io cluster --type='merge' \ --patch '{ "spec": { "networkType": "OpenShiftSDN" } }' Optional: Disable automatic migration of several OVN-Kubernetes capabilities to the OpenShift SDN equivalents: Egress IPs Egress firewall Multicast To disable automatic migration of the configuration for any of the previously noted OpenShift SDN features, specify the following keys: USD oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{ "spec": { "migration": { "networkType": "OpenShiftSDN", "features": { "egressIP": <bool>, "egressFirewall": <bool>, "multicast": <bool> } } } }' where: bool : Specifies whether to enable migration of the feature. The default is true . Optional: You can customize the following settings for OpenShift SDN to meet your network infrastructure requirements: Maximum transmission unit (MTU) VXLAN port To customize either or both of the previously noted settings, customize and enter the following command in your CLI. If you do not need to change the default value, omit the key from the patch. USD oc patch Network.operator.openshift.io cluster --type=merge \ --patch '{ "spec":{ "defaultNetwork":{ "openshiftSDNConfig":{ "mtu":<mtu>, "vxlanPort":<port> }}}}' mtu The MTU for the VXLAN overlay network. This value is normally configured automatically, but if the nodes in your cluster do not all use the same MTU, then you must set this explicitly to 50 less than the smallest node MTU value. port The UDP port for the VXLAN overlay network. If a value is not specified, the default is 4789 . The port cannot be the same as the Geneve port that is used by OVN-Kubernetes. The default value for the Geneve port is 6081 . Example patch command USD oc patch Network.operator.openshift.io cluster --type=merge \ --patch '{ "spec":{ "defaultNetwork":{ "openshiftSDNConfig":{ "mtu":1200 }}}}' Reboot each node in your cluster. You can reboot the nodes in your cluster with either of the following approaches: With the oc rsh command, you can use a bash script similar to the following: #!/bin/bash readarray -t POD_NODES <<< "USD(oc get pod -n openshift-machine-config-operator -o wide| grep daemon|awk '{print USD1" "USD7}')" for i in "USD{POD_NODES[@]}" do read -r POD NODE <<< "USDi" until oc rsh -n openshift-machine-config-operator "USDPOD" chroot /rootfs shutdown -r +1 do echo "cannot reboot node USDNODE, retry" && sleep 3 done done With the ssh command, you can use a bash script similar to the following. The script assumes that you have configured sudo to not prompt for a password. #!/bin/bash for ip in USD(oc get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}') do echo "reboot node USDip" ssh -o StrictHostKeyChecking=no core@USDip sudo shutdown -r -t 3 done Wait until the Multus daemon set rollout completes. Run the following command to see your rollout status: USD oc -n openshift-multus rollout status daemonset/multus The name of the Multus pods is in the form of multus-<xxxxx> where <xxxxx> is a random sequence of letters. It might take several moments for the pods to restart. 
Example output Waiting for daemon set "multus" rollout to finish: 1 out of 6 new pods have been updated... ... Waiting for daemon set "multus" rollout to finish: 5 of 6 updated pods are available... daemon set "multus" successfully rolled out After the nodes in your cluster have rebooted and the multus pods are rolled out, start all of the machine configuration pools by running the following commands:: Start the master configuration pool: USD oc patch MachineConfigPool master --type='merge' --patch \ '{ "spec": { "paused": false } }' Start the worker configuration pool: USD oc patch MachineConfigPool worker --type='merge' --patch \ '{ "spec": { "paused": false } }' As the MCO updates machines in each config pool, it reboots each node. By default the MCO updates a single machine per pool at a time, so the time that the migration requires to complete grows with the size of the cluster. Confirm the status of the new machine configuration on the hosts: To list the machine configuration state and the name of the applied machine configuration, enter the following command in your CLI: USD oc describe node | egrep "hostname|machineconfig" Example output kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done Verify that the following statements are true: The value of machineconfiguration.openshift.io/state field is Done . The value of the machineconfiguration.openshift.io/currentConfig field is equal to the value of the machineconfiguration.openshift.io/desiredConfig field. To confirm that the machine config is correct, enter the following command in your CLI: USD oc get machineconfig <config_name> -o yaml where <config_name> is the name of the machine config from the machineconfiguration.openshift.io/currentConfig field. Confirm that the migration succeeded: To confirm that the network plugin is OpenShift SDN, enter the following command in your CLI. The value of status.networkType must be OpenShiftSDN . USD oc get Network.config/cluster -o jsonpath='{.status.networkType}{"\n"}' To confirm that the cluster nodes are in the Ready state, enter the following command in your CLI: USD oc get nodes If a node is stuck in the NotReady state, investigate the machine config daemon pod logs and resolve any errors. To list the pods, enter the following command in your CLI: USD oc get pod -n openshift-machine-config-operator Example output NAME READY STATUS RESTARTS AGE machine-config-controller-75f756f89d-sjp8b 1/1 Running 0 37m machine-config-daemon-5cf4b 2/2 Running 0 43h machine-config-daemon-7wzcd 2/2 Running 0 43h machine-config-daemon-fc946 2/2 Running 0 43h machine-config-daemon-g2v28 2/2 Running 0 43h machine-config-daemon-gcl4f 2/2 Running 0 43h machine-config-daemon-l5tnv 2/2 Running 0 43h machine-config-operator-79d9c55d5-hth92 1/1 Running 0 37m machine-config-server-bsc8h 1/1 Running 0 43h machine-config-server-hklrm 1/1 Running 0 43h machine-config-server-k9rtx 1/1 Running 0 43h The names for the config daemon pods are in the following format: machine-config-daemon-<seq> . The <seq> value is a random five character alphanumeric sequence. 
To display the pod log for each machine config daemon pod shown in the output, enter the following command in your CLI: USD oc logs <pod> -n openshift-machine-config-operator where pod is the name of a machine config daemon pod. Resolve any errors in the logs shown by the output from the command. To confirm that your pods are not in an error state, enter the following command in your CLI: USD oc get pods --all-namespaces -o wide --sort-by='{.spec.nodeName}' If pods on a node are in an error state, reboot that node. Complete the following steps only if the migration succeeds and your cluster is in a good state: To remove the migration configuration from the Cluster Network Operator configuration object, enter the following command in your CLI: USD oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{ "spec": { "migration": null } }' To remove the OVN-Kubernetes configuration, enter the following command in your CLI: USD oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{ "spec": { "defaultNetwork": { "ovnKubernetesConfig":null } } }' To remove the OVN-Kubernetes network provider namespace, enter the following command in your CLI: USD oc delete namespace openshift-ovn-kubernetes 24.2.3. Additional resources Configuration parameters for the OpenShift SDN network plugin Backing up etcd About network policy OpenShift SDN capabilities Configuring egress IPs for a project Configuring an egress firewall for a project Enabling multicast for a project Network [operator.openshift.io/v1 ] 24.3. Rolling back to the OVN-Kubernetes network plugin As a cluster administrator, you can rollback to the OVN-Kubernetes network plugin from the OpenShift SDN network plugin if the migration to OpenShift SDN is unsuccessful. To learn more about OVN-Kubernetes, read About the OVN-Kubernetes network plugin . 24.3.1. Migrating to the OVN-Kubernetes network plugin As a cluster administrator, you can change the network plugin for your cluster to OVN-Kubernetes. During the migration, you must reboot every node in your cluster. Important While performing the migration, your cluster is unavailable and workloads might be interrupted. Perform the migration only when an interruption in service is acceptable. Prerequisites You have a cluster configured with the OpenShift SDN CNI network plugin in the network policy isolation mode. You installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin role. You have a recent backup of the etcd database. You can manually reboot each node. You checked that your cluster is in a known good state without any errors. You created a security group rule that allows User Datagram Protocol (UDP) packets on port 6081 for all nodes on all cloud platforms. Procedure To backup the configuration for the cluster network, enter the following command: USD oc get Network.config.openshift.io cluster -o yaml > cluster-openshift-sdn.yaml Verify that the OVN_SDN_MIGRATION_TIMEOUT environment variable is set and is equal to 0s by running the following command: #!/bin/bash if [ -n "USDOVN_SDN_MIGRATION_TIMEOUT" ] && [ "USDOVN_SDN_MIGRATION_TIMEOUT" = "0s" ]; then unset OVN_SDN_MIGRATION_TIMEOUT fi #loops the timeout command of the script to repeatedly check the cluster Operators until all are available. 
co_timeout=USD{OVN_SDN_MIGRATION_TIMEOUT:-1200s} timeout "USDco_timeout" bash <<EOT until oc wait co --all --for='condition=AVAILABLE=True' --timeout=10s && \ oc wait co --all --for='condition=PROGRESSING=False' --timeout=10s && \ oc wait co --all --for='condition=DEGRADED=False' --timeout=10s; do sleep 10 echo "Some ClusterOperators Degraded=False,Progressing=True,or Available=False"; done EOT Remove the configuration from the Cluster Network Operator (CNO) configuration object by running the following command: USD oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{"spec":{"migration":null}}' Delete the NodeNetworkConfigurationPolicy (NNCP) custom resource (CR) that defines the primary network interface for the OpenShift SDN network plugin by completing the following steps: Check that the existing NNCP CR bonded the primary interface to your cluster by entering the following command: USD oc get nncp Example output NAME STATUS REASON bondmaster0 Available SuccessfullyConfigured Network Manager stores the connection profile for the bonded primary interface in the /etc/NetworkManager/system-connections system path. Remove the NNCP from your cluster: USD oc delete nncp <nncp_manifest_filename> To prepare all the nodes for the migration, set the migration field on the CNO configuration object by running the following command: USD oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{ "spec": { "migration": { "networkType": "OVNKubernetes" } } }' Note This step does not deploy OVN-Kubernetes immediately. Instead, specifying the migration field triggers the Machine Config Operator (MCO) to apply new machine configs to all the nodes in the cluster in preparation for the OVN-Kubernetes deployment. Check that the reboot is finished by running the following command: USD oc get mcp Check that all cluster Operators are available by running the following command: USD oc get co Alternatively: You can disable automatic migration of several OpenShift SDN capabilities to the OVN-Kubernetes equivalents: Egress IPs Egress firewall Multicast To disable automatic migration of the configuration for any of the previously noted OpenShift SDN features, specify the following keys: USD oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{ "spec": { "migration": { "networkType": "OVNKubernetes", "features": { "egressIP": <bool>, "egressFirewall": <bool>, "multicast": <bool> } } } }' where: bool : Specifies whether to enable migration of the feature. The default is true . Optional: You can customize the following settings for OVN-Kubernetes to meet your network infrastructure requirements: Maximum transmission unit (MTU). Consider the following before customizing the MTU for this optional step: If you use the default MTU, and you want to keep the default MTU during migration, this step can be ignored. If you used a custom MTU, and you want to keep the custom MTU during migration, you must declare the custom MTU value in this step. This step does not work if you want to change the MTU value during migration. Instead, you must first follow the instructions for "Changing the cluster MTU". You can then keep the custom MTU value by performing this procedure and declaring the custom MTU value in this step. Note OpenShift-SDN and OVN-Kubernetes have different overlay overhead. MTU values should be selected by following the guidelines found on the "MTU value selection" page. 
Geneve (Generic Network Virtualization Encapsulation) overlay network port OVN-Kubernetes IPv4 internal subnet To customize either of the previously noted settings, enter and customize the following command. If you do not need to change the default value, omit the key from the patch. USD oc patch Network.operator.openshift.io cluster --type=merge \ --patch '{ "spec":{ "defaultNetwork":{ "ovnKubernetesConfig":{ "mtu":<mtu>, "genevePort":<port>, "v4InternalSubnet":"<ipv4_subnet>" }}}}' where: mtu The MTU for the Geneve overlay network. This value is normally configured automatically, but if the nodes in your cluster do not all use the same MTU, then you must set this explicitly to 100 less than the smallest node MTU value. port The UDP port for the Geneve overlay network. If a value is not specified, the default is 6081 . The port cannot be the same as the VXLAN port that is used by OpenShift SDN. The default value for the VXLAN port is 4789 . ipv4_subnet An IPv4 address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is 100.64.0.0/16 . Example patch command to update mtu field USD oc patch Network.operator.openshift.io cluster --type=merge \ --patch '{ "spec":{ "defaultNetwork":{ "ovnKubernetesConfig":{ "mtu":1200 }}}}' As the MCO updates machines in each machine config pool, it reboots each node one by one. You must wait until all the nodes are updated. Check the machine config pool status by entering the following command: USD oc get mcp A successfully updated node has the following status: UPDATED=true , UPDATING=false , DEGRADED=false . Note By default, the MCO updates one machine per pool at a time, causing the total time the migration takes to increase with the size of the cluster. Confirm the status of the new machine configuration on the hosts: To list the machine configuration state and the name of the applied machine configuration, enter the following command: USD oc describe node | egrep "hostname|machineconfig" Example output kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done Verify that the following statements are true: The value of machineconfiguration.openshift.io/state field is Done . The value of the machineconfiguration.openshift.io/currentConfig field is equal to the value of the machineconfiguration.openshift.io/desiredConfig field. To confirm that the machine config is correct, enter the following command: USD oc get machineconfig <config_name> -o yaml | grep ExecStart where <config_name> is the name of the machine config from the machineconfiguration.openshift.io/currentConfig field. The machine config must include the following update to the systemd configuration: ExecStart=/usr/local/bin/configure-ovs.sh OVNKubernetes If a node is stuck in the NotReady state, investigate the machine config daemon pod logs and resolve any errors. 
To list the pods, enter the following command: USD oc get pod -n openshift-machine-config-operator Example output NAME READY STATUS RESTARTS AGE machine-config-controller-75f756f89d-sjp8b 1/1 Running 0 37m machine-config-daemon-5cf4b 2/2 Running 0 43h machine-config-daemon-7wzcd 2/2 Running 0 43h machine-config-daemon-fc946 2/2 Running 0 43h machine-config-daemon-g2v28 2/2 Running 0 43h machine-config-daemon-gcl4f 2/2 Running 0 43h machine-config-daemon-l5tnv 2/2 Running 0 43h machine-config-operator-79d9c55d5-hth92 1/1 Running 0 37m machine-config-server-bsc8h 1/1 Running 0 43h machine-config-server-hklrm 1/1 Running 0 43h machine-config-server-k9rtx 1/1 Running 0 43h The names for the config daemon pods are in the following format: machine-config-daemon-<seq> . The <seq> value is a random five character alphanumeric sequence. Display the pod log for the first machine config daemon pod shown in the output by enter the following command: USD oc logs <pod> -n openshift-machine-config-operator where pod is the name of a machine config daemon pod. Resolve any errors in the logs shown by the output from the command. To start the migration, configure the OVN-Kubernetes network plugin by using one of the following commands: To specify the network provider without changing the cluster network IP address block, enter the following command: USD oc patch Network.config.openshift.io cluster \ --type='merge' --patch '{ "spec": { "networkType": "OVNKubernetes" } }' To specify a different cluster network IP address block, enter the following command: USD oc patch Network.config.openshift.io cluster \ --type='merge' --patch '{ "spec": { "clusterNetwork": [ { "cidr": "<cidr>", "hostPrefix": <prefix> } ], "networkType": "OVNKubernetes" } }' where cidr is a CIDR block and prefix is the slice of the CIDR block apportioned to each node in your cluster. You cannot use any CIDR block that overlaps with the 100.64.0.0/16 CIDR block because the OVN-Kubernetes network provider uses this block internally. Important You cannot change the service network address block during the migration. Verify that the Multus daemon set rollout is complete before continuing with subsequent steps: USD oc -n openshift-multus rollout status daemonset/multus The name of the Multus pods is in the form of multus-<xxxxx> where <xxxxx> is a random sequence of letters. It might take several moments for the pods to restart. Example output Waiting for daemon set "multus" rollout to finish: 1 out of 6 new pods have been updated... ... Waiting for daemon set "multus" rollout to finish: 5 of 6 updated pods are available... daemon set "multus" successfully rolled out To complete changing the network plugin, reboot each node in your cluster. You can reboot the nodes in your cluster with either of the following approaches: Important The following scripts reboot all of the nodes in the cluster at the same time. This can cause your cluster to be unstable. Another option is to reboot your nodes manually one at a time. Rebooting nodes one-by-one causes considerable downtime in a cluster with many nodes. Cluster Operators will not work correctly before you reboot the nodes. 
With the oc rsh command, you can use a bash script similar to the following: #!/bin/bash readarray -t POD_NODES <<< "USD(oc get pod -n openshift-machine-config-operator -o wide| grep daemon|awk '{print USD1" "USD7}')" for i in "USD{POD_NODES[@]}" do read -r POD NODE <<< "USDi" until oc rsh -n openshift-machine-config-operator "USDPOD" chroot /rootfs shutdown -r +1 do echo "cannot reboot node USDNODE, retry" && sleep 3 done done With the ssh command, you can use a bash script similar to the following. The script assumes that you have configured sudo to not prompt for a password. #!/bin/bash for ip in USD(oc get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}') do echo "reboot node USDip" ssh -o StrictHostKeyChecking=no core@USDip sudo shutdown -r -t 3 done Confirm that the migration succeeded: To confirm that the network plugin is OVN-Kubernetes, enter the following command. The value of status.networkType must be OVNKubernetes . USD oc get network.config/cluster -o jsonpath='{.status.networkType}{"\n"}' To confirm that the cluster nodes are in the Ready state, enter the following command: USD oc get nodes To confirm that your pods are not in an error state, enter the following command: USD oc get pods --all-namespaces -o wide --sort-by='{.spec.nodeName}' If pods on a node are in an error state, reboot that node. To confirm that all of the cluster Operators are not in an abnormal state, enter the following command: USD oc get co The status of every cluster Operator must be the following: AVAILABLE="True" , PROGRESSING="False" , DEGRADED="False" . If a cluster Operator is not available or degraded, check the logs for the cluster Operator for more information. Complete the following steps only if the migration succeeds and your cluster is in a good state: To remove the migration configuration from the CNO configuration object, enter the following command: USD oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{ "spec": { "migration": null } }' To remove custom configuration for the OpenShift SDN network provider, enter the following command: USD oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{ "spec": { "defaultNetwork": { "openshiftSDNConfig": null } } }' To remove the OpenShift SDN network provider namespace, enter the following command: USD oc delete namespace openshift-sdn steps Optional: After cluster migration, you can convert your IPv4 single-stack cluster to a dual-network cluster network that supports IPv4 and IPv6 address families. For more information, see "Converting to IPv4/IPv6 dual-stack networking". 24.4. Configuring egress IPs for a project As a cluster administrator, you can configure the OpenShift SDN Container Network Interface (CNI) network plugin to assign one or more egress IP addresses to a project. 24.4.1. Egress IP address architectural design and implementation The OpenShift Container Platform egress IP address functionality allows you to ensure that the traffic from one or more pods in one or more namespaces has a consistent source IP address for services outside the cluster network. For example, you might have a pod that periodically queries a database that is hosted on a server outside of your cluster. To enforce access requirements for the server, a packet filtering device is configured to allow traffic only from specific IP addresses. 
To ensure that you can reliably allow access to the server from only that specific pod, you can configure a specific egress IP address for the pod that makes the requests to the server. An egress IP address assigned to a namespace is different from an egress router, which is used to send traffic to specific destinations. In some cluster configurations, application pods and ingress router pods run on the same node. If you configure an egress IP address for an application project in this scenario, the IP address is not used when you send a request to a route from the application project. An egress IP address is implemented as an additional IP address on the primary network interface of a node and must be in the same subnet as the primary IP address of the node. The additional IP address must not be assigned to any other node in the cluster. Important Egress IP addresses must not be configured in any Linux network configuration files, such as ifcfg-eth0 . 24.4.1.1. Platform support Support for the egress IP address functionality on various platforms is summarized in the following table: Platform Supported Bare metal Yes VMware vSphere Yes Red Hat OpenStack Platform (RHOSP) Yes Amazon Web Services (AWS) Yes Google Cloud Platform (GCP) Yes Microsoft Azure Yes Important The assignment of egress IP addresses to control plane nodes with the EgressIP feature is not supported on a cluster provisioned on Amazon Web Services (AWS). ( BZ#2039656 ) 24.4.1.2. Public cloud platform considerations For clusters provisioned on public cloud infrastructure, there is a constraint on the absolute number of assignable IP addresses per node. The maximum number of assignable IP addresses per node, or the IP capacity , can be described in the following formula: IP capacity = public cloud default capacity - sum(current IP assignments) While the Egress IPs capability manages the IP address capacity per node, it is important to plan for this constraint in your deployments. For example, for a cluster installed on bare-metal infrastructure with 8 nodes you can configure 150 egress IP addresses. However, if a public cloud provider limits IP address capacity to 10 IP addresses per node, the total number of assignable IP addresses is only 80. To achieve the same IP address capacity in this example cloud provider, you would need to allocate 7 additional nodes. To confirm the IP capacity and subnets for any node in your public cloud environment, you can enter the oc get node <node_name> -o yaml command. The cloud.network.openshift.io/egress-ipconfig annotation includes capacity and subnet information for the node. The annotation value is an array with a single object with fields that provide the following information for the primary network interface: interface : Specifies the interface ID on AWS and Azure and the interface name on GCP. ifaddr : Specifies the subnet mask for one or both IP address families. capacity : Specifies the IP address capacity for the node. On AWS, the IP address capacity is provided per IP address family. On Azure and GCP, the IP address capacity includes both IPv4 and IPv6 addresses. Automatic attachment and detachment of egress IP addresses for traffic between nodes are available. This allows for traffic from many pods in namespaces to have a consistent source IP address to locations outside of the cluster. This also supports OpenShift SDN and OVN-Kubernetes, which is the default networking plugin in Red Hat OpenShift Networking in OpenShift Container Platform 4.12. 
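For example, a quick way to view this annotation without reading the entire node manifest is to filter the node YAML for the annotation key. This is a convenience sketch only; the node name is a placeholder:

# Print the egress IP configuration annotation for one node
oc get node <node_name> -o yaml | grep cloud.network.openshift.io/egress-ipconfig

The returned value is the JSON array shown in the examples later in this section. Subtracting your current egress IP assignments from the reported capacity value gives an estimate of how many additional egress IP addresses the node can host.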
Note The RHOSP egress IP address feature creates a Neutron reservation port called egressip-<IP address> . Using the same RHOSP user as the one used for the OpenShift Container Platform cluster installation, you can assign a floating IP address to this reservation port to have a predictable SNAT address for egress traffic. When an egress IP address on an RHOSP network is moved from one node to another, because of a node failover, for example, the Neutron reservation port is removed and recreated. This means that the floating IP association is lost and you need to manually reassign the floating IP address to the new reservation port. Note When an RHOSP cluster administrator assigns a floating IP to the reservation port, OpenShift Container Platform cannot delete the reservation port. The CloudPrivateIPConfig object cannot perform delete and move operations until an RHOSP cluster administrator unassigns the floating IP from the reservation port. The following examples illustrate the annotation from nodes on several public cloud providers. The annotations are indented for readability. Example cloud.network.openshift.io/egress-ipconfig annotation on AWS cloud.network.openshift.io/egress-ipconfig: [ { "interface":"eni-078d267045138e436", "ifaddr":{"ipv4":"10.0.128.0/18"}, "capacity":{"ipv4":14,"ipv6":15} } ] Example cloud.network.openshift.io/egress-ipconfig annotation on GCP cloud.network.openshift.io/egress-ipconfig: [ { "interface":"nic0", "ifaddr":{"ipv4":"10.0.128.0/18"}, "capacity":{"ip":14} } ] The following sections describe the IP address capacity for supported public cloud environments for use in your capacity calculation. 24.4.1.2.1. Amazon Web Services (AWS) IP address capacity limits On AWS, constraints on IP address assignments depend on the instance type configured. For more information, see IP addresses per network interface per instance type 24.4.1.2.2. Google Cloud Platform (GCP) IP address capacity limits On GCP, the networking model implements additional node IP addresses through IP address aliasing, rather than IP address assignments. However, IP address capacity maps directly to IP aliasing capacity. The following capacity limits exist for IP aliasing assignment: Per node, the maximum number of IP aliases, both IPv4 and IPv6, is 100. Per VPC, the maximum number of IP aliases is unspecified, but OpenShift Container Platform scalability testing reveals the maximum to be approximately 15,000. For more information, see Per instance quotas and Alias IP ranges overview . 24.4.1.2.3. Microsoft Azure IP address capacity limits On Azure, the following capacity limits exist for IP address assignment: Per NIC, the maximum number of assignable IP addresses, for both IPv4 and IPv6, is 256. Per virtual network, the maximum number of assigned IP addresses cannot exceed 65,536. For more information, see Networking limits . 24.4.1.3. Limitations The following limitations apply when using egress IP addresses with the OpenShift SDN network plugin: You cannot use manually assigned and automatically assigned egress IP addresses on the same nodes. If you manually assign egress IP addresses from an IP address range, you must not make that range available for automatic IP assignment. You cannot share egress IP addresses across multiple namespaces using the OpenShift SDN egress IP address implementation. If you need to share IP addresses across namespaces, the OVN-Kubernetes network plugin egress IP address implementation allows you to span IP addresses across multiple namespaces. 
Note If you use OpenShift SDN in multitenant mode, you cannot use egress IP addresses with any namespace that is joined to another namespace by the projects that are associated with them. For example, if project1 and project2 are joined by running the oc adm pod-network join-projects --to=project1 project2 command, neither project can use an egress IP address. For more information, see BZ#1645577 . 24.4.1.4. IP address assignment approaches You can assign egress IP addresses to namespaces by setting the egressIPs parameter of the NetNamespace object. After an egress IP address is associated with a project, OpenShift SDN allows you to assign egress IP addresses to hosts in two ways: In the automatically assigned approach, an egress IP address range is assigned to a node. In the manually assigned approach, a list of one or more egress IP addresses is assigned to a node. Namespaces that request an egress IP address are matched with nodes that can host those egress IP addresses, and then the egress IP addresses are assigned to those nodes. If the egressIPs parameter is set on a NetNamespace object, but no node hosts that egress IP address, then egress traffic from the namespace will be dropped. High availability of nodes is automatic. If a node that hosts an egress IP address is unreachable and there are nodes that are able to host that egress IP address, then the egress IP address will move to a new node. When the unreachable node comes back online, the egress IP address automatically moves to balance egress IP addresses across nodes. 24.4.1.4.1. Considerations when using automatically assigned egress IP addresses When using the automatic assignment approach for egress IP addresses, the following considerations apply: You set the egressCIDRs parameter of each node's HostSubnet resource to indicate the range of egress IP addresses that can be hosted by a node. OpenShift Container Platform sets the egressIPs parameter of the HostSubnet resource based on the IP address range you specify. If the node hosting the namespace's egress IP address is unreachable, OpenShift Container Platform will reassign the egress IP address to another node with a compatible egress IP address range. The automatic assignment approach works best for clusters installed in environments with flexibility in associating additional IP addresses with nodes. 24.4.1.4.2. Considerations when using manually assigned egress IP addresses This approach allows you to control which nodes can host an egress IP address. Note If your cluster is installed on public cloud infrastructure, you must ensure that each node that you assign egress IP addresses to has sufficient spare capacity to host the IP addresses. For more information, see "Platform considerations". When using the manual assignment approach for egress IP addresses, the following considerations apply: You set the egressIPs parameter of each node's HostSubnet resource to indicate the IP addresses that can be hosted by a node. Multiple egress IP addresses per namespace are supported. If a namespace has multiple egress IP addresses and those addresses are hosted on multiple nodes, the following additional considerations apply: If a pod is on a node that is hosting an egress IP address, that pod always uses the egress IP address on the node. If a pod is not on a node that is hosting an egress IP address, that pod uses an egress IP address at random. 24.4.2.
Configuring automatically assigned egress IP addresses for a namespace In OpenShift Container Platform you can enable automatic assignment of an egress IP address for a specific namespace across one or more nodes. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Update the NetNamespace object with the egress IP address using the following JSON: USD oc patch netnamespace <project_name> --type=merge -p \ '{ "egressIPs": [ "<ip_address>" ] }' where: <project_name> Specifies the name of the project. <ip_address> Specifies one or more egress IP addresses for the egressIPs array. For example, to assign project1 to an IP address of 192.168.1.100 and project2 to an IP address of 192.168.1.101: USD oc patch netnamespace project1 --type=merge -p \ '{"egressIPs": ["192.168.1.100"]}' USD oc patch netnamespace project2 --type=merge -p \ '{"egressIPs": ["192.168.1.101"]}' Note Because OpenShift SDN manages the NetNamespace object, you can make changes only by modifying the existing NetNamespace object. Do not create a new NetNamespace object. Indicate which nodes can host egress IP addresses by setting the egressCIDRs parameter for each host using the following JSON: USD oc patch hostsubnet <node_name> --type=merge -p \ '{ "egressCIDRs": [ "<ip_address_range>", "<ip_address_range>" ] }' where: <node_name> Specifies a node name. <ip_address_range> Specifies an IP address range in CIDR format. You can specify more than one address range for the egressCIDRs array. For example, to set node1 and node2 to host egress IP addresses in the range 192.168.1.0 to 192.168.1.255: USD oc patch hostsubnet node1 --type=merge -p \ '{"egressCIDRs": ["192.168.1.0/24"]}' USD oc patch hostsubnet node2 --type=merge -p \ '{"egressCIDRs": ["192.168.1.0/24"]}' OpenShift Container Platform automatically assigns specific egress IP addresses to available nodes in a balanced way. In this case, it assigns the egress IP address 192.168.1.100 to node1 and the egress IP address 192.168.1.101 to node2 or vice versa. 24.4.3. Configuring manually assigned egress IP addresses for a namespace In OpenShift Container Platform you can associate one or more egress IP addresses with a namespace. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Update the NetNamespace object by specifying the following JSON object with the desired IP addresses: USD oc patch netnamespace <project_name> --type=merge -p \ '{ "egressIPs": [ "<ip_address>" ] }' where: <project_name> Specifies the name of the project. <ip_address> Specifies one or more egress IP addresses for the egressIPs array. For example, to assign the project1 project to the IP addresses 192.168.1.100 and 192.168.1.101 : USD oc patch netnamespace project1 --type=merge \ -p '{"egressIPs": ["192.168.1.100","192.168.1.101"]}' To provide high availability, set the egressIPs value to two or more IP addresses on different nodes. If multiple egress IP addresses are set, then pods use all egress IP addresses roughly equally. Note Because OpenShift SDN manages the NetNamespace object, you can make changes only by modifying the existing NetNamespace object. Do not create a new NetNamespace object. Manually assign the egress IP address to the node hosts. If your cluster is installed on public cloud infrastructure, you must confirm that the node has available IP address capacity. 
Set the egressIPs parameter on the HostSubnet object on the node host. Using the following JSON, include as many IP addresses as you want to assign to that node host: USD oc patch hostsubnet <node_name> --type=merge -p \ '{ "egressIPs": [ "<ip_address>", "<ip_address>" ] }' where: <node_name> Specifies a node name. <ip_address> Specifies an IP address. You can specify more than one IP address for the egressIPs array. For example, to specify that node1 should have the egress IPs 192.168.1.100 , 192.168.1.101 , and 192.168.1.102 : USD oc patch hostsubnet node1 --type=merge -p \ '{"egressIPs": ["192.168.1.100", "192.168.1.101", "192.168.1.102"]}' In the example, all egress traffic for project1 will be routed to the node hosting the specified egress IP, and then connected through Network Address Translation (NAT) to that IP address. 24.4.4. Additional resources If you are configuring manual egress IP address assignment, see Platform considerations for information about IP capacity planning. 24.5. Configuring an egress firewall for a project As a cluster administrator, you can create an egress firewall for a project that restricts egress traffic leaving your OpenShift Container Platform cluster. 24.5.1. How an egress firewall works in a project As a cluster administrator, you can use an egress firewall to limit the external hosts that some or all pods can access from within the cluster. An egress firewall supports the following scenarios: A pod can only connect to internal hosts and cannot initiate connections to the public internet. A pod can only connect to the public internet and cannot initiate connections to internal hosts that are outside the OpenShift Container Platform cluster. A pod cannot reach specified internal subnets or hosts outside the OpenShift Container Platform cluster. A pod can connect to only specific external hosts. For example, you can allow one project access to a specified IP range but deny the same access to a different project. Or you can restrict application developers from updating from Python pip mirrors, and force updates to come only from approved sources. Note Egress firewall does not apply to the host network namespace. Pods with host networking enabled are unaffected by egress firewall rules. You configure an egress firewall policy by creating an EgressNetworkPolicy custom resource (CR) object. The egress firewall matches network traffic that meets any of the following criteria: An IP address range in CIDR format A DNS name that resolves to an IP address Important If your egress firewall includes a deny rule for 0.0.0.0/0 , access to your OpenShift Container Platform API servers is blocked. You must add allow rules for each IP address range that your API servers use. The following example illustrates the order of the egress firewall rules necessary to ensure API server access: apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: default namespace: <namespace> 1 spec: egress: - to: cidrSelector: <api_server_address_range> 2 type: Allow # ... - to: cidrSelector: 0.0.0.0/0 3 type: Deny 1 The namespace for the egress firewall. 2 The IP address range that includes your OpenShift Container Platform API servers. 3 A global deny rule prevents access to the OpenShift Container Platform API servers. To find the IP address for your API servers, run oc get ep kubernetes -n default . For more information, see BZ#1988324 . Important You must have OpenShift SDN configured to use either the network policy or multitenant mode to configure an egress firewall.
If you use network policy mode, an egress firewall is compatible with only one policy per namespace and will not work with projects that share a network, such as global projects. Warning Egress firewall rules do not apply to traffic that goes through routers. Any user with permission to create a Route CR object can bypass egress firewall policy rules by creating a route that points to a forbidden destination. 24.5.1.1. Limitations of an egress firewall An egress firewall has the following limitations: No project can have more than one EgressNetworkPolicy object. Important The creation of more than one EgressNetworkPolicy object is allowed, however it should not be done. When you create more than one EgressNetworkPolicy object, the following message is returned: dropping all rules . In actuality, all external traffic is dropped, which can cause security risks for your organization. A maximum of one EgressNetworkPolicy object with a maximum of 1,000 rules can be defined per project. The default project cannot use an egress firewall. When using the OpenShift SDN network plugin in multitenant mode, the following limitations apply: Global projects cannot use an egress firewall. You can make a project global by using the oc adm pod-network make-projects-global command. Projects merged by using the oc adm pod-network join-projects command cannot use an egress firewall in any of the joined projects. If you create a selectorless service and manually define endpoints or EndpointSlices that point to external IPs, traffic to the service IP might still be allowed, even if your EgressNetworkPolicy is configured to deny all egress traffic. This occurs because OpenShift SDN does not fully enforce egress network policies for these external endpoints. Consequently, this might result in unexpected access to external services. Violating any of these restrictions results in a broken egress firewall for the project. Consequently, all external network traffic is dropped, which can cause security risks for your organization. An Egress Firewall resource can be created in the kube-node-lease , kube-public , kube-system , openshift and openshift- projects. 24.5.1.2. Matching order for egress firewall policy rules The egress firewall policy rules are evaluated in the order that they are defined, from first to last. The first rule that matches an egress connection from a pod applies. Any subsequent rules are ignored for that connection. 24.5.1.3. How Domain Name Server (DNS) resolution works If you use DNS names in any of your egress firewall policy rules, proper resolution of the domain names is subject to the following restrictions: Domain name updates are polled based on a time-to-live (TTL) duration. By default, the duration is 30 seconds. When the egress firewall controller queries the local name servers for a domain name, if the response includes a TTL that is less than 30 seconds, the controller sets the duration to the returned value. If the TTL in the response is greater than 30 minutes, the controller sets the duration to 30 minutes. If the TTL is between 30 seconds and 30 minutes, the controller ignores the value and sets the duration to 30 seconds. The pod must resolve the domain from the same local name servers when necessary. Otherwise the IP addresses for the domain known by the egress firewall controller and the pod can be different. If the IP addresses for a hostname differ, the egress firewall might not be enforced consistently. 
Because the egress firewall controller and pods asynchronously poll the same local name server, the pod might obtain the updated IP address before the egress controller does, which causes a race condition. Due to this current limitation, domain name usage in EgressNetworkPolicy objects is only recommended for domains with infrequent IP address changes. Note The egress firewall always allows pods access to the external interface of the node that the pod is on for DNS resolution. If you are using domain names in your pods and your DNS resolution is not handled by a DNS server on the local node, you must add egress firewall rules that allow access to your DNS server's IP addresses. 24.5.2. EgressNetworkPolicy custom resource (CR) object You can define one or more rules for an egress firewall. A rule is either an Allow rule or a Deny rule, with a specification for the traffic that the rule applies to. The following YAML describes an EgressNetworkPolicy CR object: EgressNetworkPolicy object apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: <name> 1 spec: egress: 2 ... 1 A name for your egress firewall policy. 2 A collection of one or more egress network policy rules as described in the following section. 24.5.2.1. EgressNetworkPolicy rules The following YAML describes an egress firewall rule object. The user can select either an IP address range in CIDR format or a domain name. The egress stanza expects an array of one or more objects. Egress policy rule stanza egress: - type: <type> 1 to: 2 cidrSelector: <cidr> 3 dnsName: <dns_name> 4 1 The type of rule. The value must be either Allow or Deny . 2 A stanza describing an egress traffic match rule. A value for either the cidrSelector field or the dnsName field for the rule. You cannot use both fields in the same rule. 3 An IP address range in CIDR format. 4 A domain name. 24.5.2.2. Example EgressNetworkPolicy CR objects The following example defines several egress firewall policy rules: apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: default spec: egress: 1 - type: Allow to: cidrSelector: 1.2.3.0/24 - type: Allow to: dnsName: www.example.com - type: Deny to: cidrSelector: 0.0.0.0/0 1 A collection of egress firewall policy rule objects. 24.5.3. Creating an egress firewall policy object As a cluster administrator, you can create an egress firewall policy object for a project. Important If the project already has an EgressNetworkPolicy object defined, you must edit the existing policy to make changes to the egress firewall rules. Prerequisites A cluster that uses the OpenShift SDN network plugin. Install the OpenShift CLI ( oc ). You must log in to the cluster as a cluster administrator. Procedure Create a policy rule: Create a <policy_name>.yaml file where <policy_name> describes the egress policy rules. In the file you created, define an egress policy object. Enter the following command to create the policy object. Replace <policy_name> with the name of the policy and <project> with the project that the rule applies to. USD oc create -f <policy_name>.yaml -n <project> In the following example, a new EgressNetworkPolicy object is created in a project named project1 : USD oc create -f default.yaml -n project1 Example output egressnetworkpolicy.network.openshift.io/default created Optional: Save the <policy_name>.yaml file so that you can make changes later.
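As an illustrative sketch only, the following commands combine the guidance from this section: they write a minimal policy file that first allows the API server address range and then denies all other external traffic, and then create the policy in a hypothetical project named project1 . The <api_server_address_range> value is a placeholder that you can determine with the oc get ep kubernetes -n default command mentioned earlier.

cat <<'EOF' > default.yaml
apiVersion: network.openshift.io/v1
kind: EgressNetworkPolicy
metadata:
  name: default
spec:
  egress:
  - type: Allow
    to:
      cidrSelector: <api_server_address_range>
  - type: Deny
    to:
      cidrSelector: 0.0.0.0/0
EOF

oc create -f default.yaml -n project1

Because rules are evaluated in order, the Allow rule for the API server range must appear before the global Deny rule.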
24.6. Viewing an egress firewall for a project As a cluster administrator, you can view the details of any existing egress firewall, including its network traffic rules. 24.6.1. Viewing an EgressNetworkPolicy object You can view an EgressNetworkPolicy object in your cluster. Prerequisites A cluster using the OpenShift SDN network plugin. Install the OpenShift Command-line Interface (CLI), commonly known as oc . You must log in to the cluster. Procedure Optional: To view the names of the EgressNetworkPolicy objects defined in your cluster, enter the following command: USD oc get egressnetworkpolicy --all-namespaces To inspect a policy, enter the following command. Replace <policy_name> with the name of the policy to inspect. USD oc describe egressnetworkpolicy <policy_name> Example output Name: default Namespace: project1 Created: 20 minutes ago Labels: <none> Annotations: <none> Rule: Allow to 1.2.3.0/24 Rule: Allow to www.example.com Rule: Deny to 0.0.0.0/0 24.7. Editing an egress firewall for a project As a cluster administrator, you can modify network traffic rules for an existing egress firewall. 24.7.1. Editing an EgressNetworkPolicy object As a cluster administrator, you can update the egress firewall for a project. Prerequisites A cluster using the OpenShift SDN network plugin. Install the OpenShift CLI ( oc ). You must log in to the cluster as a cluster administrator. Procedure Find the name of the EgressNetworkPolicy object for the project. Replace <project> with the name of the project. USD oc get -n <project> egressnetworkpolicy Optional: If you did not save a copy of the EgressNetworkPolicy object when you created the egress network firewall, enter the following command to create a copy. USD oc get -n <project> egressnetworkpolicy <name> -o yaml > <filename>.yaml Replace <project> with the name of the project. Replace <name> with the name of the object. Replace <filename> with the name of the file to save the YAML to. After making changes to the policy rules, enter the following command to replace the EgressNetworkPolicy object. Replace <filename> with the name of the file containing the updated EgressNetworkPolicy object. USD oc replace -f <filename>.yaml 24.8. Removing an egress firewall from a project As a cluster administrator, you can remove an egress firewall from a project to remove all restrictions on network traffic from the project that leaves the OpenShift Container Platform cluster. 24.8.1. Removing an EgressNetworkPolicy object As a cluster administrator, you can remove an egress firewall from a project. Prerequisites A cluster using the OpenShift SDN network plugin. Install the OpenShift CLI ( oc ). You must log in to the cluster as a cluster administrator. Procedure Find the name of the EgressNetworkPolicy object for the project. Replace <project> with the name of the project. USD oc get -n <project> egressnetworkpolicy Enter the following command to delete the EgressNetworkPolicy object. Replace <project> with the name of the project and <name> with the name of the object. USD oc delete -n <project> egressnetworkpolicy <name> 24.9. Considerations for the use of an egress router pod 24.9.1. About an egress router pod The OpenShift Container Platform egress router pod redirects traffic to a specified remote server from a private source IP address that is not used for any other purpose. An egress router pod can send network traffic to servers that are set up to allow access only from specific IP addresses. Note The egress router pod is not intended for every outgoing connection.
Creating large numbers of egress router pods can exceed the limits of your network hardware. For example, creating an egress router pod for every project or application could exceed the number of local MAC addresses that the network interface can handle before reverting to filtering MAC addresses in software. Important The egress router image is not compatible with Amazon AWS, Azure Cloud, or any other cloud platform that does not support layer 2 manipulations due to their incompatibility with macvlan traffic. 24.9.1.1. Egress router modes In redirect mode , an egress router pod configures iptables rules to redirect traffic from its own IP address to one or more destination IP addresses. Client pods that need to use the reserved source IP address must be configured to access the service for the egress router rather than connecting directly to the destination IP. You can access the destination service and port from the application pod by using the curl command. For example: USD curl <router_service_IP> <port> In HTTP proxy mode , an egress router pod runs as an HTTP proxy on port 8080 . This mode only works for clients that are connecting to HTTP-based or HTTPS-based services, but usually requires fewer changes to the client pods to get them to work. Many programs can be told to use an HTTP proxy by setting an environment variable. In DNS proxy mode , an egress router pod runs as a DNS proxy for TCP-based services from its own IP address to one or more destination IP addresses. To make use of the reserved source IP address, client pods must be modified to connect to the egress router pod rather than connecting directly to the destination IP address. This modification ensures that external destinations treat traffic as though it were coming from a known source. Redirect mode works for all services except for HTTP and HTTPS. For HTTP and HTTPS services, use HTTP proxy mode. For TCP-based services with IP addresses or domain names, use DNS proxy mode. 24.9.1.2. Egress router pod implementation The egress router pod setup is performed by an initialization container. That container runs in a privileged context so that it can configure the macvlan interface and set up iptables rules. After the initialization container finishes setting up the iptables rules, it exits. Next, the egress router pod executes the container to handle the egress router traffic. The image used varies depending on the egress router mode. The environment variables determine which addresses the egress-router image uses. The image configures the macvlan interface to use EGRESS_SOURCE as its IP address, with EGRESS_GATEWAY as the IP address for the gateway. Network Address Translation (NAT) rules are set up so that connections to the cluster IP address of the pod on any TCP or UDP port are redirected to the same port on the IP address specified by the EGRESS_DESTINATION variable. If only some of the nodes in your cluster are capable of claiming the specified source IP address and using the specified gateway, you can specify a nodeName or nodeSelector to identify which nodes are acceptable. 24.9.1.3. Deployment considerations An egress router pod adds an additional IP address and MAC address to the primary network interface of the node. As a result, you might need to configure your hypervisor or cloud provider to allow the additional address. Red Hat OpenStack Platform (RHOSP) If you deploy OpenShift Container Platform on RHOSP, you must allow traffic from the IP and MAC addresses of the egress router pod on your OpenStack environment.
If you do not allow the traffic, then communication will fail : USD openstack port set --allowed-address \ ip_address=<ip_address>,mac_address=<mac_address> <neutron_port_uuid> Red Hat Virtualization (RHV) If you are using RHV , you must select No Network Filter for the Virtual network interface controller (vNIC). VMware vSphere If you are using VMware vSphere, see the VMware documentation for securing vSphere standard switches . View and change VMware vSphere default settings by selecting the host virtual switch from the vSphere Web Client. Specifically, ensure that the following are enabled: MAC Address Changes Forged Transits Promiscuous Mode Operation 24.9.1.4. Failover configuration To avoid downtime, you can deploy an egress router pod with a Deployment resource, as in the following example. To create a new Service object for the example deployment, use the oc expose deployment/egress-demo-controller command. apiVersion: apps/v1 kind: Deployment metadata: name: egress-demo-controller spec: replicas: 1 1 selector: matchLabels: name: egress-router template: metadata: name: egress-router labels: name: egress-router annotations: pod.network.openshift.io/assign-macvlan: "true" spec: 2 initContainers: ... containers: ... 1 Ensure that replicas is set to 1 , because only one pod can use a given egress source IP address at any time. This means that only a single copy of the router runs on a node. 2 Specify the Pod object template for the egress router pod. 24.9.2. Additional resources Deploying an egress router in redirection mode Deploying an egress router in HTTP proxy mode Deploying an egress router in DNS proxy mode 24.10. Deploying an egress router pod in redirect mode As a cluster administrator, you can deploy an egress router pod that is configured to redirect traffic to specified destination IP addresses. 24.10.1. Egress router pod specification for redirect mode Define the configuration for an egress router pod in the Pod object. The following YAML describes the fields for the configuration of an egress router pod in redirect mode: apiVersion: v1 kind: Pod metadata: name: egress-1 labels: name: egress-1 annotations: pod.network.openshift.io/assign-macvlan: "true" 1 spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE 2 value: <egress_router> - name: EGRESS_GATEWAY 3 value: <egress_gateway> - name: EGRESS_DESTINATION 4 value: <egress_destination> - name: EGRESS_ROUTER_MODE value: init containers: - name: egress-router-wait image: registry.redhat.io/openshift4/ose-pod 1 The annotation tells OpenShift Container Platform to create a macvlan network interface on the primary network interface controller (NIC) and move that macvlan interface into the pod's network namespace. You must include the quotation marks around the "true" value. To have OpenShift Container Platform create the macvlan interface on a different NIC interface, set the annotation value to the name of that interface. For example, eth1 . 2 IP address from the physical network that the node is on that is reserved for use by the egress router pod. Optional: You can include the subnet length, the /24 suffix, so that a proper route to the local subnet is set. If you do not specify a subnet length, then the egress router can access only the host specified with the EGRESS_GATEWAY variable and no other hosts on the subnet. 3 Same value as the default gateway used by the node. 4 External server to direct traffic to. 
Using this example, connections to the pod are redirected to 203.0.113.25 , with a source IP address of 192.168.12.99 . Example egress router pod specification apiVersion: v1 kind: Pod metadata: name: egress-multi labels: name: egress-multi annotations: pod.network.openshift.io/assign-macvlan: "true" spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE value: 192.168.12.99/24 - name: EGRESS_GATEWAY value: 192.168.12.1 - name: EGRESS_DESTINATION value: | 80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 203.0.113.27 - name: EGRESS_ROUTER_MODE value: init containers: - name: egress-router-wait image: registry.redhat.io/openshift4/ose-pod 24.10.2. Egress destination configuration format When an egress router pod is deployed in redirect mode, you can specify redirection rules by using one or more of the following formats: <port> <protocol> <ip_address> - Incoming connections to the given <port> should be redirected to the same port on the given <ip_address> . <protocol> is either tcp or udp . <port> <protocol> <ip_address> <remote_port> - As above, except that the connection is redirected to a different <remote_port> on <ip_address> . <ip_address> - If the last line is a single IP address, then any connections on any other port will be redirected to the corresponding port on that IP address. If there is no fallback IP address then connections on other ports are rejected. In the example that follows several rules are defined: The first line redirects traffic from local port 80 to port 80 on 203.0.113.25 . The second and third lines redirect local ports 8080 and 8443 to remote ports 80 and 443 on 203.0.113.26 . The last line matches traffic for any ports not specified in the rules. Example configuration 80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 203.0.113.27 24.10.3. Deploying an egress router pod in redirect mode In redirect mode , an egress router pod sets up iptables rules to redirect traffic from its own IP address to one or more destination IP addresses. Client pods that need to use the reserved source IP address must be configured to access the service for the egress router rather than connecting directly to the destination IP. You can access the destination service and port from the application pod by using the curl command. For example: USD curl <router_service_IP> <port> Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an egress router pod. To ensure that other pods can find the IP address of the egress router pod, create a service to point to the egress router pod, as in the following example: apiVersion: v1 kind: Service metadata: name: egress-1 spec: ports: - name: http port: 80 - name: https port: 443 type: ClusterIP selector: name: egress-1 Your pods can now connect to this service. Their connections are redirected to the corresponding ports on the external server, using the reserved egress IP address. 24.10.4. Additional resources Configuring an egress router destination mappings with a ConfigMap 24.11. Deploying an egress router pod in HTTP proxy mode As a cluster administrator, you can deploy an egress router pod configured to proxy traffic to specified HTTP and HTTPS-based services. 24.11.1. Egress router pod specification for HTTP mode Define the configuration for an egress router pod in the Pod object. 
The following YAML describes the fields for the configuration of an egress router pod in HTTP mode: apiVersion: v1 kind: Pod metadata: name: egress-1 labels: name: egress-1 annotations: pod.network.openshift.io/assign-macvlan: "true" 1 spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE 2 value: <egress-router> - name: EGRESS_GATEWAY 3 value: <egress-gateway> - name: EGRESS_ROUTER_MODE value: http-proxy containers: - name: egress-router-pod image: registry.redhat.io/openshift4/ose-egress-http-proxy env: - name: EGRESS_HTTP_PROXY_DESTINATION 4 value: |- ... ... 1 The annotation tells OpenShift Container Platform to create a macvlan network interface on the primary network interface controller (NIC) and move that macvlan interface into the pod's network namespace. You must include the quotation marks around the "true" value. To have OpenShift Container Platform create the macvlan interface on a different NIC interface, set the annotation value to the name of that interface. For example, eth1 . 2 IP address from the physical network that the node is on that is reserved for use by the egress router pod. Optional: You can include the subnet length, the /24 suffix, so that a proper route to the local subnet is set. If you do not specify a subnet length, then the egress router can access only the host specified with the EGRESS_GATEWAY variable and no other hosts on the subnet. 3 Same value as the default gateway used by the node. 4 A string or YAML multi-line string specifying how to configure the proxy. Note that this is specified as an environment variable in the HTTP proxy container, not with the other environment variables in the init container. 24.11.2. Egress destination configuration format When an egress router pod is deployed in HTTP proxy mode, you can specify redirection rules by using one or more of the following formats. Each line in the configuration specifies one group of connections to allow or deny: An IP address allows connections to that IP address, such as 192.168.1.1 . A CIDR range allows connections to that CIDR range, such as 192.168.1.0/24 . A hostname allows proxying to that host, such as www.example.com . A domain name preceded by *. allows proxying to that domain and all of its subdomains, such as *.example.com . A ! followed by any of the match expressions denies the connection instead. If the last line is * , then anything that is not explicitly denied is allowed. Otherwise, anything that is not allowed is denied. You can also use * to allow connections to all remote destinations. Example configuration !*.example.com !192.168.1.0/24 192.168.2.1 * 24.11.3. Deploying an egress router pod in HTTP proxy mode In HTTP proxy mode , an egress router pod runs as an HTTP proxy on port 8080 . This mode only works for clients that are connecting to HTTP-based or HTTPS-based services, but usually requires fewer changes to the client pods to get them to work. Many programs can be told to use an HTTP proxy by setting an environment variable. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an egress router pod. 
To ensure that other pods can find the IP address of the egress router pod, create a service to point to the egress router pod, as in the following example: apiVersion: v1 kind: Service metadata: name: egress-1 spec: ports: - name: http-proxy port: 8080 1 type: ClusterIP selector: name: egress-1 1 Ensure the http port is set to 8080 . To configure the client pod (not the egress proxy pod) to use the HTTP proxy, set the http_proxy or https_proxy variables: apiVersion: v1 kind: Pod metadata: name: app-1 labels: name: app-1 spec: containers: env: - name: http_proxy value: http://egress-1:8080/ 1 - name: https_proxy value: http://egress-1:8080/ ... 1 The service created in the step. Note Using the http_proxy and https_proxy environment variables is not necessary for all setups. If the above does not create a working setup, then consult the documentation for the tool or software you are running in the pod. 24.11.4. Additional resources Configuring an egress router destination mappings with a ConfigMap 24.12. Deploying an egress router pod in DNS proxy mode As a cluster administrator, you can deploy an egress router pod configured to proxy traffic to specified DNS names and IP addresses. 24.12.1. Egress router pod specification for DNS mode Define the configuration for an egress router pod in the Pod object. The following YAML describes the fields for the configuration of an egress router pod in DNS mode: apiVersion: v1 kind: Pod metadata: name: egress-1 labels: name: egress-1 annotations: pod.network.openshift.io/assign-macvlan: "true" 1 spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE 2 value: <egress-router> - name: EGRESS_GATEWAY 3 value: <egress-gateway> - name: EGRESS_ROUTER_MODE value: dns-proxy containers: - name: egress-router-pod image: registry.redhat.io/openshift4/ose-egress-dns-proxy securityContext: privileged: true env: - name: EGRESS_DNS_PROXY_DESTINATION 4 value: |- ... - name: EGRESS_DNS_PROXY_DEBUG 5 value: "1" ... 1 The annotation tells OpenShift Container Platform to create a macvlan network interface on the primary network interface controller (NIC) and move that macvlan interface into the pod's network namespace. You must include the quotation marks around the "true" value. To have OpenShift Container Platform create the macvlan interface on a different NIC interface, set the annotation value to the name of that interface. For example, eth1 . 2 IP address from the physical network that the node is on that is reserved for use by the egress router pod. Optional: You can include the subnet length, the /24 suffix, so that a proper route to the local subnet is set. If you do not specify a subnet length, then the egress router can access only the host specified with the EGRESS_GATEWAY variable and no other hosts on the subnet. 3 Same value as the default gateway used by the node. 4 Specify a list of one or more proxy destinations. 5 Optional: Specify to output the DNS proxy log output to stdout . 24.12.2. Egress destination configuration format When the router is deployed in DNS proxy mode, you specify a list of port and destination mappings. A destination may be either an IP address or a DNS name. An egress router pod supports the following formats for specifying port and destination mappings: Port and remote address You can specify a source port and a destination host by using the two field format: <port> <remote_address> . The host can be an IP address or a DNS name. 
If a DNS name is provided, DNS resolution occurs at runtime. For a given host, the proxy connects to the specified source port on the destination host when connecting to the destination host IP address. Port and remote address pair example 80 172.16.12.11 100 example.com Port, remote address, and remote port You can specify a source port, a destination host, and a destination port by using the three field format: <port> <remote_address> <remote_port> . The three field format behaves identically to the two field version, with the exception that the destination port can be different than the source port. Port, remote address, and remote port example 8080 192.168.60.252 80 8443 web.example.com 443 24.12.3. Deploying an egress router pod in DNS proxy mode In DNS proxy mode , an egress router pod acts as a DNS proxy for TCP-based services from its own IP address to one or more destination IP addresses. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an egress router pod. Create a service for the egress router pod: Create a file named egress-router-service.yaml that contains the following YAML. Set spec.ports to the list of ports that you defined previously for the EGRESS_DNS_PROXY_DESTINATION environment variable. apiVersion: v1 kind: Service metadata: name: egress-dns-svc spec: ports: ... type: ClusterIP selector: name: egress-dns-proxy For example: apiVersion: v1 kind: Service metadata: name: egress-dns-svc spec: ports: - name: con1 protocol: TCP port: 80 targetPort: 80 - name: con2 protocol: TCP port: 100 targetPort: 100 type: ClusterIP selector: name: egress-dns-proxy To create the service, enter the following command: USD oc create -f egress-router-service.yaml Pods can now connect to this service. The connections are proxied to the corresponding ports on the external server, using the reserved egress IP address. 24.12.4. Additional resources Configuring an egress router destination mappings with a ConfigMap 24.13. Configuring an egress router pod destination list from a config map As a cluster administrator, you can define a ConfigMap object that specifies destination mappings for an egress router pod. The specific format of the configuration depends on the type of egress router pod. For details on the format, refer to the documentation for the specific egress router pod. 24.13.1. Configuring an egress router destination mappings with a config map For a large or frequently-changing set of destination mappings, you can use a config map to externally maintain the list. An advantage of this approach is that permission to edit the config map can be delegated to users without cluster-admin privileges. Because the egress router pod requires a privileged container, it is not possible for users without cluster-admin privileges to edit the pod definition directly. Note The egress router pod does not automatically update when the config map changes. You must restart the egress router pod to get updates. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a file containing the mapping data for the egress router pod, as in the following example: # Egress routes for Project "Test", version 3 80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 # Fallback 203.0.113.27 You can put blank lines and comments into this file. 
Create a ConfigMap object from the file: USD oc delete configmap egress-routes --ignore-not-found USD oc create configmap egress-routes \ --from-file=destination=my-egress-destination.txt In the command, the egress-routes value is the name of the ConfigMap object to create and my-egress-destination.txt is the name of the file that the data is read from. Tip You can alternatively apply the following YAML to create the config map: apiVersion: v1 kind: ConfigMap metadata: name: egress-routes data: destination: | # Egress routes for Project "Test", version 3 80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 # Fallback 203.0.113.27 Create an egress router pod definition and specify the configMapKeyRef stanza for the EGRESS_DESTINATION field in the environment stanza: ... env: - name: EGRESS_DESTINATION valueFrom: configMapKeyRef: name: egress-routes key: destination ... 24.13.2. Additional resources Redirect mode HTTP proxy mode DNS proxy mode 24.14. Enabling multicast for a project 24.14.1. About multicast With IP multicast, data is broadcast to many IP addresses simultaneously. Important At this time, multicast is best used for low-bandwidth coordination or service discovery and not a high-bandwidth solution. By default, network policies affect all connections in a namespace. However, multicast is unaffected by network policies. If multicast is enabled in the same namespace as your network policies, it is always allowed, even if there is a deny-all network policy. Cluster administrators should consider the implications to the exemption of multicast from network policies before enabling it. Multicast traffic between OpenShift Container Platform pods is disabled by default. If you are using the OpenShift SDN network plugin, you can enable multicast on a per-project basis. When using the OpenShift SDN network plugin in networkpolicy isolation mode: Multicast packets sent by a pod will be delivered to all other pods in the project, regardless of NetworkPolicy objects. Pods might be able to communicate over multicast even when they cannot communicate over unicast. Multicast packets sent by a pod in one project will never be delivered to pods in any other project, even if there are NetworkPolicy objects that allow communication between the projects. When using the OpenShift SDN network plugin in multitenant isolation mode: Multicast packets sent by a pod will be delivered to all other pods in the project. Multicast packets sent by a pod in one project will be delivered to pods in other projects only if each project is joined together and multicast is enabled in each joined project. 24.14.2. Enabling multicast between pods You can enable multicast between pods for your project. Prerequisites Install the OpenShift CLI ( oc ). You must log in to the cluster with a user that has the cluster-admin role. Procedure Run the following command to enable multicast for a project. Replace <namespace> with the namespace for the project you want to enable multicast for. USD oc annotate netnamespace <namespace> \ netnamespace.network.openshift.io/multicast-enabled=true Verification To verify that multicast is enabled for a project, complete the following procedure: Change your current project to the project that you enabled multicast for. Replace <project> with the project name. 
USD oc project <project> Create a pod to act as a multicast receiver: USD cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: mlistener labels: app: multicast-verify spec: containers: - name: mlistener image: registry.access.redhat.com/ubi8 command: ["/bin/sh", "-c"] args: ["dnf -y install socat hostname && sleep inf"] ports: - containerPort: 30102 name: mlistener protocol: UDP EOF Create a pod to act as a multicast sender: USD cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: msender labels: app: multicast-verify spec: containers: - name: msender image: registry.access.redhat.com/ubi8 command: ["/bin/sh", "-c"] args: ["dnf -y install socat && sleep inf"] EOF In a new terminal window or tab, start the multicast listener. Get the IP address for the Pod: USD POD_IP=USD(oc get pods mlistener -o jsonpath='{.status.podIP}') Start the multicast listener by entering the following command: USD oc exec mlistener -i -t -- \ socat UDP4-RECVFROM:30102,ip-add-membership=224.1.0.1:USDPOD_IP,fork EXEC:hostname Start the multicast transmitter. Get the pod network IP address range: USD CIDR=USD(oc get Network.config.openshift.io cluster \ -o jsonpath='{.status.clusterNetwork[0].cidr}') To send a multicast message, enter the following command: USD oc exec msender -i -t -- \ /bin/bash -c "echo | socat STDIO UDP4-DATAGRAM:224.1.0.1:30102,range=USDCIDR,ip-multicast-ttl=64" If multicast is working, the command returns the following output: mlistener 24.15. Disabling multicast for a project 24.15.1. Disabling multicast between pods You can disable multicast between pods for your project. Prerequisites Install the OpenShift CLI ( oc ). You must log in to the cluster with a user that has the cluster-admin role. Procedure Disable multicast by running the following command: USD oc annotate netnamespace <namespace> \ 1 netnamespace.network.openshift.io/multicast-enabled- 1 The namespace for the project you want to disable multicast for. 24.16. Configuring network isolation using OpenShift SDN When your cluster is configured to use the multitenant isolation mode for the OpenShift SDN network plugin, each project is isolated by default. Network traffic is not allowed between pods or services in different projects in multitenant isolation mode. You can change the behavior of multitenant isolation for a project in two ways: You can join one or more projects, allowing network traffic between pods and services in different projects. You can disable network isolation for a project. It will be globally accessible, accepting network traffic from pods and services in all other projects. A globally accessible project can access pods and services in all other projects. 24.16.1. Prerequisites You must have a cluster configured to use the OpenShift SDN network plugin in multitenant isolation mode. 24.16.2. Joining projects You can join two or more projects to allow network traffic between pods and services in different projects. Prerequisites Install the OpenShift CLI ( oc ). You must log in to the cluster with a user that has the cluster-admin role. Procedure Use the following command to join projects to an existing project network: USD oc adm pod-network join-projects --to=<project1> <project2> <project3> Alternatively, instead of specifying specific project names, you can use the --selector=<project_selector> option to specify projects based upon an associated label. 
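For example, assuming your projects carry a common label such as team=alpha (a hypothetical label used here only for illustration), you could join them by selector instead of listing each project name:

oc adm pod-network join-projects --to=project1 --selector='team=alpha'

The selector matches labels on the project objects, so the projects that you want to join must already be labeled before you run the command.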
Optional: Run the following command to view the pod networks that you have joined together: USD oc get netnamespaces Projects in the same pod-network have the same network ID in the NETID column. 24.16.3. Isolating a project You can isolate a project so that pods and services in other projects cannot access its pods and services. Prerequisites Install the OpenShift CLI ( oc ). You must log in to the cluster with a user that has the cluster-admin role. Procedure To isolate the projects in the cluster, run the following command: USD oc adm pod-network isolate-projects <project1> <project2> Alternatively, instead of specifying specific project names, you can use the --selector=<project_selector> option to specify projects based upon an associated label. 24.16.4. Disabling network isolation for a project You can disable network isolation for a project. Prerequisites Install the OpenShift CLI ( oc ). You must log in to the cluster with a user that has the cluster-admin role. Procedure Run the following command for the project: USD oc adm pod-network make-projects-global <project1> <project2> Alternatively, instead of specifying specific project names, you can use the --selector=<project_selector> option to specify projects based upon an associated label. 24.17. Configuring kube-proxy The Kubernetes network proxy (kube-proxy) runs on each node and is managed by the Cluster Network Operator (CNO). kube-proxy maintains network rules for forwarding connections for endpoints associated with services. 24.17.1. About iptables rules synchronization The synchronization period determines how frequently the Kubernetes network proxy (kube-proxy) syncs the iptables rules on a node. A sync begins when either of the following events occurs: An event occurs, such as service or endpoint is added to or removed from the cluster. The time since the last sync exceeds the sync period defined for kube-proxy. 24.17.2. kube-proxy configuration parameters You can modify the following kubeProxyConfig parameters. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. Table 24.3. Parameters Parameter Description Values Default iptablesSyncPeriod The refresh period for iptables rules. A time interval, such as 30s or 2m . Valid suffixes include s , m , and h and are described in the Go time package documentation. 30s proxyArguments.iptables-min-sync-period The minimum duration before refreshing iptables rules. This parameter ensures that the refresh does not happen too frequently. By default, a refresh starts as soon as a change that affects iptables rules occurs. A time interval, such as 30s or 2m . Valid suffixes include s , m , and h and are described in the Go time package 0s 24.17.3. Modifying the kube-proxy configuration You can modify the Kubernetes network proxy configuration for your cluster. Prerequisites Install the OpenShift CLI ( oc ). Log in to a running cluster with the cluster-admin role. Procedure Edit the Network.operator.openshift.io custom resource (CR) by running the following command: USD oc edit network.operator.openshift.io cluster Modify the kubeProxyConfig parameter in the CR with your changes to the kube-proxy configuration, such as in the following example CR: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: kubeProxyConfig: iptablesSyncPeriod: 30s proxyArguments: iptables-min-sync-period: ["30s"] Save the file and exit the text editor. 
The syntax is validated by the oc command when you save the file and exit the editor. If your modifications contain a syntax error, the editor opens the file and displays an error message. Enter the following command to confirm the configuration update: USD oc get networks.operator.openshift.io -o yaml Example output apiVersion: v1 items: - apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 defaultNetwork: type: OpenShiftSDN kubeProxyConfig: iptablesSyncPeriod: 30s proxyArguments: iptables-min-sync-period: - 30s serviceNetwork: - 172.30.0.0/16 status: {} kind: List Optional: Enter the following command to confirm that the Cluster Network Operator accepted the configuration change: USD oc get clusteroperator network Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE network 4.1.0-0.9 True False False 1m The AVAILABLE field is True when the configuration update is applied successfully. | [
"oc patch MachineConfigPool master --type='merge' --patch '{ \"spec\": { \"paused\": true } }'",
"oc patch MachineConfigPool worker --type='merge' --patch '{ \"spec\":{ \"paused\": true } }'",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": null } }'",
"oc get Network.config cluster -o jsonpath='{.status.migration}'",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": { \"networkType\": \"OpenShiftSDN\" } } }'",
"oc get Network.config cluster -o jsonpath='{.status.migration.networkType}'",
"oc patch Network.config.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"networkType\": \"OpenShiftSDN\" } }'",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": { \"networkType\": \"OpenShiftSDN\", \"features\": { \"egressIP\": <bool>, \"egressFirewall\": <bool>, \"multicast\": <bool> } } } }'",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{ \"spec\":{ \"defaultNetwork\":{ \"openshiftSDNConfig\":{ \"mtu\":<mtu>, \"vxlanPort\":<port> }}}}'",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{ \"spec\":{ \"defaultNetwork\":{ \"openshiftSDNConfig\":{ \"mtu\":1200 }}}}'",
"#!/bin/bash readarray -t POD_NODES <<< \"USD(oc get pod -n openshift-machine-config-operator -o wide| grep daemon|awk '{print USD1\" \"USD7}')\" for i in \"USD{POD_NODES[@]}\" do read -r POD NODE <<< \"USDi\" until oc rsh -n openshift-machine-config-operator \"USDPOD\" chroot /rootfs shutdown -r +1 do echo \"cannot reboot node USDNODE, retry\" && sleep 3 done done",
"#!/bin/bash for ip in USD(oc get nodes -o jsonpath='{.items[*].status.addresses[?(@.type==\"InternalIP\")].address}') do echo \"reboot node USDip\" ssh -o StrictHostKeyChecking=no core@USDip sudo shutdown -r -t 3 done",
"oc -n openshift-multus rollout status daemonset/multus",
"Waiting for daemon set \"multus\" rollout to finish: 1 out of 6 new pods have been updated Waiting for daemon set \"multus\" rollout to finish: 5 of 6 updated pods are available daemon set \"multus\" successfully rolled out",
"oc patch MachineConfigPool master --type='merge' --patch '{ \"spec\": { \"paused\": false } }'",
"oc patch MachineConfigPool worker --type='merge' --patch '{ \"spec\": { \"paused\": false } }'",
"oc describe node | egrep \"hostname|machineconfig\"",
"kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done",
"oc get machineconfig <config_name> -o yaml",
"oc get Network.config/cluster -o jsonpath='{.status.networkType}{\"\\n\"}'",
"oc get nodes",
"oc get pod -n openshift-machine-config-operator",
"NAME READY STATUS RESTARTS AGE machine-config-controller-75f756f89d-sjp8b 1/1 Running 0 37m machine-config-daemon-5cf4b 2/2 Running 0 43h machine-config-daemon-7wzcd 2/2 Running 0 43h machine-config-daemon-fc946 2/2 Running 0 43h machine-config-daemon-g2v28 2/2 Running 0 43h machine-config-daemon-gcl4f 2/2 Running 0 43h machine-config-daemon-l5tnv 2/2 Running 0 43h machine-config-operator-79d9c55d5-hth92 1/1 Running 0 37m machine-config-server-bsc8h 1/1 Running 0 43h machine-config-server-hklrm 1/1 Running 0 43h machine-config-server-k9rtx 1/1 Running 0 43h",
"oc logs <pod> -n openshift-machine-config-operator",
"oc get pods --all-namespaces -o wide --sort-by='{.spec.nodeName}'",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": null } }'",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"defaultNetwork\": { \"ovnKubernetesConfig\":null } } }'",
"oc delete namespace openshift-ovn-kubernetes",
"oc get Network.config.openshift.io cluster -o yaml > cluster-openshift-sdn.yaml",
"#!/bin/bash if [ -n \"USDOVN_SDN_MIGRATION_TIMEOUT\" ] && [ \"USDOVN_SDN_MIGRATION_TIMEOUT\" = \"0s\" ]; then unset OVN_SDN_MIGRATION_TIMEOUT fi #loops the timeout command of the script to repeatedly check the cluster Operators until all are available. co_timeout=USD{OVN_SDN_MIGRATION_TIMEOUT:-1200s} timeout \"USDco_timeout\" bash <<EOT until oc wait co --all --for='condition=AVAILABLE=True' --timeout=10s && oc wait co --all --for='condition=PROGRESSING=False' --timeout=10s && oc wait co --all --for='condition=DEGRADED=False' --timeout=10s; do sleep 10 echo \"Some ClusterOperators Degraded=False,Progressing=True,or Available=False\"; done EOT",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{\"spec\":{\"migration\":null}}'",
"oc get nncp",
"NAME STATUS REASON bondmaster0 Available SuccessfullyConfigured",
"oc delete nncp <nncp_manifest_filename>",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": { \"networkType\": \"OVNKubernetes\" } } }'",
"oc get mcp",
"oc get co",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": { \"networkType\": \"OVNKubernetes\", \"features\": { \"egressIP\": <bool>, \"egressFirewall\": <bool>, \"multicast\": <bool> } } } }'",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{ \"spec\":{ \"defaultNetwork\":{ \"ovnKubernetesConfig\":{ \"mtu\":<mtu>, \"genevePort\":<port>, \"v4InternalSubnet\":\"<ipv4_subnet>\" }}}}'",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{ \"spec\":{ \"defaultNetwork\":{ \"ovnKubernetesConfig\":{ \"mtu\":1200 }}}}'",
"oc get mcp",
"oc describe node | egrep \"hostname|machineconfig\"",
"kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done",
"oc get machineconfig <config_name> -o yaml | grep ExecStart",
"ExecStart=/usr/local/bin/configure-ovs.sh OVNKubernetes",
"oc get pod -n openshift-machine-config-operator",
"NAME READY STATUS RESTARTS AGE machine-config-controller-75f756f89d-sjp8b 1/1 Running 0 37m machine-config-daemon-5cf4b 2/2 Running 0 43h machine-config-daemon-7wzcd 2/2 Running 0 43h machine-config-daemon-fc946 2/2 Running 0 43h machine-config-daemon-g2v28 2/2 Running 0 43h machine-config-daemon-gcl4f 2/2 Running 0 43h machine-config-daemon-l5tnv 2/2 Running 0 43h machine-config-operator-79d9c55d5-hth92 1/1 Running 0 37m machine-config-server-bsc8h 1/1 Running 0 43h machine-config-server-hklrm 1/1 Running 0 43h machine-config-server-k9rtx 1/1 Running 0 43h",
"oc logs <pod> -n openshift-machine-config-operator",
"oc patch Network.config.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"networkType\": \"OVNKubernetes\" } }'",
"oc patch Network.config.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"clusterNetwork\": [ { \"cidr\": \"<cidr>\", \"hostPrefix\": <prefix> } ], \"networkType\": \"OVNKubernetes\" } }'",
"oc -n openshift-multus rollout status daemonset/multus",
"Waiting for daemon set \"multus\" rollout to finish: 1 out of 6 new pods have been updated Waiting for daemon set \"multus\" rollout to finish: 5 of 6 updated pods are available daemon set \"multus\" successfully rolled out",
"#!/bin/bash readarray -t POD_NODES <<< \"USD(oc get pod -n openshift-machine-config-operator -o wide| grep daemon|awk '{print USD1\" \"USD7}')\" for i in \"USD{POD_NODES[@]}\" do read -r POD NODE <<< \"USDi\" until oc rsh -n openshift-machine-config-operator \"USDPOD\" chroot /rootfs shutdown -r +1 do echo \"cannot reboot node USDNODE, retry\" && sleep 3 done done",
"#!/bin/bash for ip in USD(oc get nodes -o jsonpath='{.items[*].status.addresses[?(@.type==\"InternalIP\")].address}') do echo \"reboot node USDip\" ssh -o StrictHostKeyChecking=no core@USDip sudo shutdown -r -t 3 done",
"oc get network.config/cluster -o jsonpath='{.status.networkType}{\"\\n\"}'",
"oc get nodes",
"oc get pods --all-namespaces -o wide --sort-by='{.spec.nodeName}'",
"oc get co",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": null } }'",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"defaultNetwork\": { \"openshiftSDNConfig\": null } } }'",
"oc delete namespace openshift-sdn",
"IP capacity = public cloud default capacity - sum(current IP assignments)",
"cloud.network.openshift.io/egress-ipconfig: [ { \"interface\":\"eni-078d267045138e436\", \"ifaddr\":{\"ipv4\":\"10.0.128.0/18\"}, \"capacity\":{\"ipv4\":14,\"ipv6\":15} } ]",
"cloud.network.openshift.io/egress-ipconfig: [ { \"interface\":\"nic0\", \"ifaddr\":{\"ipv4\":\"10.0.128.0/18\"}, \"capacity\":{\"ip\":14} } ]",
"oc patch netnamespace <project_name> --type=merge -p '{ \"egressIPs\": [ \"<ip_address>\" ] }'",
"oc patch netnamespace project1 --type=merge -p '{\"egressIPs\": [\"192.168.1.100\"]}' oc patch netnamespace project2 --type=merge -p '{\"egressIPs\": [\"192.168.1.101\"]}'",
"oc patch hostsubnet <node_name> --type=merge -p '{ \"egressCIDRs\": [ \"<ip_address_range>\", \"<ip_address_range>\" ] }'",
"oc patch hostsubnet node1 --type=merge -p '{\"egressCIDRs\": [\"192.168.1.0/24\"]}' oc patch hostsubnet node2 --type=merge -p '{\"egressCIDRs\": [\"192.168.1.0/24\"]}'",
"oc patch netnamespace <project_name> --type=merge -p '{ \"egressIPs\": [ \"<ip_address>\" ] }'",
"oc patch netnamespace project1 --type=merge -p '{\"egressIPs\": [\"192.168.1.100\",\"192.168.1.101\"]}'",
"oc patch hostsubnet <node_name> --type=merge -p '{ \"egressIPs\": [ \"<ip_address>\", \"<ip_address>\" ] }'",
"oc patch hostsubnet node1 --type=merge -p '{\"egressIPs\": [\"192.168.1.100\", \"192.168.1.101\", \"192.168.1.102\"]}'",
"apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: default namespace: <namespace> 1 spec: egress: - to: cidrSelector: <api_server_address_range> 2 type: Allow - to: cidrSelector: 0.0.0.0/0 3 type: Deny",
"apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: <name> 1 spec: egress: 2",
"egress: - type: <type> 1 to: 2 cidrSelector: <cidr> 3 dnsName: <dns_name> 4",
"apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: default spec: egress: 1 - type: Allow to: cidrSelector: 1.2.3.0/24 - type: Allow to: dnsName: www.example.com - type: Deny to: cidrSelector: 0.0.0.0/0",
"oc create -f <policy_name>.yaml -n <project>",
"oc create -f default.yaml -n project1",
"egressnetworkpolicy.network.openshift.io/v1 created",
"oc get egressnetworkpolicy --all-namespaces",
"oc describe egressnetworkpolicy <policy_name>",
"Name: default Namespace: project1 Created: 20 minutes ago Labels: <none> Annotations: <none> Rule: Allow to 1.2.3.0/24 Rule: Allow to www.example.com Rule: Deny to 0.0.0.0/0",
"oc get -n <project> egressnetworkpolicy",
"oc get -n <project> egressnetworkpolicy <name> -o yaml > <filename>.yaml",
"oc replace -f <filename>.yaml",
"oc get -n <project> egressnetworkpolicy",
"oc delete -n <project> egressnetworkpolicy <name>",
"curl <router_service_IP> <port>",
"openstack port set --allowed-address ip_address=<ip_address>,mac_address=<mac_address> <neutron_port_uuid>",
"apiVersion: apps/v1 kind: Deployment metadata: name: egress-demo-controller spec: replicas: 1 1 selector: matchLabels: name: egress-router template: metadata: name: egress-router labels: name: egress-router annotations: pod.network.openshift.io/assign-macvlan: \"true\" spec: 2 initContainers: containers:",
"apiVersion: v1 kind: Pod metadata: name: egress-1 labels: name: egress-1 annotations: pod.network.openshift.io/assign-macvlan: \"true\" 1 spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE 2 value: <egress_router> - name: EGRESS_GATEWAY 3 value: <egress_gateway> - name: EGRESS_DESTINATION 4 value: <egress_destination> - name: EGRESS_ROUTER_MODE value: init containers: - name: egress-router-wait image: registry.redhat.io/openshift4/ose-pod",
"apiVersion: v1 kind: Pod metadata: name: egress-multi labels: name: egress-multi annotations: pod.network.openshift.io/assign-macvlan: \"true\" spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE value: 192.168.12.99/24 - name: EGRESS_GATEWAY value: 192.168.12.1 - name: EGRESS_DESTINATION value: | 80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 203.0.113.27 - name: EGRESS_ROUTER_MODE value: init containers: - name: egress-router-wait image: registry.redhat.io/openshift4/ose-pod",
"80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 203.0.113.27",
"curl <router_service_IP> <port>",
"apiVersion: v1 kind: Service metadata: name: egress-1 spec: ports: - name: http port: 80 - name: https port: 443 type: ClusterIP selector: name: egress-1",
"apiVersion: v1 kind: Pod metadata: name: egress-1 labels: name: egress-1 annotations: pod.network.openshift.io/assign-macvlan: \"true\" 1 spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE 2 value: <egress-router> - name: EGRESS_GATEWAY 3 value: <egress-gateway> - name: EGRESS_ROUTER_MODE value: http-proxy containers: - name: egress-router-pod image: registry.redhat.io/openshift4/ose-egress-http-proxy env: - name: EGRESS_HTTP_PROXY_DESTINATION 4 value: |-",
"!*.example.com !192.168.1.0/24 192.168.2.1 *",
"apiVersion: v1 kind: Service metadata: name: egress-1 spec: ports: - name: http-proxy port: 8080 1 type: ClusterIP selector: name: egress-1",
"apiVersion: v1 kind: Pod metadata: name: app-1 labels: name: app-1 spec: containers: env: - name: http_proxy value: http://egress-1:8080/ 1 - name: https_proxy value: http://egress-1:8080/",
"apiVersion: v1 kind: Pod metadata: name: egress-1 labels: name: egress-1 annotations: pod.network.openshift.io/assign-macvlan: \"true\" 1 spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE 2 value: <egress-router> - name: EGRESS_GATEWAY 3 value: <egress-gateway> - name: EGRESS_ROUTER_MODE value: dns-proxy containers: - name: egress-router-pod image: registry.redhat.io/openshift4/ose-egress-dns-proxy securityContext: privileged: true env: - name: EGRESS_DNS_PROXY_DESTINATION 4 value: |- - name: EGRESS_DNS_PROXY_DEBUG 5 value: \"1\"",
"80 172.16.12.11 100 example.com",
"8080 192.168.60.252 80 8443 web.example.com 443",
"apiVersion: v1 kind: Service metadata: name: egress-dns-svc spec: ports: type: ClusterIP selector: name: egress-dns-proxy",
"apiVersion: v1 kind: Service metadata: name: egress-dns-svc spec: ports: - name: con1 protocol: TCP port: 80 targetPort: 80 - name: con2 protocol: TCP port: 100 targetPort: 100 type: ClusterIP selector: name: egress-dns-proxy",
"oc create -f egress-router-service.yaml",
"Egress routes for Project \"Test\", version 3 80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 Fallback 203.0.113.27",
"oc delete configmap egress-routes --ignore-not-found",
"oc create configmap egress-routes --from-file=destination=my-egress-destination.txt",
"apiVersion: v1 kind: ConfigMap metadata: name: egress-routes data: destination: | # Egress routes for Project \"Test\", version 3 80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 # Fallback 203.0.113.27",
"env: - name: EGRESS_DESTINATION valueFrom: configMapKeyRef: name: egress-routes key: destination",
"oc annotate netnamespace <namespace> netnamespace.network.openshift.io/multicast-enabled=true",
"oc project <project>",
"cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: mlistener labels: app: multicast-verify spec: containers: - name: mlistener image: registry.access.redhat.com/ubi8 command: [\"/bin/sh\", \"-c\"] args: [\"dnf -y install socat hostname && sleep inf\"] ports: - containerPort: 30102 name: mlistener protocol: UDP EOF",
"cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: msender labels: app: multicast-verify spec: containers: - name: msender image: registry.access.redhat.com/ubi8 command: [\"/bin/sh\", \"-c\"] args: [\"dnf -y install socat && sleep inf\"] EOF",
"POD_IP=USD(oc get pods mlistener -o jsonpath='{.status.podIP}')",
"oc exec mlistener -i -t -- socat UDP4-RECVFROM:30102,ip-add-membership=224.1.0.1:USDPOD_IP,fork EXEC:hostname",
"CIDR=USD(oc get Network.config.openshift.io cluster -o jsonpath='{.status.clusterNetwork[0].cidr}')",
"oc exec msender -i -t -- /bin/bash -c \"echo | socat STDIO UDP4-DATAGRAM:224.1.0.1:30102,range=USDCIDR,ip-multicast-ttl=64\"",
"mlistener",
"oc annotate netnamespace <namespace> \\ 1 netnamespace.network.openshift.io/multicast-enabled-",
"oc adm pod-network join-projects --to=<project1> <project2> <project3>",
"oc get netnamespaces",
"oc adm pod-network isolate-projects <project1> <project2>",
"oc adm pod-network make-projects-global <project1> <project2>",
"oc edit network.operator.openshift.io cluster",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: kubeProxyConfig: iptablesSyncPeriod: 30s proxyArguments: iptables-min-sync-period: [\"30s\"]",
"oc get networks.operator.openshift.io -o yaml",
"apiVersion: v1 items: - apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 defaultNetwork: type: OpenShiftSDN kubeProxyConfig: iptablesSyncPeriod: 30s proxyArguments: iptables-min-sync-period: - 30s serviceNetwork: - 172.30.0.0/16 status: {} kind: List",
"oc get clusteroperator network",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE network 4.1.0-0.9 True False False 1m"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/networking/openshift-sdn-network-plugin |
Chapter 88. MongoDB | Chapter 88. MongoDB Both producer and consumer are supported According to Wikipedia: "NoSQL is a movement promoting a loosely defined class of non-relational data stores that break with a long history of relational databases and ACID guarantees." NoSQL solutions have grown in popularity in the last few years, and major extremely-used sites and services such as Facebook, LinkedIn, Twitter, etc. are known to use them extensively to achieve scalability and agility. Basically, NoSQL solutions differ from traditional RDBMS (Relational Database Management Systems) in that they don't use SQL as their query language and generally don't offer ACID-like transactional behaviour nor relational data. Instead, they are designed around the concept of flexible data structures and schemas (meaning that the traditional concept of a database table with a fixed schema is dropped), extreme scalability on commodity hardware and blazing-fast processing. MongoDB is a very popular NoSQL solution and the camel-mongodb component integrates Camel with MongoDB allowing you to interact with MongoDB collections both as a producer (performing operations on the collection) and as a consumer (consuming documents from a MongoDB collection). MongoDB revolves around the concepts of documents (not as is office documents, but rather hierarchical data defined in JSON/BSON) and collections. This component page will assume you are familiar with them. Otherwise, visit http://www.mongodb.org/ . Note The MongoDB Camel component uses Mongo Java Driver 4.x. 88.1. Dependencies When using mongodb with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-mongodb-starter</artifactId> </dependency> 88.2. URI format 88.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 88.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 88.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allows you to externalize the configuration from your code, giving you more flexible and reusable code. 88.4. Component Options The MongoDB component supports 4 options, which are listed below. Name Description Default Type mongoConnection (common) Autowired Shared client used for connection. All endpoints generated from the component will share this connection client. 
MongoClient bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 88.5. Endpoint Options The MongoDB endpoint is configured using URI syntax: with the following path and query parameters: 88.5.1. Path Parameters (1 parameters) Name Description Default Type connectionBean (common) Required Sets the connection bean reference used to lookup a client for connecting to a database. String 88.5.2. Query Parameters (27 parameters) Name Description Default Type collection (common) Sets the name of the MongoDB collection to bind to this endpoint. String collectionIndex (common) Sets the collection index (JSON FORMAT : \\{ field1 : order1, field2 : order2}). String createCollection (common) Create collection during initialisation if it doesn't exist. Default is true. true boolean database (common) Sets the name of the MongoDB database to target. String hosts (common) Host address of mongodb server in host:port format. It's possible also use more than one address, as comma separated list of hosts: host1:port1,host2:port2. If hosts parameter is specified, provided connectionBean is ignored. String mongoConnection (common) Sets the connection bean used as a client for connecting to a database. MongoClient operation (common) Sets the operation this endpoint will execute against MongoDB. Enum values: findById findOneByQuery findAll findDistinct insert save update remove bulkWrite aggregate getDbStats getColStats count command MongoDbOperation outputType (common) Convert the output of the producer to the selected type : DocumentList Document or MongoIterable. DocumentList or MongoIterable applies to findAll and aggregate. Document applies to all other operations. Enum values: DocumentList Document MongoIterable MongoDbOutputType bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean consumerType (consumer) Consumer type. String exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean cursorRegenerationDelay (advanced) MongoDB tailable cursors will block until new data arrives. If no new data is inserted, after some time the cursor will be automatically freed and closed by the MongoDB server. The client is expected to regenerate the cursor if needed. This value specifies the time to wait before attempting to fetch a new cursor, and if the attempt fails, how long before the attempt is made. Default value is 1000ms. 1000 long dynamicity (advanced) Sets whether this endpoint will attempt to dynamically resolve the target database and collection from the incoming Exchange properties. Can be used to override at runtime the database and collection specified on the otherwise static endpoint URI. It is disabled by default to boost performance. Enabling it will take a minimal performance hit. false boolean readPreference (advanced) Configure how MongoDB clients route read operations to the members of a replica set. Possible values are PRIMARY, PRIMARY_PREFERRED, SECONDARY, SECONDARY_PREFERRED or NEAREST. Enum values: PRIMARY PRIMARY_PREFERRED SECONDARY SECONDARY_PREFERRED NEAREST PRIMARY String writeConcern (advanced) Configure the connection bean with the level of acknowledgment requested from MongoDB for write operations to a standalone mongod, replicaset or cluster. Possible values are ACKNOWLEDGED, W1, W2, W3, UNACKNOWLEDGED, JOURNALED or MAJORITY. Enum values: ACKNOWLEDGED W1 W2 W3 UNACKNOWLEDGED JOURNALED MAJORITY ACKNOWLEDGED String writeResultAsHeader (advanced) In write operations, it determines whether instead of returning WriteResult as the body of the OUT message, we transfer the IN message to the OUT and attach the WriteResult as a header. false boolean streamFilter (changeStream) Filter condition for change streams consumer. String password (security) User password for mongodb connection. String username (security) Username for mongodb connection. String persistentId (tail) One tail tracking collection can host many trackers for several tailable consumers. To keep them separate, each tracker should have its own unique persistentId. String persistentTailTracking (tail) Enable persistent tail tracking, which is a mechanism to keep track of the last consumed message across system restarts. 
The time the system is up, the endpoint will recover the cursor from the point where it last stopped slurping records. false boolean tailTrackCollection (tail) Collection where tail tracking information will be persisted. If not specified, MongoDbTailTrackingConfig#DEFAULT_COLLECTION will be used by default. String tailTrackDb (tail) Indicates what database the tail tracking mechanism will persist to. If not specified, the current database will be picked by default. Dynamicity will not be taken into account even if enabled, i.e. the tail tracking database will not vary past endpoint initialisation. String tailTrackField (tail) Field where the last tracked value will be placed. If not specified, MongoDbTailTrackingConfig#DEFAULT_FIELD will be used by default. String tailTrackIncreasingField (tail) Correlation field in the incoming record which is of increasing nature and will be used to position the tailing cursor every time it is generated. The cursor will be (re)created with a query of type: tailTrackIncreasingField greater than lastValue (possibly recovered from persistent tail tracking). Can be of type Integer, Date, String, etc. NOTE: No support for dot notation at the current time, so the field should be at the top level of the document. String 88.6. Configuration of database in Spring XML The following Spring XML creates a bean defining the connection to a MongoDB instance. Since mongo java driver 3, the WriteConcern and readPreference options are not dynamically modifiable. They are defined in the mongoClient object <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:context="http://www.springframework.org/schema/context" xmlns:mongo="http://www.springframework.org/schema/data/mongo" xsi:schemaLocation="http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd http://www.springframework.org/schema/data/mongo http://www.springframework.org/schema/data/mongo/spring-mongo.xsd http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd"> <mongo:mongo-client id="mongoBean" host="USD{mongo.url}" port="USD{mongo.port}" credentials="USD{mongo.user}:USD{mongo.pass}@USD{mongo.dbname}"> <mongo:client-options write-concern="NORMAL" /> </mongo:mongo-client> </beans> 88.7. Sample route The following route defined in Spring XML executes the operation getDbStats on a collection. Get DB stats for specified collection <route> <from uri="direct:start" /> <!-- using bean 'mongoBean' defined above --> <to uri="mongodb:mongoBean?database=USD{mongodb.database}&collection=USD{mongodb.collection}&operation=getDbStats" /> <to uri="direct:result" /> </route> 88.8. MongoDB operations - producer endpoints 88.8.1. Query operations 88.8.1.1. findById This operation retrieves only one element from the collection whose _id field matches the content of the IN message body. The incoming object can be anything that has an equivalent to a Bson type. See http://bsonspec.org/spec.html and http://www.mongodb.org/display/DOCS/Java+Types . from("direct:findById") .to("mongodb:myDb?database=flights&collection=tickets&operation=findById") .to("mock:resultFindById"); Please, note that the default _id is treated by Mongo as and ObjectId type, so you may need to convert it properly. 
from("direct:findById") .convertBodyTo(ObjectId.class) .to("mongodb:myDb?database=flights&collection=tickets&operation=findById") .to("mock:resultFindById"); Note Supports optional parameters This operation supports projection operators. See Specifying a fields filter (projection) . 88.8.1.2. findOneByQuery Retrieve the first element from a collection matching a MongoDB query selector. If the CamelMongoDbCriteria header is set, then its value is used as the query selector . If the CamelMongoDbCriteria header is null , then the IN message body is used as the query selector. In both cases, the query selector should be of type Bson or convertible to Bson (for instance, a JSON string or HashMap ). See Type conversions for more info. Create query selectors using the Filters provided by the MongoDB Driver. 88.8.1.3. Example without a query selector (returns the first document in a collection) from("direct:findOneByQuery") .to("mongodb:myDb?database=flights&collection=tickets&operation=findOneByQuery") .to("mock:resultFindOneByQuery"); 88.8.1.4. Example with a query selector (returns the first matching document in a collection): from("direct:findOneByQuery") .setHeader(MongoDbConstants.CRITERIA, constant(Filters.eq("name", "Raul Kripalani"))) .to("mongodb:myDb?database=flights&collection=tickets&operation=findOneByQuery") .to("mock:resultFindOneByQuery"); Note Supports optional parameters This operation supports projection operators and sort clauses. See Specifying a fields filter (projection) , Specifying a sort clause. 88.8.1.5. findAll The findAll operation returns all documents matching a query, or none at all, in which case all documents contained in the collection are returned. The query object is extracted CamelMongoDbCriteria header . if the CamelMongoDbCriteria header is null the query object is extracted message body, i.e. it should be of type Bson or convertible to Bson . It can be a JSON String or a Hashmap. See Type conversions for more info. 88.8.1.5.1. Example without a query selector (returns all documents in a collection) from("direct:findAll") .to("mongodb:myDb?database=flights&collection=tickets&operation=findAll") .to("mock:resultFindAll"); 88.8.1.5.2. Example with a query selector (returns all matching documents in a collection) from("direct:findAll") .setHeader(MongoDbConstants.CRITERIA, Filters.eq("name", "Raul Kripalani")) .to("mongodb:myDb?database=flights&collection=tickets&operation=findAll") .to("mock:resultFindAll"); Paging and efficient retrieval is supported via the following headers: Header key Quick constant Description (extracted from MongoDB API doc) Expected type CamelMongoDbNumToSkip MongoDbConstants.NUM_TO_SKIP Discards a given number of elements at the beginning of the cursor. int/Integer CamelMongoDbLimit MongoDbConstants.LIMIT Limits the number of elements returned. int/Integer CamelMongoDbBatchSize MongoDbConstants.BATCH_SIZE Limits the number of elements returned in one batch. A cursor typically fetches a batch of result objects and store them locally. If batchSize is positive, it represents the size of each batch of objects retrieved. It can be adjusted to optimize performance and limit data transfer. If batchSize is negative, it will limit of number objects returned, that fit within the max batch size limit (usually 4MB), and cursor will be closed. For example if batchSize is -10, then the server will return a maximum of 10 documents and as many as can fit in 4MB, then close the cursor. 
Note that this feature is different from limit() in that documents must fit within a maximum size, and it removes the need to send a request to close the cursor server-side. The batch size can be changed even after a cursor is iterated, in which case the setting will apply on the batch retrieval. int/Integer CamelMongoDbAllowDiskUse MongoDbConstants.ALLOW_DISK_USE Sets allowDiskUse MongoDB flag. This is supported since MongoDB Server 4.3.1. Using this header with older MongoDB Server version can cause query to fail. boolean/Boolean 88.8.1.5.3. Example with option outputType=MongoIterable and batch size from("direct:findAll") .setHeader(MongoDbConstants.BATCH_SIZE).constant(10) .setHeader(MongoDbConstants.CRITERIA, constant(Filters.eq("name", "Raul Kripalani"))) .to("mongodb:myDb?database=flights&collection=tickets&operation=findAll&outputType=MongoIterable") .to("mock:resultFindAll"); The findAll operation will also return the following OUT headers to enable you to iterate through result pages if you are using paging: Header key Quick constant Description (extracted from MongoDB API doc) Data type CamelMongoDbResultTotalSize MongoDbConstants.RESULT_TOTAL_SIZE Number of objects matching the query. This does not take limit/skip into consideration. int/Integer CamelMongoDbResultPageSize MongoDbConstants.RESULT_PAGE_SIZE Number of objects matching the query. This does not take limit/skip into consideration. int/Integer Note Supports optional parameters This operation supports projection operators and sort clauses. See Specifying a fields filter (projection) , Specifying a sort clause. 88.8.1.6. count Returns the total number of objects in a collection, returning a Long as the OUT message body. The following example will count the number of records in the "dynamicCollectionName" collection. Notice how dynamicity is enabled, and as a result, the operation will not run against the "notableScientists" collection, but against the "dynamicCollectionName" collection. // from("direct:count").to("mongodb:myDb?database=tickets&collection=flights&operation=count&dynamicity=true"); Long result = template.requestBodyAndHeader("direct:count", "irrelevantBody", MongoDbConstants.COLLECTION, "dynamicCollectionName"); assertTrue("Result is not of type Long", result instanceof Long); You can provide a query The query object is extracted CamelMongoDbCriteria header . if the CamelMongoDbCriteria header is null the query object is extracted message body, i.e. it should be of type Bson or convertible to Bson ., and operation will return the amount of documents matching this criteria. Document query = ... Long count = template.requestBodyAndHeader("direct:count", query, MongoDbConstants.COLLECTION, "dynamicCollectionName"); 88.8.1.7. Specifying a fields filter (projection) Query operations will, by default, return the matching objects in their entirety (with all their fields). If your documents are large and you only require retrieving a subset of their fields, you can specify a field filter in all query operations, simply by setting the relevant Bson (or type convertible to Bson , such as a JSON String, Map, etc.) on the CamelMongoDbFieldsProjection header, constant shortcut: MongoDbConstants.FIELDS_PROJECTION . Here is an example that uses MongoDB's Projections to simplify the creation of Bson. 
It retrieves all fields except _id and boringField : // route: from("direct:findAll").to("mongodb:myDb?database=flights&collection=tickets&operation=findAll") Bson fieldProjection = Projections.exclude("_id", "boringField"); Object result = template.requestBodyAndHeader("direct:findAll", ObjectUtils.NULL, MongoDbConstants.FIELDS_PROJECTION, fieldProjection); 88.8.1.8. Specifying a sort clause There is often a requirement to fetch the min/max record from a collection based on sorting by a particular field. The following example uses MongoDB's Sorts to simplify the creation of Bson and sorts by _id in descending order: // route: from("direct:findAll").to("mongodb:myDb?database=flights&collection=tickets&operation=findAll") Bson sorts = Sorts.descending("_id"); Object result = template.requestBodyAndHeader("direct:findAll", ObjectUtils.NULL, MongoDbConstants.SORT_BY, sorts); In a Camel route the SORT_BY header can be used with the findOneByQuery operation to achieve the same result. If the FIELDS_PROJECTION header is also specified, the operation will return a single field/value pair that can be passed directly to another component (for example, a parameterized MyBatis SELECT query). This example demonstrates fetching the temporally newest document from a collection and reducing the result to a single field, based on the documentTimestamp field: .from("direct:someTriggeringEvent") .setHeader(MongoDbConstants.SORT_BY).constant(Sorts.descending("documentTimestamp")) .setHeader(MongoDbConstants.FIELDS_PROJECTION).constant(Projections.include("documentTimestamp")) .setBody().constant("{}") .to("mongodb:myDb?database=local&collection=myDemoCollection&operation=findOneByQuery") .to("direct:aMyBatisParameterizedSelect"); 88.8.2. Create/update operations 88.8.2.1. insert Inserts a new object into the MongoDB collection, taken from the IN message body. Type conversion is attempted to turn it into Document or a List . Two modes are supported: single insert and multiple insert. For multiple insert, the endpoint will expect a List, Array or Collection of objects of any type, as long as they are - or can be converted to - Document . Example: from("direct:insert") .to("mongodb:myDb?database=flights&collection=tickets&operation=insert"); The operation will return a WriteResult, and depending on the WriteConcern or the value of the invokeGetLastError option, getLastError() would have been called already or not. If you want to access the ultimate result of the write operation, you need to retrieve the CommandResult by calling getLastError() or getCachedLastError() on the WriteResult . Then you can verify the result by calling CommandResult.ok() , CommandResult.getErrorMessage() and/or CommandResult.getException() . Note that the new object's _id must be unique in the collection. If you don't specify the value, MongoDB will automatically generate one for you. But if you do specify it and it is not unique, the insert operation will fail (and for Camel to notice, you will need to enable invokeGetLastError or set a WriteConcern that waits for the write result).
This is not a limitation of the component, but it is how things work in MongoDB for higher throughput. If you are using a custom _id , you are expected to ensure at the application level that is unique (and this is a good practice too). OID(s) of the inserted record(s) is stored in the message header under CamelMongoOid key ( MongoDbConstants.OID constant). The value stored is org.bson.types.ObjectId for single insert or java.util.List<org.bson.types.ObjectId> if multiple records have been inserted. In MongoDB Java Driver 3.x the insertOne and insertMany operation return void. The Camel insert operation return the Document or List of Documents inserted. Note that each Documents are Updated by a new OID if need. 88.8.2.2. save The save operation is equivalent to an upsert (UPdate, inSERT) operation, where the record will be updated, and if it doesn't exist, it will be inserted, all in one atomic operation. MongoDB will perform the matching based on the _id field. Beware that in case of an update, the object is replaced entirely and the usage of MongoDB's USDmodifiers is not permitted. Therefore, if you want to manipulate the object if it already exists, you have two options: perform a query to retrieve the entire object first along with all its fields (may not be efficient), alter it inside Camel and then save it. use the update operation with USDmodifiers , which will execute the update at the server-side instead. You can enable the upsert flag, in which case if an insert is required, MongoDB will apply the USDmodifiers to the filter query object and insert the result. If the document to be saved does not contain the _id attribute, the operation will be an insert, and the new _id created will be placed in the CamelMongoOid header. For example: from("direct:insert") .to("mongodb:myDb?database=flights&collection=tickets&operation=save"); // route: from("direct:insert").to("mongodb:myDb?database=flights&collection=tickets&operation=save"); org.bson.Document docForSave = new org.bson.Document(); docForSave.put("key", "value"); Object result = template.requestBody("direct:insert", docForSave); 88.8.2.3. update Update one or multiple records on the collection. Requires a filter query and a update rules. You can define the filter using MongoDBConstants.CRITERIA header as Bson and define the update rules as Bson in Body. Note Update after enrich While defining the filter by using MongoDBConstants.CRITERIA header as Bson to query mongodb before you do update, you should notice you need to remove it from the resulting camel exchange during aggregation if you use enrich pattern with a aggregation strategy and then apply mongodb update. If you don't remove this header during aggregation and/or redefine MongoDBConstants.CRITERIA header before sending camel exchange to mongodb producer endpoint, you may end up with invalid camel exchange payload while updating mongodb. The second way Require a List<Bson> as the IN message body containing exactly 2 elements: Element 1 (index 0) ⇒ filter query ⇒ determines what objects will be affected, same as a typical query object Element 2 (index 1) ⇒ update rules ⇒ how matched objects will be updated. All modifier operations from MongoDB are supported. Note Multiupdates By default, MongoDB will only update 1 object even if multiple objects match the filter query. To instruct MongoDB to update all matching records, set the CamelMongoDbMultiUpdate IN message header to true . 
A header with key CamelMongoDbRecordsAffected will be returned ( MongoDbConstants.RECORDS_AFFECTED constant) with the number of records updated (copied from WriteResult.getN() ). Supports the following IN message headers: Header key Quick constant Description (extracted from MongoDB API doc) Expected type CamelMongoDbMultiUpdate MongoDbConstants.MULTIUPDATE If the update should be applied to all objects matching. See http://www.mongodb.org/display/DOCS/Atomic+Operations boolean/Boolean CamelMongoDbUpsert MongoDbConstants.UPSERT If the database should create the element if it does not exist boolean/Boolean For example, the following will update all records whose filterField field equals true by setting the value of the "scientist" field to "Darwin": // route: from("direct:update").to("mongodb:myDb?database=science&collection=notableScientists&operation=update"); List<Bson> body = new ArrayList<>(); Bson filterField = Filters.eq("filterField", true); body.add(filterField); BsonDocument updateObj = new BsonDocument().append("USDset", new BsonDocument("scientist", new BsonString("Darwin"))); body.add(updateObj); Object result = template.requestBodyAndHeader("direct:update", body, MongoDbConstants.MULTIUPDATE, true); // route: from("direct:update").to("mongodb:myDb?database=science&collection=notableScientists&operation=update"); Map<String, Object> headers = new HashMap<>(2); headers.put(MongoDbConstants.MULTIUPDATE, true); headers.put(MongoDbConstants.CRITERIA, Filters.eq("filterField", true)); Bson updateObj = Updates.set("scientist", "Darwin"); Object result = template.requestBodyAndHeaders("direct:update", updateObj, headers); // route: from("direct:update").to("mongodb:myDb?database=science&collection=notableScientists&operation=update"); String updateObj = "[{\"filterField\": true}, {\"USDset\": {\"scientist\": \"Darwin\"}}]"; Object result = template.requestBodyAndHeader("direct:update", updateObj, MongoDbConstants.MULTIUPDATE, true); 88.8.3. Delete operations 88.8.3.1. remove Remove matching records from the collection. The IN message body will act as the removal filter query, and is expected to be of type Bson or a type convertible to it. The following example will remove all objects whose field 'conditionField' equals true, in the science database, notableScientists collection: // route: from("direct:remove").to("mongodb:myDb?database=science&collection=notableScientists&operation=remove"); Bson conditionField = Filters.eq("conditionField", true); Object result = template.requestBody("direct:remove", conditionField); A header with key CamelMongoDbRecordsAffected is returned ( MongoDbConstants.RECORDS_AFFECTED constant) with type int , containing the number of records deleted (copied from WriteResult.getN() ). 88.8.4. Bulk Write Operations 88.8.4.1. bulkWrite Performs write operations in bulk with controls for order of execution. Requires a List<WriteModel<Document>> as the IN message body containing commands for insert, update, and delete operations.
The following example will insert a new scientist "Pierre Curie", update the record with id "5" by setting the value of the "scientist" field to "Marie Curie", and delete the record with id "3" : // route: from("direct:bulkWrite").to("mongodb:myDb?database=science&collection=notableScientists&operation=bulkWrite"); List<WriteModel<Document>> bulkOperations = Arrays.asList( new InsertOneModel<>(new Document("scientist", "Pierre Curie")), new UpdateOneModel<>(new Document("_id", "5"), new Document("USDset", new Document("scientist", "Marie Curie"))), new DeleteOneModel<>(new Document("_id", "3"))); BulkWriteResult result = template.requestBody("direct:bulkWrite", bulkOperations, BulkWriteResult.class); By default, operations are executed in order and interrupted on the first write error without processing any remaining write operations in the list. To instruct MongoDB to continue to process remaining write operations in the list, set the CamelMongoDbBulkOrdered IN message header to false . Unordered operations are executed in parallel and their order of execution is not guaranteed. Header key Quick constant Description (extracted from MongoDB API doc) Expected type CamelMongoDbBulkOrdered MongoDbConstants.BULK_ORDERED Perform an ordered or unordered operation execution. Defaults to true. boolean/Boolean 88.8.5. Other operations 88.8.5.1. aggregate Perform an aggregation with the given pipeline contained in the body. Aggregations could be long and heavy operations. Use with care. // route: from("direct:aggregate").to("mongodb:myDb?database=science&collection=notableScientists&operation=aggregate"); List<Bson> aggregate = Arrays.asList(match(or(eq("scientist", "Darwin"), eq("scientist", "Einstein"))), group("USDscientist", sum("count", 1))); from("direct:aggregate") .setBody().constant(aggregate) .to("mongodb:myDb?database=science&collection=notableScientists&operation=aggregate") .to("mock:resultAggregate"); Supports the following IN message headers: Header key Quick constant Description (extracted from MongoDB API doc) Expected type CamelMongoDbBatchSize MongoDbConstants.BATCH_SIZE Sets the number of documents to return per batch. int/Integer CamelMongoDbAllowDiskUse MongoDbConstants.ALLOW_DISK_USE Enable aggregation pipeline stages to write data to temporary files. boolean/Boolean By default a List of all results is returned. This can be heavy on memory depending on the size of the results. A safer alternative is to set your outputType=MongoIterable. The Processor will see an iterable in the message body allowing it to step through the results one by one. Thus setting a batch size and returning an iterable allows for efficient retrieval and processing of the result. An example would look like: List<Bson> aggregate = Arrays.asList(match(or(eq("scientist", "Darwin"), eq("scientist", "Einstein"))), group("USDscientist", sum("count", 1))); from("direct:aggregate") .setHeader(MongoDbConstants.BATCH_SIZE).constant(10) .setBody().constant(aggregate) .to("mongodb:myDb?database=science&collection=notableScientists&operation=aggregate&outputType=MongoIterable") .split(body()) .streaming() .to("mock:resultAggregate"); Note that calling .split(body()) is enough to send the entries down the route one-by-one, however it would still load all the entries into memory first. Calling .streaming() is thus required to load data into memory by batches. 88.8.5.2. getDbStats Equivalent of running the db.stats() command in the MongoDB shell, which displays useful statistic figures about the database.
For example: Usage example: // from("direct:getDbStats").to("mongodb:myDb?database=flights&collection=tickets&operation=getDbStats"); Object result = template.requestBody("direct:getDbStats", "irrelevantBody"); assertTrue("Result is not of type Document", result instanceof Document); The operation will return a data structure similar to the one displayed in the shell, in the form of a Document in the OUT message body. 88.8.5.3. getColStats Equivalent of running the db.collection.stats() command in the MongoDB shell, which displays useful statistic figures about the collection. For example: Usage example: // from("direct:getColStats").to("mongodb:myDb?database=flights&collection=tickets&operation=getColStats"); Object result = template.requestBody("direct:getColStats", "irrelevantBody"); assertTrue("Result is not of type Document", result instanceof Document); The operation will return a data structure similar to the one displayed in the shell, in the form of a Document in the OUT message body. 88.8.5.4. command Run the body as a command on database. Useful for admin operation as getting host information, replication or sharding status. Collection parameter is not use for this operation. // route: from("command").to("mongodb:myDb?database=science&operation=command"); DBObject commandBody = new BasicDBObject("hostInfo", "1"); Object result = template.requestBody("direct:command", commandBody); 88.8.6. Dynamic operations An Exchange can override the endpoint's fixed operation by setting the CamelMongoDbOperation header, defined by the MongoDbConstants.OPERATION_HEADER constant. The values supported are determined by the MongoDbOperation enumeration and match the accepted values for the operation parameter on the endpoint URI. For example: // from("direct:insert").to("mongodb:myDb?database=flights&collection=tickets&operation=insert"); Object result = template.requestBodyAndHeader("direct:insert", "irrelevantBody", MongoDbConstants.OPERATION_HEADER, "count"); assertTrue("Result is not of type Long", result instanceof Long); 88.9. Consumers There are several types of consumers: Tailable Cursor Consumer Change Streams Consumer 88.9.1. Tailable Cursor Consumer MongoDB offers a mechanism to instantaneously consume ongoing data from a collection, by keeping the cursor open just like the tail -f command of *nix systems. This mechanism is significantly more efficient than a scheduled poll, due to the fact that the server pushes new data to the client as it becomes available, rather than making the client ping back at scheduled intervals to fetch new data. It also reduces otherwise redundant network traffic. There is only one requisite to use tailable cursors: the collection must be a "capped collection", meaning that it will only hold N objects, and when the limit is reached, MongoDB flushes old objects in the same order they were originally inserted. For more information, please refer to http://www.mongodb.org/display/DOCS/Tailable+Cursors . The Camel MongoDB component implements a tailable cursor consumer, making this feature available for you to use in your Camel routes. As new objects are inserted, MongoDB will push them as Document in natural order to your tailable cursor consumer, who will transform them to an Exchange and will trigger your route logic. 88.10. How the tailable cursor consumer works To turn a cursor into a tailable cursor, a few special flags are to be signalled to MongoDB when first generating the cursor. 
Once created, the cursor will then stay open and will block upon calling the MongoCursor.() method until new data arrives. However, the MongoDB server reserves itself the right to kill your cursor if new data doesn't appear after an indeterminate period. If you are interested to continue consuming new data, you have to regenerate the cursor. And to do so, you will have to remember the position where you left off or else you will start consuming from the top again. The Camel MongoDB tailable cursor consumer takes care of all these tasks for you. You will just need to provide the key to some field in your data of increasing nature, which will act as a marker to position your cursor every time it is regenerated, e.g. a timestamp, a sequential ID, etc. It can be of any datatype supported by MongoDB. Date, Strings and Integers are found to work well. We call this mechanism "tail tracking" in the context of this component. The consumer will remember the last value of this field and whenever the cursor is to be regenerated, it will run the query with a filter like: increasingField > lastValue , so that only unread data is consumed. Setting the increasing field: Set the key of the increasing field on the endpoint URI tailTrackingIncreasingField option. In Camel 2.10, it must be a top-level field in your data, as nested navigation for this field is not yet supported. That is, the "timestamp" field is okay, but "nested.timestamp" will not work. Please open a ticket in the Camel JIRA if you do require support for nested increasing fields. Cursor regeneration delay: One thing to note is that if new data is not already available upon initialisation, MongoDB will kill the cursor instantly. Since we don't want to overwhelm the server in this case, a cursorRegenerationDelay option has been introduced (with a default value of 1000ms.), which you can modify to suit your needs. An example: from("mongodb:myDb?database=flights&collection=cancellations&tailTrackIncreasingField=departureTime") .id("tailableCursorConsumer1") .autoStartup(false) .to("mock:test"); The above route will consume from the "flights.cancellations" capped collection, using "departureTime" as the increasing field, with a default regeneration cursor delay of 1000ms. 88.11. Persistent tail tracking Standard tail tracking is volatile and the last value is only kept in memory. However, in practice you will need to restart your Camel container every now and then, but your last value would then be lost and your tailable cursor consumer would start consuming from the top again, very likely sending duplicate records into your route. To overcome this situation, you can enable the persistent tail tracking feature to keep track of the last consumed increasing value in a special collection inside your MongoDB database too. When the consumer initialises again, it will restore the last tracked value and continue as if nothing happened. The last read value is persisted on two occasions: every time the cursor is regenerated and when the consumer shuts down. We may consider persisting at regular intervals too in the future (flush every 5 seconds) for added robustness if the demand is there. To request this feature, please open a ticket in the Camel JIRA. 88.12. 
88.12. Enabling persistent tail tracking To enable this function, set at least the following options on the endpoint URI: the persistentTailTracking option to true, and the persistentId option to a unique identifier for this consumer, so that the same collection can be reused across many consumers. Additionally, you can set the tailTrackDb, tailTrackCollection and tailTrackField options to customise where the runtime information will be stored. Refer to the endpoint options table at the top of this page for descriptions of each option. For example, the following route will consume from the "flights.cancellations" capped collection, using "departureTime" as the increasing field, with a default cursor regeneration delay of 1000ms, with persistent tail tracking turned on, and persisting under the "cancellationsTracker" id on the "flights.camelTailTracking" collection, storing the last processed value under the "lastTrackingValue" field (camelTailTracking and lastTrackingValue are defaults). from("mongodb:myDb?database=flights&collection=cancellations&tailTrackIncreasingField=departureTime&persistentTailTracking=true" + "&persistentId=cancellationsTracker") .id("tailableCursorConsumer2") .autoStartup(false) .to("mock:test"); Below is another example identical to the one above, but where the persistent tail tracking runtime information will be stored under the "trackers.camelTrackers" collection, in the "lastProcessedDepartureTime" field: from("mongodb:myDb?database=flights&collection=cancellations&tailTrackIncreasingField=departureTime&persistentTailTracking=true" + "&persistentId=cancellationsTracker&tailTrackDb=trackers&tailTrackCollection=camelTrackers" + "&tailTrackField=lastProcessedDepartureTime") .id("tailableCursorConsumer3") .autoStartup(false) .to("mock:test");
88.12.1. Change Streams Consumer Change Streams allow applications to access real-time data changes without the complexity and risk of tailing the MongoDB oplog. Applications can use change streams to subscribe to all data changes on a collection and immediately react to them. Because change streams use the aggregation framework, applications can also filter for specific changes or transform the notifications at will. The exchange body will contain the full document of any change. To configure the Change Streams Consumer, you need to specify consumerType, database, collection and, optionally, the JSON property streamFilter to filter events. That JSON property is a standard MongoDB USDmatch aggregation. It can easily be specified using the XML DSL configuration: <route id="filterConsumer"> <from uri="mongodb:myDb?consumerType=changeStreams&database=flights&collection=tickets&streamFilter={ 'USDmatch':{'USDor':[{'fullDocument.stringValue': 'specificValue'}]} }"/> <to uri="mock:test"/> </route> Java configuration: from("mongodb:myDb?consumerType=changeStreams&database=flights&collection=tickets&streamFilter={ 'USDmatch':{'USDor':[{'fullDocument.stringValue': 'specificValue'}]} }") .to("mock:test"); Note You can externalize the streamFilter value into a property placeholder, which allows the endpoint URI parameters to be cleaner and easier to read. The changeStreams consumer type will also return the following OUT headers: Header key Quick constant Description (extracted from MongoDB API doc) Data type CamelMongoDbStreamOperationType MongoDbConstants.STREAM_OPERATION_TYPE The type of operation that occurred. Can be any of the following values: insert, delete, replace, update, drop, rename, dropDatabase, invalidate.
String _id MongoDbConstants.MONGO_ID A document that contains the _id of the document created or modified by the insert, replace, delete, update operations (i.e. CRUD operations). For sharded collections, also displays the full shard key for the document. The _id field is not repeated if it is already a part of the shard key. ObjectId
88.13. Type conversions The MongoDbBasicConverters type converter included with the camel-mongodb component provides the following conversions: Name From type To type How? fromMapToDocument Map Document constructs a new Document via the new Document(Map m) constructor. fromDocumentToMap Document Map Document already implements Map. fromStringToDocument String Document uses org.bson.Document.parse(String s). fromStringToObjectId String ObjectId constructs a new ObjectId via the new ObjectId(s) constructor. fromFileToDocument File Document uses fromInputStreamToDocument under the hood. fromInputStreamToDocument InputStream Document converts the InputStream bytes to a Document. fromStringToList String List<Bson> uses org.bson.codecs.configuration.CodecRegistries to convert to BsonArray, then to List<Bson>. This type converter is auto-discovered, so you don't need to configure anything manually.
88.14. Spring Boot Auto-Configuration The component supports 5 options, which are listed below. Name Description Default Type camel.component.mongodb.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatically configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.mongodb.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored. false Boolean camel.component.mongodb.enabled Whether to enable auto configuration of the mongodb component. This is enabled by default. Boolean camel.component.mongodb.lazy-start-producer Whether the producer should be started lazily (on the first message). By starting lazily, you can allow CamelContext and routes to start up in situations where a producer may otherwise fail during starting and cause the route to fail to start. By deferring this startup to be lazy, the startup failure can be handled during the routing of messages via Camel's routing error handlers. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time. false Boolean camel.component.mongodb.mongo-connection Shared client used for connection. All endpoints generated from the component will share this connection client. The option is a com.mongodb.client.MongoClient type. MongoClient
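For example, in a Spring Boot application the mongo-connection option can be satisfied by exposing a single MongoClient bean in the registry, which the component can then pick up when autowiring is enabled. The following minimal sketch illustrates one way to do this; the class name, bean method name and connection string are assumptions for the example and should be adjusted to your environment:

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MongoClientConfiguration {

    // A single shared client; all mongodb: endpoints created from the component can reuse it.
    // The connection string is a placeholder for illustration only.
    @Bean
    public MongoClient mongoClient() {
        return MongoClients.create("mongodb://localhost:27017");
    }
}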
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-mongodb-starter</artifactId> </dependency>",
"mongodb:connectionBean?database=databaseName&collection=collectionName&operation=operationName[&moreOptions...]",
"mongodb:connectionBean",
"<beans xmlns=\"http://www.springframework.org/schema/beans\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:context=\"http://www.springframework.org/schema/context\" xmlns:mongo=\"http://www.springframework.org/schema/data/mongo\" xsi:schemaLocation=\"http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd http://www.springframework.org/schema/data/mongo http://www.springframework.org/schema/data/mongo/spring-mongo.xsd http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd\"> <mongo:mongo-client id=\"mongoBean\" host=\"USD{mongo.url}\" port=\"USD{mongo.port}\" credentials=\"USD{mongo.user}:USD{mongo.pass}@USD{mongo.dbname}\"> <mongo:client-options write-concern=\"NORMAL\" /> </mongo:mongo-client> </beans>",
"<route> <from uri=\"direct:start\" /> <!-- using bean 'mongoBean' defined above --> <to uri=\"mongodb:mongoBean?database=USD{mongodb.database}&collection=USD{mongodb.collection}&operation=getDbStats\" /> <to uri=\"direct:result\" /> </route>",
"from(\"direct:findById\") .to(\"mongodb:myDb?database=flights&collection=tickets&operation=findById\") .to(\"mock:resultFindById\");",
"from(\"direct:findById\") .convertBodyTo(ObjectId.class) .to(\"mongodb:myDb?database=flights&collection=tickets&operation=findById\") .to(\"mock:resultFindById\");",
"from(\"direct:findOneByQuery\") .to(\"mongodb:myDb?database=flights&collection=tickets&operation=findOneByQuery\") .to(\"mock:resultFindOneByQuery\");",
"from(\"direct:findOneByQuery\") .setHeader(MongoDbConstants.CRITERIA, constant(Filters.eq(\"name\", \"Raul Kripalani\"))) .to(\"mongodb:myDb?database=flights&collection=tickets&operation=findOneByQuery\") .to(\"mock:resultFindOneByQuery\");",
"from(\"direct:findAll\") .to(\"mongodb:myDb?database=flights&collection=tickets&operation=findAll\") .to(\"mock:resultFindAll\");",
"from(\"direct:findAll\") .setHeader(MongoDbConstants.CRITERIA, Filters.eq(\"name\", \"Raul Kripalani\")) .to(\"mongodb:myDb?database=flights&collection=tickets&operation=findAll\") .to(\"mock:resultFindAll\");",
"from(\"direct:findAll\") .setHeader(MongoDbConstants.BATCH_SIZE).constant(10) .setHeader(MongoDbConstants.CRITERIA, constant(Filters.eq(\"name\", \"Raul Kripalani\"))) .to(\"mongodb:myDb?database=flights&collection=tickets&operation=findAll&outputType=MongoIterable\") .to(\"mock:resultFindAll\");",
"// from(\"direct:count\").to(\"mongodb:myDb?database=tickets&collection=flights&operation=count&dynamicity=true\"); Long result = template.requestBodyAndHeader(\"direct:count\", \"irrelevantBody\", MongoDbConstants.COLLECTION, \"dynamicCollectionName\"); assertTrue(\"Result is not of type Long\", result instanceof Long);",
"Document query = Long count = template.requestBodyAndHeader(\"direct:count\", query, MongoDbConstants.COLLECTION, \"dynamicCollectionName\");",
"// route: from(\"direct:findAll\").to(\"mongodb:myDb?database=flights&collection=tickets&operation=findAll\") Bson fieldProjection = Projection.exclude(\"_id\", \"boringField\"); Object result = template.requestBodyAndHeader(\"direct:findAll\", ObjectUtils.NULL, MongoDbConstants.FIELDS_PROJECTION, fieldProjection);",
"// route: from(\"direct:findAll\").to(\"mongodb:myDb?database=flights&collection=tickets&operation=findAll\") Bson fieldProjection = Projection.exclude(\"_id\", \"boringField\"); Object result = template.requestBodyAndHeader(\"direct:findAll\", ObjectUtils.NULL, MongoDbConstants.FIELDS_PROJECTION, fieldProjection);",
"// route: from(\"direct:findAll\").to(\"mongodb:myDb?database=flights&collection=tickets&operation=findAll\") Bson sorts = Sorts.descending(\"_id\"); Object result = template.requestBodyAndHeader(\"direct:findAll\", ObjectUtils.NULL, MongoDbConstants.SORT_BY, sorts);",
".from(\"direct:someTriggeringEvent\") .setHeader(MongoDbConstants.SORT_BY).constant(Sorts.descending(\"documentTimestamp\")) .setHeader(MongoDbConstants.FIELDS_PROJECTION).constant(Projection.include(\"documentTimestamp\")) .setBody().constant(\"{}\") .to(\"mongodb:myDb?database=local&collection=myDemoCollection&operation=findOneByQuery\") .to(\"direct:aMyBatisParameterizedSelect\");",
"from(\"direct:insert\") .to(\"mongodb:myDb?database=flights&collection=tickets&operation=insert\");",
"from(\"direct:insert\") .to(\"mongodb:myDb?database=flights&collection=tickets&operation=save\");",
"// route: from(\"direct:insert\").to(\"mongodb:myDb?database=flights&collection=tickets&operation=save\"); org.bson.Document docForSave = new org.bson.Document(); docForSave.put(\"key\", \"value\"); Object result = template.requestBody(\"direct:insert\", docForSave);",
"// route: from(\"direct:update\").to(\"mongodb:myDb?database=science&collection=notableScientists&operation=update\"); List<Bson> body = new ArrayList<>(); Bson filterField = Filters.eq(\"filterField\", true); body.add(filterField); BsonDocument updateObj = new BsonDocument().append(\"USDset\", new BsonDocument(\"scientist\", new BsonString(\"Darwin\"))); body.add(updateObj); Object result = template.requestBodyAndHeader(\"direct:update\", body, MongoDbConstants.MULTIUPDATE, true);",
"// route: from(\"direct:update\").to(\"mongodb:myDb?database=science&collection=notableScientists&operation=update\"); Maps<String, Object> headers = new HashMap<>(2); headers.add(MongoDbConstants.MULTIUPDATE, true); headers.add(MongoDbConstants.FIELDS_FILTER, Filters.eq(\"filterField\", true)); String updateObj = Updates.set(\"scientist\", \"Darwin\");; Object result = template.requestBodyAndHeaders(\"direct:update\", updateObj, headers);",
"// route: from(\"direct:update\").to(\"mongodb:myDb?database=science&collection=notableScientists&operation=update\"); String updateObj = \"[{\\\"filterField\\\": true}, {\\\"USDset\\\", {\\\"scientist\\\", \\\"Darwin\\\"}}]\"; Object result = template.requestBodyAndHeader(\"direct:update\", updateObj, MongoDbConstants.MULTIUPDATE, true);",
"// route: from(\"direct:remove\").to(\"mongodb:myDb?database=science&collection=notableScientists&operation=remove\"); Bson conditionField = Filters.eq(\"conditionField\", true); Object result = template.requestBody(\"direct:remove\", conditionField);",
"// route: from(\"direct:bulkWrite\").to(\"mongodb:myDb?database=science&collection=notableScientists&operation=bulkWrite\"); List<WriteModel<Document>> bulkOperations = Arrays.asList( new InsertOneModel<>(new Document(\"scientist\", \"Pierre Curie\")), new UpdateOneModel<>(new Document(\"_id\", \"5\"), new Document(\"USDset\", new Document(\"scientist\", \"Marie Curie\"))), new DeleteOneModel<>(new Document(\"_id\", \"3\"))); BulkWriteResult result = template.requestBody(\"direct:bulkWrite\", bulkOperations, BulkWriteResult.class);",
"// route: from(\"direct:aggregate\").to(\"mongodb:myDb?database=science&collection=notableScientists&operation=aggregate\"); List<Bson> aggregate = Arrays.asList(match(or(eq(\"scientist\", \"Darwin\"), eq(\"scientist\", group(\"USDscientist\", sum(\"count\", 1))); from(\"direct:aggregate\") .setBody().constant(aggregate) .to(\"mongodb:myDb?database=science&collection=notableScientists&operation=aggregate\") .to(\"mock:resultAggregate\");",
"List<Bson> aggregate = Arrays.asList(match(or(eq(\"scientist\", \"Darwin\"), eq(\"scientist\", group(\"USDscientist\", sum(\"count\", 1))); from(\"direct:aggregate\") .setHeader(MongoDbConstants.BATCH_SIZE).constant(10) .setBody().constant(aggregate) .to(\"mongodb:myDb?database=science&collection=notableScientists&operation=aggregate&outputType=MongoIterable\") .split(body()) .streaming() .to(\"mock:resultAggregate\");",
"> db.stats(); { \"db\" : \"test\", \"collections\" : 7, \"objects\" : 719, \"avgObjSize\" : 59.73296244784423, \"dataSize\" : 42948, \"storageSize\" : 1000058880, \"numExtents\" : 9, \"indexes\" : 4, \"indexSize\" : 32704, \"fileSize\" : 1275068416, \"nsSizeMB\" : 16, \"ok\" : 1 }",
"// from(\"direct:getDbStats\").to(\"mongodb:myDb?database=flights&collection=tickets&operation=getDbStats\"); Object result = template.requestBody(\"direct:getDbStats\", \"irrelevantBody\"); assertTrue(\"Result is not of type Document\", result instanceof Document);",
"> db.camelTest.stats(); { \"ns\" : \"test.camelTest\", \"count\" : 100, \"size\" : 5792, \"avgObjSize\" : 57.92, \"storageSize\" : 20480, \"numExtents\" : 2, \"nindexes\" : 1, \"lastExtentSize\" : 16384, \"paddingFactor\" : 1, \"flags\" : 1, \"totalIndexSize\" : 8176, \"indexSizes\" : { \"_id_\" : 8176 }, \"ok\" : 1 }",
"// from(\"direct:getColStats\").to(\"mongodb:myDb?database=flights&collection=tickets&operation=getColStats\"); Object result = template.requestBody(\"direct:getColStats\", \"irrelevantBody\"); assertTrue(\"Result is not of type Document\", result instanceof Document);",
"// route: from(\"command\").to(\"mongodb:myDb?database=science&operation=command\"); DBObject commandBody = new BasicDBObject(\"hostInfo\", \"1\"); Object result = template.requestBody(\"direct:command\", commandBody);",
"// from(\"direct:insert\").to(\"mongodb:myDb?database=flights&collection=tickets&operation=insert\"); Object result = template.requestBodyAndHeader(\"direct:insert\", \"irrelevantBody\", MongoDbConstants.OPERATION_HEADER, \"count\"); assertTrue(\"Result is not of type Long\", result instanceof Long);",
"from(\"mongodb:myDb?database=flights&collection=cancellations&tailTrackIncreasingField=departureTime\") .id(\"tailableCursorConsumer1\") .autoStartup(false) .to(\"mock:test\");",
"from(\"mongodb:myDb?database=flights&collection=cancellations&tailTrackIncreasingField=departureTime&persistentTailTracking=true\" + \"&persistentId=cancellationsTracker\") .id(\"tailableCursorConsumer2\") .autoStartup(false) .to(\"mock:test\");",
"from(\"mongodb:myDb?database=flights&collection=cancellations&tailTrackIncreasingField=departureTime&persistentTailTracking=true\" + \"&persistentId=cancellationsTracker&tailTrackDb=trackers&tailTrackCollection=camelTrackers\" + \"&tailTrackField=lastProcessedDepartureTime\") .id(\"tailableCursorConsumer3\") .autoStartup(false) .to(\"mock:test\");",
"<route id=\"filterConsumer\"> <from uri=\"mongodb:myDb?consumerType=changeStreams&database=flights&collection=tickets&streamFilter={ 'USDmatch':{'USDor':[{'fullDocument.stringValue': 'specificValue'}]} }\"/> <to uri=\"mock:test\"/> </route>",
"from(\"mongodb:myDb?consumerType=changeStreams&database=flights&collection=tickets&streamFilter={ 'USDmatch':{'USDor':[{'fullDocument.stringValue': 'specificValue'}]} }\") .to(\"mock:test\");"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-mongodb-component-starter |
Chapter 10. baremetal | Chapter 10. baremetal This chapter describes the commands under the baremetal command. 10.1. baremetal allocation create Create a new baremetal allocation. Usage: Table 10.1. Command arguments Value Summary -h, --help Show this help message and exit --resource-class RESOURCE_CLASS Resource class to request. --trait TRAITS A trait to request. can be specified multiple times. --candidate-node CANDIDATE_NODES A candidate node for this allocation. can be specified multiple times. If at least one is specified, only the provided candidate nodes are considered for the allocation. --name NAME Unique name of the allocation. --uuid UUID Uuid of the allocation. --owner OWNER Owner of the allocation. --extra <key=value> Record arbitrary key/value metadata. can be specified multiple times. --wait [<time-out>] Wait for the new allocation to become active. an error is returned if allocation fails and --wait is used. Optionally takes a timeout value (in seconds). The default value is 0, meaning it will wait indefinitely. --node NODE Backfill this allocation from the provided node that has already been deployed. Bypasses the normal allocation process. Table 10.2. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 10.3. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 10.4. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 10.5. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 10.2. baremetal allocation delete Unregister baremetal allocation(s). Usage: Table 10.6. Positional arguments Value Summary <allocation> Allocations(s) to delete (name or uuid). Table 10.7. Command arguments Value Summary -h, --help Show this help message and exit 10.3. baremetal allocation list List baremetal allocations. Usage: Table 10.8. Command arguments Value Summary -h, --help Show this help message and exit --limit <limit> Maximum number of allocations to return per request, 0 for no limit. Default is the maximum number used by the Baremetal API Service. --marker <allocation> Allocation uuid (for example, of the last allocation in the list from a request). Returns the list of allocations after this UUID. --sort <key>[:<direction>] Sort output by specified allocation fields and directions (asc or desc) (default: asc). Multiple fields and directions can be specified, separated by comma. --node <node> Only list allocations of this node (name or uuid). --resource-class <resource_class> Only list allocations with this resource class. --state <state> Only list allocations in this state. --owner <owner> Only list allocations with this owner. --long Show detailed information about the allocations. --fields <field> [<field> ... ] One or more allocation fields. only these fields will be fetched from the server. Can not be used when -- long is specified. Table 10.9. 
Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 10.10. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 10.11. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 10.12. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 10.4. baremetal allocation set Set baremetal allocation properties. Usage: Table 10.13. Positional arguments Value Summary <allocation> Name or uuid of the allocation Table 10.14. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Set the name of the allocation --extra <key=value> Extra property to set on this allocation (repeat option to set multiple extra properties) 10.5. baremetal allocation show Show baremetal allocation details. Usage: Table 10.15. Positional arguments Value Summary <id> Uuid or name of the allocation Table 10.16. Command arguments Value Summary -h, --help Show this help message and exit --fields <field> [<field> ... ] One or more allocation fields. only these fields will be fetched from the server. Table 10.17. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 10.18. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 10.19. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 10.20. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 10.6. baremetal allocation unset Unset baremetal allocation properties. Usage: Table 10.21. Positional arguments Value Summary <allocation> Name or uuid of the allocation Table 10.22. Command arguments Value Summary -h, --help Show this help message and exit --name Unset the name of the allocation --extra <key> Extra property to unset on this baremetal allocation (repeat option to unset multiple extra property). 10.7. baremetal chassis create Create a new chassis. Usage: Table 10.23. Command arguments Value Summary -h, --help Show this help message and exit --description <description> Description for the chassis --extra <key=value> Record arbitrary key/value metadata. can be specified multiple times. 
--uuid <uuid> Unique uuid of the chassis Table 10.24. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 10.25. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 10.26. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 10.27. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 10.8. baremetal chassis delete Delete a chassis. Usage: Table 10.28. Positional arguments Value Summary <chassis> Uuids of chassis to delete Table 10.29. Command arguments Value Summary -h, --help Show this help message and exit 10.9. baremetal chassis list List the chassis. Usage: Table 10.30. Command arguments Value Summary -h, --help Show this help message and exit --fields <field> [<field> ... ] One or more chassis fields. only these fields will be fetched from the server. Cannot be used when --long is specified. --limit <limit> Maximum number of chassis to return per request, 0 for no limit. Default is the maximum number used by the Baremetal API Service. --long Show detailed information about the chassis --marker <chassis> Chassis uuid (for example, of the last chassis in the list from a request). Returns the list of chassis after this UUID. --sort <key>[:<direction>] Sort output by specified chassis fields and directions (asc or desc) (default: asc). Multiple fields and directions can be specified, separated by comma. Table 10.31. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 10.32. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 10.33. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 10.34. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 10.10. baremetal chassis set Set chassis properties. Usage: Table 10.35. Positional arguments Value Summary <chassis> Uuid of the chassis Table 10.36. Command arguments Value Summary -h, --help Show this help message and exit --description <description> Set the description of the chassis --extra <key=value> Extra to set on this chassis (repeat option to set multiple extras) 10.11. 
baremetal chassis show Show chassis details. Usage: Table 10.37. Positional arguments Value Summary <chassis> Uuid of the chassis Table 10.38. Command arguments Value Summary -h, --help Show this help message and exit --fields <field> [<field> ... ] One or more chassis fields. only these fields will be fetched from the server. Table 10.39. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 10.40. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 10.41. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 10.42. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 10.12. baremetal chassis unset Unset chassis properties. Usage: Table 10.43. Positional arguments Value Summary <chassis> Uuid of the chassis Table 10.44. Command arguments Value Summary -h, --help Show this help message and exit --description Clear the chassis description --extra <key> Extra to unset on this chassis (repeat option to unset multiple extras) 10.13. baremetal conductor list List baremetal conductors Usage: Table 10.45. Command arguments Value Summary -h, --help Show this help message and exit --limit <limit> Maximum number of conductors to return per request, 0 for no limit. Default is the maximum number used by the Baremetal API Service. --marker <conductor> Hostname of the conductor (for example, of the last conductor in the list from a request). Returns the list of conductors after this conductor. --sort <key>[:<direction>] Sort output by specified conductor fields and directions (asc or desc) (default: asc). Multiple fields and directions can be specified, separated by comma. --long Show detailed information about the conductors. --fields <field> [<field> ... ] One or more conductor fields. only these fields will be fetched from the server. Can not be used when -- long is specified. Table 10.46. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 10.47. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 10.48. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 10.49. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. 
Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 10.14. baremetal conductor show Show baremetal conductor details Usage: Table 10.50. Positional arguments Value Summary <conductor> Hostname of the conductor Table 10.51. Command arguments Value Summary -h, --help Show this help message and exit --fields <field> [<field> ... ] One or more conductor fields. only these fields will be fetched from the server. Table 10.52. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 10.53. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 10.54. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 10.55. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 10.15. baremetal create Create resources from files Usage: Table 10.56. Positional arguments Value Summary <file> File (.yaml or .json) containing descriptions of the resources to create. Can be specified multiple times. Table 10.57. Command arguments Value Summary -h, --help Show this help message and exit 10.16. baremetal deploy template create Create a new deploy template Usage: Table 10.58. Positional arguments Value Summary <name> Unique name for this deploy template. must be a valid trait name Table 10.59. Command arguments Value Summary -h, --help Show this help message and exit --uuid <uuid> Uuid of the deploy template. --extra <key=value> Record arbitrary key/value metadata. can be specified multiple times. --steps <steps> The deploy steps. may be the path to a yaml file containing the deploy steps; OR - , with the deploy steps being read from standard input; OR a JSON string. The value should be a list of deploy-step dictionaries; each dictionary should have keys interface , step , args and priority . Table 10.60. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 10.61. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 10.62. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 10.63. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 10.17. baremetal deploy template delete Delete deploy template(s). Usage: Table 10.64. Positional arguments Value Summary <template> Name(s) or uuid(s) of the deploy template(s) to delete. Table 10.65. 
Command arguments Value Summary -h, --help Show this help message and exit 10.18. baremetal deploy template list List baremetal deploy templates. Usage: Table 10.66. Command arguments Value Summary -h, --help Show this help message and exit --limit <limit> Maximum number of deploy templates to return per request, 0 for no limit. Default is the maximum number used by the Baremetal API Service. --marker <template> Deploytemplate uuid (for example, of the last deploy template in the list from a request). Returns the list of deploy templates after this UUID. --sort <key>[:<direction>] Sort output by specified deploy template fields and directions (asc or desc) (default: asc). Multiple fields and directions can be specified, separated by comma. --long Show detailed information about deploy templates. --fields <field> [<field> ... ] One or more deploy template fields. only these fields will be fetched from the server. Can not be used when --long is specified. Table 10.67. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 10.68. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 10.69. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 10.70. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 10.19. baremetal deploy template set Set baremetal deploy template properties. Usage: Table 10.71. Positional arguments Value Summary <template> Name or uuid of the deploy template Table 10.72. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Set unique name of the deploy template. must be a valid trait name. --steps <steps> The deploy steps. may be the path to a yaml file containing the deploy steps; OR - , with the deploy steps being read from standard input; OR a JSON string. The value should be a list of deploy-step dictionaries; each dictionary should have keys interface , step , args and priority . --extra <key=value> Extra to set on this baremetal deploy template (repeat option to set multiple extras). 10.20. baremetal deploy template show Show baremetal deploy template details. Usage: Table 10.73. Positional arguments Value Summary <template> Name or uuid of the deploy template. Table 10.74. Command arguments Value Summary -h, --help Show this help message and exit --fields <field> [<field> ... ] One or more deploy template fields. only these fields will be fetched from the server. Table 10.75. 
Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 10.76. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 10.77. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 10.78. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 10.21. baremetal deploy template unset Unset baremetal deploy template properties. Usage: Table 10.79. Positional arguments Value Summary <template> Name or uuid of the deploy template Table 10.80. Command arguments Value Summary -h, --help Show this help message and exit --extra <key> Extra to unset on this baremetal deploy template (repeat option to unset multiple extras). 10.22. baremetal driver list List the enabled drivers. Usage: Table 10.81. Command arguments Value Summary -h, --help Show this help message and exit --type <type> Type of driver ("classic" or "dynamic"). the default is to list all of them. --long Show detailed information about the drivers. --fields <field> [<field> ... ] One or more node fields. only these fields will be fetched from the server. Can not be used when --long is specified. Table 10.82. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 10.83. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 10.84. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 10.85. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 10.23. baremetal driver passthru call Call a vendor passthru method for a driver. Usage: Table 10.86. Positional arguments Value Summary <driver> Name of the driver. <method> Vendor passthru method to be called. Table 10.87. Command arguments Value Summary -h, --help Show this help message and exit --arg <key=value> Argument to pass to the passthru method (repeat option to specify multiple arguments). --http-method <http-method> The http method to use in the passthru request. one of DELETE, GET, PATCH, POST, PUT. Defaults to POST . Table 10.88. 
Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 10.89. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 10.90. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 10.91. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 10.24. baremetal driver passthru list List available vendor passthru methods for a driver. Usage: Table 10.92. Positional arguments Value Summary <driver> Name of the driver. Table 10.93. Command arguments Value Summary -h, --help Show this help message and exit Table 10.94. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 10.95. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 10.96. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 10.97. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 10.25. baremetal driver property list List the driver properties. Usage: Table 10.98. Positional arguments Value Summary <driver> Name of the driver. Table 10.99. Command arguments Value Summary -h, --help Show this help message and exit Table 10.100. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 10.101. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 10.102. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 10.103. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. 
--fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 10.26. baremetal driver raid property list List a driver's RAID logical disk properties. Usage: Table 10.104. Positional arguments Value Summary <driver> Name of the driver. Table 10.105. Command arguments Value Summary -h, --help Show this help message and exit Table 10.106. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 10.107. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 10.108. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 10.109. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 10.27. baremetal driver show Show information about a driver. Usage: Table 10.110. Positional arguments Value Summary <driver> Name of the driver. Table 10.111. Command arguments Value Summary -h, --help Show this help message and exit --fields <field> [<field> ... ] One or more node fields. only these fields will be fetched from the server. Table 10.112. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 10.113. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 10.114. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 10.115. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 10.28. baremetal node abort Set provision state of baremetal node to abort Usage: Table 10.116. Positional arguments Value Summary <node> Name or uuid of the node. Table 10.117. Command arguments Value Summary -h, --help Show this help message and exit 10.29. baremetal node add trait Add traits to a node. Usage: Table 10.118. Positional arguments Value Summary <node> Name or uuid of the node <trait> Trait(s) to add Table 10.119. Command arguments Value Summary -h, --help Show this help message and exit 10.30. baremetal node adopt Set provision state of baremetal node to adopt Usage: Table 10.120. 
Positional arguments Value Summary <node> Name or uuid of the node. Table 10.121. Command arguments Value Summary -h, --help Show this help message and exit --wait [<time-out>] Wait for a node to reach the desired state, active. Optionally takes a timeout value (in seconds). The default value is 0, meaning it will wait indefinitely. 10.31. baremetal node bios setting list List a node's BIOS settings. Usage: Table 10.122. Positional arguments Value Summary <node> Name or uuid of the node Table 10.123. Command arguments Value Summary -h, --help Show this help message and exit --long Show detailed information about the bios settings. --fields <field> [<field> ... ] One or more node fields. only these fields will be fetched from the server. Can not be used when --long is specified. Table 10.124. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 10.125. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 10.126. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 10.127. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 10.32. baremetal node bios setting show Show a specific BIOS setting for a node. Usage: Table 10.128. Positional arguments Value Summary <node> Name or uuid of the node <setting name> Setting name to show Table 10.129. Command arguments Value Summary -h, --help Show this help message and exit Table 10.130. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 10.131. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 10.132. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 10.133. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 10.33. baremetal node boot device set Set the boot device for a node Usage: Table 10.134. Positional arguments Value Summary <node> Name or uuid of the node <device> One of bios, cdrom, disk, pxe, safe, wanboot Table 10.135. Command arguments Value Summary -h, --help Show this help message and exit --persistent Make changes persistent for all future boots 10.34. 
baremetal node boot device show Show the boot device information for a node Usage: Table 10.136. Positional arguments Value Summary <node> Name or uuid of the node Table 10.137. Command arguments Value Summary -h, --help Show this help message and exit --supported Show the supported boot devices Table 10.138. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 10.139. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 10.140. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 10.141. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 10.35. baremetal node boot mode set Set the boot mode for the baremetal node deployment Usage: Table 10.142. Positional arguments Value Summary <node> Name or uuid of the node. <boot_mode> The boot mode to set for node (uefi/bios) Table 10.143. Command arguments Value Summary -h, --help Show this help message and exit 10.36. baremetal node clean Set provision state of baremetal node to clean Usage: Table 10.144. Positional arguments Value Summary <node> Name or uuid of the node. Table 10.145. Command arguments Value Summary -h, --help Show this help message and exit --wait [<time-out>] Wait for a node to reach the desired state, manageable. Optionally takes a timeout value (in seconds). The default value is 0, meaning it will wait indefinitely. --clean-steps <clean-steps> The clean steps. may be the path to a yaml file containing the clean steps; OR - , with the clean steps being read from standard input; OR a JSON string. The value should be a list of clean-step dictionaries; each dictionary should have keys interface and step , and optional key args . 10.37. baremetal node console disable Disable console access for a node Usage: Table 10.146. Positional arguments Value Summary <node> Name or uuid of the node Table 10.147. Command arguments Value Summary -h, --help Show this help message and exit 10.38. baremetal node console enable Enable console access for a node Usage: Table 10.148. Positional arguments Value Summary <node> Name or uuid of the node Table 10.149. Command arguments Value Summary -h, --help Show this help message and exit 10.39. baremetal node console show Show console information for a node Usage: Table 10.150. Positional arguments Value Summary <node> Name or uuid of the node Table 10.151. Command arguments Value Summary -h, --help Show this help message and exit Table 10.152. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 10.153. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 10.154. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 10.155. 
Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 10.40. baremetal node create Register a new node with the baremetal service Usage: Table 10.156. Command arguments Value Summary -h, --help Show this help message and exit --chassis-uuid <chassis> Uuid of the chassis that this node belongs to. --driver <driver> Driver used to control the node [required]. --driver-info <key=value> Key/value pair used by the driver, such as out-of-band management credentials. Can be specified multiple times. --property <key=value> Key/value pair describing the physical characteristics of the node. This is exported to Nova and used by the scheduler. Can be specified multiple times. --extra <key=value> Record arbitrary key/value metadata. can be specified multiple times. --uuid <uuid> Unique uuid for the node. --name <name> Unique name for the node. --bios-interface <bios_interface> Bios interface used by the node's driver. this is only applicable when the specified --driver is a hardware type. --boot-interface <boot_interface> Boot interface used by the node's driver. this is only applicable when the specified --driver is a hardware type. --console-interface <console_interface> Console interface used by the node's driver. this is only applicable when the specified --driver is a hardware type. --deploy-interface <deploy_interface> Deploy interface used by the node's driver. this is only applicable when the specified --driver is a hardware type. --inspect-interface <inspect_interface> Inspect interface used by the node's driver. this is only applicable when the specified --driver is a hardware type. --management-interface <management_interface> Management interface used by the node's driver. this is only applicable when the specified --driver is a hardware type. --network-data <network data> Json string or a yaml file or - for stdin to read static network configuration for the baremetal node associated with this ironic node. Format of this file should comply with Nova network data metadata (network_data.json). Depending on ironic boot interface capabilities being used, network configuration may or may not been served to the node for offline network configuration. --network-interface <network_interface> Network interface used for switching node to cleaning/provisioning networks. --power-interface <power_interface> Power interface used by the node's driver. this is only applicable when the specified --driver is a hardware type. --raid-interface <raid_interface> Raid interface used by the node's driver. this is only applicable when the specified --driver is a hardware type. --rescue-interface <rescue_interface> Rescue interface used by the node's driver. this is only applicable when the specified --driver is a hardware type. --storage-interface <storage_interface> Storage interface used by the node's driver. --vendor-interface <vendor_interface> Vendor interface used by the node's driver. this is only applicable when the specified --driver is a hardware type. 
--resource-class <resource_class> Resource class for mapping nodes to nova flavors --conductor-group <conductor_group> Conductor group the node will belong to --automated-clean Enable automated cleaning for the node --no-automated-clean Explicitly disable automated cleaning for the node --owner <owner> Owner of the node. --lessee <lessee> Lessee of the node. --description <description> Description for the node. Table 10.157. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 10.158. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 10.159. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 10.160. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 10.41. baremetal node delete Unregister baremetal node(s) Usage: Table 10.161. Positional arguments Value Summary <node> Node(s) to delete (name or uuid) Table 10.162. Command arguments Value Summary -h, --help Show this help message and exit 10.42. baremetal node deploy Set provision state of baremetal node to deploy Usage: Table 10.163. Positional arguments Value Summary <node> Name or uuid of the node. Table 10.164. Command arguments Value Summary -h, --help Show this help message and exit --wait [<time-out>] Wait for a node to reach the desired state, active. Optionally takes a timeout value (in seconds). The default value is 0, meaning it will wait indefinitely. --config-drive <config-drive> A gzipped, base64-encoded configuration drive string OR the path to the configuration drive file OR the path to a directory containing the config drive files OR a JSON object to build config drive from. In case it's a directory, a config drive will be generated from it. In case it's a JSON object with optional keys meta_data , user_data and network_data , a config drive will be generated on the server side (see the bare metal API reference for more details). --deploy-steps <deploy-steps> The deploy steps. may be the path to a yaml file containing the deploy steps; OR - , with the deploy steps being read from standard input; OR a JSON string. The value should be a list of deploy-step dictionaries; each dictionary should have keys interface and step , and optional key args . 10.43. baremetal node history get Get history event for a baremetal node. Usage: Table 10.165. Positional arguments Value Summary <node> Name or uuid of the node. <event> Uuid of the event. Table 10.166. Command arguments Value Summary -h, --help Show this help message and exit Table 10.167. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 10.168. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 10.169. 
Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 10.170. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 10.44. baremetal node history list Get history events for a baremetal node. Usage: Table 10.171. Positional arguments Value Summary <node> Name or uuid of the node. Table 10.172. Command arguments Value Summary -h, --help Show this help message and exit --long Show detailed information about the history events. Table 10.173. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 10.174. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 10.175. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 10.176. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 10.45. baremetal node inject nmi Inject NMI to baremetal node Usage: Table 10.177. Positional arguments Value Summary <node> Name or uuid of the node. Table 10.178. Command arguments Value Summary -h, --help Show this help message and exit 10.46. baremetal node inspect Set provision state of baremetal node to inspect Usage: Table 10.179. Positional arguments Value Summary <node> Name or uuid of the node. Table 10.180. Command arguments Value Summary -h, --help Show this help message and exit --wait [<time-out>] Wait for a node to reach the desired state, manageable. Optionally takes a timeout value (in seconds). The default value is 0, meaning it will wait indefinitely. 10.47. baremetal node list List baremetal nodes Usage: Table 10.181. Command arguments Value Summary -h, --help Show this help message and exit --limit <limit> Maximum number of nodes to return per request, 0 for no limit. Default is the maximum number used by the Baremetal API Service. --marker <node> Node uuid (for example, of the last node in the list from a request). Returns the list of nodes after this UUID. --sort <key>[:<direction>] Sort output by specified node fields and directions (asc or desc) (default: asc). Multiple fields and directions can be specified, separated by comma. --maintenance Limit list to nodes in maintenance mode --no-maintenance Limit list to nodes not in maintenance mode --retired Limit list to retired nodes. --no-retired Limit list to not retired nodes. --fault <fault> List nodes in specified fault.
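As an illustration, several of these filters can be combined in one call; a hypothetical listing of deployable nodes, limited to a few common node fields, might be:
openstack baremetal node list --no-maintenance --provision-state available --fields uuid name provision_state power_state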
--associated List only nodes associated with an instance. --unassociated List only nodes not associated with an instance. --provision-state <provision state> List nodes in specified provision state. --driver <driver> Limit list to nodes with driver <driver> --resource-class <resource class> Limit list to nodes with resource class <resource class> --conductor-group <conductor_group> Limit list to nodes with conductor group <conductor group> --conductor <conductor> Limit list to nodes with conductor <conductor> --chassis <chassis UUID> Limit list to nodes of this chassis --owner <owner> Limit list to nodes with owner <owner> --lessee <lessee> Limit list to nodes with lessee <lessee> --description-contains <description_contains> Limit list to nodes whose description contains <description_contains> --long Show detailed information about the nodes. --fields <field> [<field> ... ] One or more node fields. only these fields will be fetched from the server. Can not be used when --long is specified. Table 10.182. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 10.183. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 10.184. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 10.185. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 10.48. baremetal node maintenance set Set baremetal node to maintenance mode Usage: Table 10.186. Positional arguments Value Summary <node> Name or uuid of the node. Table 10.187. Command arguments Value Summary -h, --help Show this help message and exit --reason <reason> Reason for setting maintenance mode. 10.49. baremetal node maintenance unset Unset baremetal node from maintenance mode Usage: Table 10.188. Positional arguments Value Summary <node> Name or uuid of the node. Table 10.189. Command arguments Value Summary -h, --help Show this help message and exit 10.50. baremetal node manage Set provision state of baremetal node to manage Usage: Table 10.190. Positional arguments Value Summary <node> Name or uuid of the node. Table 10.191. Command arguments Value Summary -h, --help Show this help message and exit --wait [<time-out>] Wait for a node to reach the desired state, manageable. Optionally takes a timeout value (in seconds). The default value is 0, meaning it will wait indefinitely. 10.51. baremetal node passthru call Call a vendor passthru method for a node Usage: Table 10.192. Positional arguments Value Summary <node> Name or uuid of the node <method> Vendor passthru method to be executed Table 10.193.
Command arguments Value Summary -h, --help Show this help message and exit --arg <key=value> Argument to pass to the passthru method (repeat option to specify multiple arguments) --http-method <http-method> The http method to use in the passthru request. one of DELETE, GET, PATCH, POST, PUT. Defaults to POST. 10.52. baremetal node passthru list List vendor passthru methods for a node Usage: Table 10.194. Positional arguments Value Summary <node> Name or uuid of the node Table 10.195. Command arguments Value Summary -h, --help Show this help message and exit Table 10.196. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 10.197. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 10.198. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 10.199. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 10.53. baremetal node power off Power off a node Usage: Table 10.200. Positional arguments Value Summary <node> Name or uuid of the node. Table 10.201. Command arguments Value Summary -h, --help Show this help message and exit --power-timeout <power-timeout> Timeout (in seconds, positive integer) to wait for the target power state before erroring out. --soft Request graceful power-off. 10.54. baremetal node power on Power on a node Usage: Table 10.202. Positional arguments Value Summary <node> Name or uuid of the node. Table 10.203. Command arguments Value Summary -h, --help Show this help message and exit --power-timeout <power-timeout> Timeout (in seconds, positive integer) to wait for the target power state before erroring out. 10.55. baremetal node provide Set provision state of baremetal node to provide Usage: Table 10.204. Positional arguments Value Summary <node> Name or uuid of the node. Table 10.205. Command arguments Value Summary -h, --help Show this help message and exit --wait [<time-out>] Wait for a node to reach the desired state, available. Optionally takes a timeout value (in seconds). The default value is 0, meaning it will wait indefinitely. 10.56. baremetal node reboot Reboot baremetal node Usage: Table 10.206. Positional arguments Value Summary <node> Name or uuid of the node. Table 10.207. Command arguments Value Summary -h, --help Show this help message and exit --soft Request graceful reboot. --power-timeout <power-timeout> Timeout (in seconds, positive integer) to wait for the target power state before erroring out. 10.57. baremetal node rebuild Set provision state of baremetal node to rebuild Usage: Table 10.208. Positional arguments Value Summary <node> Name or uuid of the node. Table 10.209. 
Command arguments Value Summary -h, --help Show this help message and exit --wait [<time-out>] Wait for a node to reach the desired state, active. Optionally takes a timeout value (in seconds). The default value is 0, meaning it will wait indefinitely. --config-drive <config-drive> A gzipped, base64-encoded configuration drive string OR the path to the configuration drive file OR the path to a directory containing the config drive files OR a JSON object to build config drive from. In case it's a directory, a config drive will be generated from it. In case it's a JSON object with optional keys meta_data , user_data and network_data , a config drive will be generated on the server side (see the bare metal API reference for more details). --deploy-steps <deploy-steps> The deploy steps in json format. may be the path to a file containing the deploy steps; OR - , with the deploy steps being read from standard input; OR a string. The value should be a list of deploy-step dictionaries; each dictionary should have keys interface , step , priority and optional key args . 10.58. baremetal node remove trait Remove trait(s) from a node. Usage: Table 10.210. Positional arguments Value Summary <node> Name or uuid of the node <trait> Trait(s) to remove Table 10.211. Command arguments Value Summary -h, --help Show this help message and exit --all Remove all traits 10.59. baremetal node rescue Set provision state of baremetal node to rescue Usage: Table 10.212. Positional arguments Value Summary <node> Name or uuid of the node. Table 10.213. Command arguments Value Summary -h, --help Show this help message and exit --wait [<time-out>] Wait for a node to reach the desired state, rescue. Optionally takes a timeout value (in seconds). The default value is 0, meaning it will wait indefinitely. --rescue-password <rescue-password> The password that will be used to login to the rescue ramdisk. The value should be a non-empty string. 10.60. baremetal node secure boot off Turn secure boot off Usage: Table 10.214. Positional arguments Value Summary <node> Name or uuid of the node Table 10.215. Command arguments Value Summary -h, --help Show this help message and exit 10.61. baremetal node secure boot on Turn secure boot on Usage: Table 10.216. Positional arguments Value Summary <node> Name or uuid of the node Table 10.217. Command arguments Value Summary -h, --help Show this help message and exit 10.62. baremetal node set Set baremetal properties Usage: Table 10.218. Positional arguments Value Summary <node> Name or uuid of the node. Table 10.219. 
Command arguments Value Summary -h, --help Show this help message and exit --instance-uuid <uuid> Set instance uuid of node to <uuid> --name <name> Set the name of the node --chassis-uuid <chassis UUID> Set the chassis for the node --driver <driver> Set the driver for the node --bios-interface <bios_interface> Set the bios interface for the node --reset-bios-interface Reset the bios interface to its hardware type default --boot-interface <boot_interface> Set the boot interface for the node --reset-boot-interface Reset the boot interface to its hardware type default --console-interface <console_interface> Set the console interface for the node --reset-console-interface Reset the console interface to its hardware type default --deploy-interface <deploy_interface> Set the deploy interface for the node --reset-deploy-interface Reset the deploy interface to its hardware type default --inspect-interface <inspect_interface> Set the inspect interface for the node --reset-inspect-interface Reset the inspect interface to its hardware type default --management-interface <management_interface> Set the management interface for the node --reset-management-interface Reset the management interface to its hardware type default --network-interface <network_interface> Set the network interface for the node --reset-network-interface Reset the network interface to its hardware type default --network-data <network data> Json string or a yaml file or - for stdin to read static network configuration for the baremetal node associated with this ironic node. Format of this file should comply with Nova network data metadata (network_data.json). Depending on ironic boot interface capabilities being used, network configuration may or may not be served to the node for offline network configuration. --power-interface <power_interface> Set the power interface for the node --reset-power-interface Reset the power interface to its hardware type default --raid-interface <raid_interface> Set the raid interface for the node --reset-raid-interface Reset the raid interface to its hardware type default --rescue-interface <rescue_interface> Set the rescue interface for the node --reset-rescue-interface Reset the rescue interface to its hardware type default --storage-interface <storage_interface> Set the storage interface for the node --reset-storage-interface Reset the storage interface to its hardware type default --vendor-interface <vendor_interface> Set the vendor interface for the node --reset-vendor-interface Reset the vendor interface to its hardware type default --reset-interfaces Reset all interfaces not specified explicitly to their default implementations. Only valid with --driver. --resource-class <resource_class> Set the resource class for the node --conductor-group <conductor_group> Set the conductor group for the node --automated-clean Enable automated cleaning for the node --no-automated-clean Explicitly disable automated cleaning for the node --protected Mark the node as protected --protected-reason <protected_reason> Set the reason for marking the node as protected --retired Mark the node as retired --retired-reason <retired_reason> Set the reason for marking the node as retired --target-raid-config <target_raid_config> Set the target raid configuration (json) for the node. This can be one of: 1. a file containing YAML data of the RAID configuration; 2. "-" to read the contents from standard input; or 3. a valid JSON string.
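For instance, a minimal target RAID configuration could be passed inline as JSON; the logical-disk layout below is only a sketch and must match what the node's RAID interface actually supports:
openstack baremetal node set example-node --target-raid-config '{"logical_disks": [{"size_gb": 100, "raid_level": "1", "is_root_volume": true}]}'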
--property <key=value> Property to set on this baremetal node (repeat option to set multiple properties) --extra <key=value> Extra to set on this baremetal node (repeat option to set multiple extras) --driver-info <key=value> Driver information to set on this baremetal node (repeat option to set multiple driver infos) --instance-info <key=value> Instance information to set on this baremetal node (repeat option to set multiple instance infos) --owner <owner> Set the owner for the node --lessee <lessee> Set the lessee for the node --description <description> Set the description for the node 10.63. baremetal node show Show baremetal node details Usage: Table 10.220. Positional arguments Value Summary <node> Name or uuid of the node (or instance uuid if --instance is specified) Table 10.221. Command arguments Value Summary -h, --help Show this help message and exit --instance <node> is an instance uuid. --fields <field> [<field> ... ] One or more node fields. only these fields will be fetched from the server. Table 10.222. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 10.223. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 10.224. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 10.225. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 10.64. baremetal node trait list List a node's traits. Usage: Table 10.226. Positional arguments Value Summary <node> Name or uuid of the node Table 10.227. Command arguments Value Summary -h, --help Show this help message and exit Table 10.228. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 10.229. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 10.230. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 10.231. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 10.65. baremetal node undeploy Set provision state of baremetal node to deleted Usage: Table 10.232. Positional arguments Value Summary <node> Name or uuid of the node. Table 10.233. 
Command arguments Value Summary -h, --help Show this help message and exit --wait [<time-out>] Wait for a node to reach the desired state, available. Optionally takes a timeout value (in seconds). The default value is 0, meaning it will wait indefinitely. 10.66. baremetal node unrescue Set provision state of baremetal node to unrescue Usage: Table 10.234. Positional arguments Value Summary <node> Name or uuid of the node. Table 10.235. Command arguments Value Summary -h, --help Show this help message and exit --wait [<time-out>] Wait for a node to reach the desired state, active. Optionally takes a timeout value (in seconds). The default value is 0, meaning it will wait indefinitely. 10.67. baremetal node unset Unset baremetal properties Usage: Table 10.236. Positional arguments Value Summary <node> Name or uuid of the node. Table 10.237. Command arguments Value Summary -h, --help Show this help message and exit --instance-uuid Unset instance uuid on this baremetal node --name Unset the name of the node --resource-class Unset the resource class of the node --target-raid-config Unset the target raid configuration of the node --property <key> Property to unset on this baremetal node (repeat option to unset multiple properties) --extra <key> Extra to unset on this baremetal node (repeat option to unset multiple extras) --driver-info <key> Driver information to unset on this baremetal node (repeat option to unset multiple driver information entries) --instance-info <key> Instance information to unset on this baremetal node (repeat option to unset multiple instance information entries) --chassis-uuid Unset chassis uuid on this baremetal node --bios-interface Unset bios interface on this baremetal node --boot-interface Unset boot interface on this baremetal node --console-interface Unset console interface on this baremetal node --deploy-interface Unset deploy interface on this baremetal node --inspect-interface Unset inspect interface on this baremetal node --network-data Unset network data on this baremetal node. --management-interface Unset management interface on this baremetal node --network-interface Unset network interface on this baremetal node --power-interface Unset power interface on this baremetal node --raid-interface Unset raid interface on this baremetal node --rescue-interface Unset rescue interface on this baremetal node --storage-interface Unset storage interface on this baremetal node --vendor-interface Unset vendor interface on this baremetal node --conductor-group Unset conductor group for this baremetal node (the default group will be used) --automated-clean Unset automated clean option on this baremetal node (the value from configuration will be used) --protected Unset the protected flag on the node --protected-reason Unset the protected reason (gets unset automatically when protected is unset) --retired Unset the retired flag on the node --retired-reason Unset the retired reason (gets unset automatically when retired is unset) --owner Unset the owner field of the node --lessee Unset the lessee field of the node --description Unset the description field of the node 10.68. baremetal node validate Validate a node's driver interfaces Usage: Table 10.238. Positional arguments Value Summary <node> Name or uuid of the node Table 10.239. Command arguments Value Summary -h, --help Show this help message and exit Table 10.240.
Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 10.241. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 10.242. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 10.243. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 10.69. baremetal node vif attach Attach VIF to a given node Usage: Table 10.244. Positional arguments Value Summary <node> Name or uuid of the node <vif-id> Name or uuid of the vif to attach to a node. Table 10.245. Command arguments Value Summary -h, --help Show this help message and exit --port-uuid <port-uuid> Uuid of the baremetal port to attach the vif to. --vif-info <key=value> Record arbitrary key/value metadata. can be specified multiple times. The mandatory id parameter cannot be specified as a key. 10.70. baremetal node vif detach Detach VIF from a given node Usage: Table 10.246. Positional arguments Value Summary <node> Name or uuid of the node <vif-id> Name or uuid of the vif to detach from a node. Table 10.247. Command arguments Value Summary -h, --help Show this help message and exit 10.71. baremetal node vif list Show attached VIFs for a node Usage: Table 10.248. Positional arguments Value Summary <node> Name or uuid of the node Table 10.249. Command arguments Value Summary -h, --help Show this help message and exit Table 10.250. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 10.251. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 10.252. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 10.253. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 10.72. baremetal port create Create a new port Usage: Table 10.254. Positional arguments Value Summary <address> Mac address for this port. 
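As an illustration (the MAC address and node UUID shown are placeholders), a PXE-enabled port could be created with a command such as:
openstack baremetal port create 52:54:00:2a:7b:11 --node <node-uuid> --pxe-enabled true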
Table 10.255. Command arguments Value Summary -h, --help Show this help message and exit --node <uuid> Uuid of the node that this port belongs to. --uuid <uuid> Uuid of the port. --extra <key=value> Record arbitrary key/value metadata. argument can be specified multiple times. --local-link-connection <key=value> Key/value metadata describing local link connection information. Valid keys are switch_info , switch_id , port_id and hostname . The keys switch_id and port_id are required. In case of a Smart NIC port, the required keys are port_id and hostname . Argument can be specified multiple times. -l <key=value> Deprecated. please use --local-link-connection instead. Key/value metadata describing Local link connection information. Valid keys are switch_info , switch_id , and port_id . The keys switch_id and port_id are required. Can be specified multiple times. --pxe-enabled <boolean> Indicates whether this port should be used when pxe booting this Node. --port-group <uuid> Uuid of the port group that this port belongs to. --physical-network <physical network> Name of the physical network to which this port is connected. --is-smartnic Indicates whether this port is a smart nic port Table 10.256. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 10.257. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 10.258. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 10.259. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 10.73. baremetal port delete Delete port(s). Usage: Table 10.260. Positional arguments Value Summary <port> Uuid(s) of the port(s) to delete. Table 10.261. Command arguments Value Summary -h, --help Show this help message and exit 10.74. baremetal port group create Create a new baremetal port group. Usage: Table 10.262. Command arguments Value Summary -h, --help Show this help message and exit --node <uuid> Uuid of the node that this port group belongs to. --address <mac-address> Mac address for this port group. --name NAME Name of the port group. --uuid UUID Uuid of the port group. --extra <key=value> Record arbitrary key/value metadata. can be specified multiple times. --mode MODE Mode of the port group. for possible values, refer to https://www.kernel.org/doc/Documentation/networking/bonding.txt. --property <key=value> Key/value property related to this port group's configuration. Can be specified multiple times. --support-standalone-ports Ports that are members of this port group can be used as stand-alone ports. (default) --unsupport-standalone-ports Ports that are members of this port group cannot be used as stand-alone ports. Table 10.263. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 10.264.
JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 10.265. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 10.266. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 10.75. baremetal port group delete Unregister baremetal port group(s). Usage: Table 10.267. Positional arguments Value Summary <port group> Port group(s) to delete (name or uuid). Table 10.268. Command arguments Value Summary -h, --help Show this help message and exit 10.76. baremetal port group list List baremetal port groups. Usage: Table 10.269. Command arguments Value Summary -h, --help Show this help message and exit --limit <limit> Maximum number of port groups to return per request, 0 for no limit. Default is the maximum number used by the Baremetal API Service. --marker <port group> Port group uuid (for example, of the last port group in the list from a request). Returns the list of port groups after this UUID. --sort <key>[:<direction>] Sort output by specified port group fields and directions (asc or desc) (default: asc). Multiple fields and directions can be specified, separated by comma. --address <mac-address> Only show information for the port group with this mac address. --node <node> Only list port groups of this node (name or uuid). --long Show detailed information about the port groups. --fields <field> [<field> ... ] One or more port group fields. only these fields will be fetched from the server. Can not be used when -- long is specified. Table 10.270. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 10.271. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 10.272. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 10.273. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 10.77. baremetal port group set Set baremetal port group properties. Usage: Table 10.274. Positional arguments Value Summary <port group> Name or uuid of the port group. Table 10.275. Command arguments Value Summary -h, --help Show this help message and exit --node <uuid> Update uuid of the node that this port group belongs to. --address <mac-address> Mac address for this port group. --name <name> Name of the port group. 
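For example (the group name and bonding mode are hypothetical), an existing group could be renamed and switched to an active-backup bond with:
openstack baremetal port group set example-portgroup --name bond0 --mode active-backup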
--extra <key=value> Extra to set on this baremetal port group (repeat option to set multiple extras). --mode MODE Mode of the port group. for possible values, refer to https://www.kernel.org/doc/Documentation/networking/bonding.txt. --property <key=value> Key/value property related to this port group's configuration (repeat option to set multiple properties). --support-standalone-ports Ports that are members of this port group can be used as stand-alone ports. --unsupport-standalone-ports Ports that are members of this port group cannot be used as stand-alone ports. 10.78. baremetal port group show Show baremetal port group details. Usage: Table 10.276. Positional arguments Value Summary <id> Uuid or name of the port group (or mac address if --address is specified). Table 10.277. Command arguments Value Summary -h, --help Show this help message and exit --address <id> is the mac address (instead of uuid or name) of the port group. --fields <field> [<field> ... ] One or more port group fields. only these fields will be fetched from the server. Table 10.278. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 10.279. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 10.280. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 10.281. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 10.79. baremetal port group unset Unset baremetal port group properties. Usage: Table 10.282. Positional arguments Value Summary <port group> Name or uuid of the port group. Table 10.283. Command arguments Value Summary -h, --help Show this help message and exit --name Unset the name of the port group. --address Unset the address of the port group. --extra <key> Extra to unset on this baremetal port group (repeat option to unset multiple extras). --property <key> Property to unset on this baremetal port group (repeat option to unset multiple properties). 10.80. baremetal port list List baremetal ports. Usage: Table 10.284. Command arguments Value Summary -h, --help Show this help message and exit --address <mac-address> Only show information for the port with this mac address. --node <node> Only list ports of this node (name or uuid). --port-group <port group> Only list ports of this port group (name or uuid). --limit <limit> Maximum number of ports to return per request, 0 for no limit. Default is the maximum number used by the Baremetal API Service. --marker <port> Port uuid (for example, of the last port in the list from a request). Returns the list of ports after this UUID. --sort <key>[:<direction>] Sort output by specified port fields and directions (asc or desc) (default: asc). Multiple fields and directions can be specified, separated by comma. --long Show detailed information about ports. --fields <field> [<field> ... ] One or more port fields. only these fields will be fetched from the server. Can not be used when --long is specified. Table 10.285.
Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 10.286. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 10.287. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 10.288. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 10.81. baremetal port set Set baremetal port properties. Usage: Table 10.289. Positional arguments Value Summary <port> Uuid of the port Table 10.290. Command arguments Value Summary -h, --help Show this help message and exit --node <uuid> Set uuid of the node that this port belongs to --address <address> Set mac address for this port --extra <key=value> Extra to set on this baremetal port (repeat option to set multiple extras) --port-group <uuid> Set uuid of the port group that this port belongs to. --local-link-connection <key=value> Key/value metadata describing local link connection information. Valid keys are switch_info , switch_id , port_id and hostname . The keys switch_id and port_id are required. In case of a Smart NIC port, the required keys are port_id and hostname . Argument can be specified multiple times. --pxe-enabled Indicates that this port should be used when pxe booting this node (default) --pxe-disabled Indicates that this port should not be used when pxe booting this node --physical-network <physical network> Set the name of the physical network to which this port is connected. --is-smartnic Set port to be smart nic port 10.82. baremetal port show Show baremetal port details. Usage: Table 10.291. Positional arguments Value Summary <id> Uuid of the port (or mac address if --address is specified). Table 10.292. Command arguments Value Summary -h, --help Show this help message and exit --address <id> is the mac address (instead of the uuid) of the port. --fields <field> [<field> ... ] One or more port fields. only these fields will be fetched from the server. Table 10.293. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 10.294. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 10.295. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 10.296. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. 
Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 10.83. baremetal port unset Unset baremetal port properties. Usage: Table 10.297. Positional arguments Value Summary <port> Uuid of the port. Table 10.298. Command arguments Value Summary -h, --help Show this help message and exit --extra <key> Extra to unset on this baremetal port (repeat option to unset multiple extras) --port-group Remove port from the port group --physical-network Unset the physical network on this baremetal port. --is-smartnic Set port as not smart nic port 10.84. baremetal volume connector create Create a new baremetal volume connector. Usage: Table 10.299. Command arguments Value Summary -h, --help Show this help message and exit --node <uuid> Uuid of the node that this volume connector belongs to. --type <type> Type of the volume connector. can be iqn , ip , mac , wwnn , wwpn , port , portgroup . --connector-id <connector id> Id of the volume connector in the specified type. for example, the iSCSI initiator IQN for the node if the type is iqn . --uuid <uuid> Uuid of the volume connector. --extra <key=value> Record arbitrary key/value metadata. can be specified multiple times. Table 10.300. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 10.301. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 10.302. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 10.303. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 10.85. baremetal volume connector delete Unregister baremetal volume connector(s). Usage: Table 10.304. Positional arguments Value Summary <volume connector> Uuid(s) of the volume connector(s) to delete. Table 10.305. Command arguments Value Summary -h, --help Show this help message and exit 10.86. baremetal volume connector list List baremetal volume connectors. Usage: Table 10.306. Command arguments Value Summary -h, --help Show this help message and exit --node <node> Only list volume connectors of this node (name or UUID). --limit <limit> Maximum number of volume connectors to return per request, 0 for no limit. Default is the maximum number used by the Baremetal API Service. --marker <volume connector> Volume connector uuid (for example, of the last volume connector in the list from a request). Returns the list of volume connectors after this UUID. --sort <key>[:<direction>] Sort output by specified volume connector fields and directions (asc or desc) (default:asc). Multiple fields and directions can be specified, separated by comma. --long Show detailed information about volume connectors. --fields <field> [<field> ... ] One or more volume connector fields. only these fields will be fetched from the server. Can not be used when --long is specified. Table 10.307. 
Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 10.308. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 10.309. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 10.310. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 10.87. baremetal volume connector set Set baremetal volume connector properties. Usage: Table 10.311. Positional arguments Value Summary <volume connector> Uuid of the volume connector. Table 10.312. Command arguments Value Summary -h, --help Show this help message and exit --node <uuid> Uuid of the node that this volume connector belongs to. --type <type> Type of the volume connector. can be iqn , ip , mac , wwnn , wwpn , port , portgroup . --connector-id <connector id> Id of the volume connector in the specified type. --extra <key=value> Record arbitrary key/value metadata. can be specified multiple times. 10.88. baremetal volume connector show Show baremetal volume connector details. Usage: Table 10.313. Positional arguments Value Summary <id> Uuid of the volume connector. Table 10.314. Command arguments Value Summary -h, --help Show this help message and exit --fields <field> [<field> ... ] One or more volume connector fields. only these fields will be fetched from the server. Table 10.315. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 10.316. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 10.317. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 10.318. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 10.89. baremetal volume connector unset Unset baremetal volume connector properties. Usage: Table 10.319. Positional arguments Value Summary <volume connector> Uuid of the volume connector. Table 10.320. Command arguments Value Summary -h, --help Show this help message and exit --extra <key> Extra to unset (repeat option to unset multiple extras) 10.90. baremetal volume target create Create a new baremetal volume target. Usage: Table 10.321. 
Command arguments Value Summary -h, --help Show this help message and exit --node <uuid> Uuid of the node that this volume target belongs to. --type <volume type> Type of the volume target, e.g. iscsi , fibre_channel . --property <key=value> Key/value property related to the type of this volume target. Can be specified multiple times. --boot-index <boot index> Boot index of the volume target. --volume-id <volume id> Id of the volume associated with this target. --uuid <uuid> Uuid of the volume target. --extra <key=value> Record arbitrary key/value metadata. can be specified multiple times. Table 10.322. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 10.323. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 10.324. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 10.325. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 10.91. baremetal volume target delete Unregister baremetal volume target(s). Usage: Table 10.326. Positional arguments Value Summary <volume target> Uuid(s) of the volume target(s) to delete. Table 10.327. Command arguments Value Summary -h, --help Show this help message and exit 10.92. baremetal volume target list List baremetal volume targets. Usage: Table 10.328. Command arguments Value Summary -h, --help Show this help message and exit --node <node> Only list volume targets of this node (name or uuid). --limit <limit> Maximum number of volume targets to return per request, 0 for no limit. Default is the maximum number used by the Baremetal API Service. --marker <volume target> Volume target uuid (for example, of the last volume target in the list from a request). Returns the list of volume targets after this UUID. --sort <key>[:<direction>] Sort output by specified volume target fields and directions (asc or desc) (default:asc). Multiple fields and directions can be specified, separated by comma. --long Show detailed information about volume targets. --fields <field> [<field> ... ] One or more volume target fields. only these fields will be fetched from the server. Can not be used when --long is specified. Table 10.329. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 10.330. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 10.331. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 10.332. 
Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 10.93. baremetal volume target set Set baremetal volume target properties. Usage: Table 10.333. Positional arguments Value Summary <volume target> Uuid of the volume target. Table 10.334. Command arguments Value Summary -h, --help Show this help message and exit --node <uuid> Uuid of the node that this volume target belongs to. --type <volume type> Type of the volume target, e.g. iscsi , fibre_channel . --property <key=value> Key/value property related to the type of this volume target. Can be specified multiple times. --boot-index <boot index> Boot index of the volume target. --volume-id <volume id> Id of the volume associated with this target. --extra <key=value> Record arbitrary key/value metadata. can be specified multiple times. 10.94. baremetal volume target show Show baremetal volume target details. Usage: Table 10.335. Positional arguments Value Summary <id> Uuid of the volume target. Table 10.336. Command arguments Value Summary -h, --help Show this help message and exit --fields <field> [<field> ... ] One or more volume target fields. only these fields will be fetched from the server. Table 10.337. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 10.338. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 10.339. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 10.340. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 10.95. baremetal volume target unset Unset baremetal volume target properties. Usage: Table 10.341. Positional arguments Value Summary <volume target> Uuid of the volume target. Table 10.342. Command arguments Value Summary -h, --help Show this help message and exit --extra <key> Extra to unset (repeat option to unset multiple extras) --property <key> Property to unset on this baremetal volume target (repeat option to unset multiple properties). | [
"openstack baremetal allocation create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--resource-class RESOURCE_CLASS] [--trait TRAITS] [--candidate-node CANDIDATE_NODES] [--name NAME] [--uuid UUID] [--owner OWNER] [--extra <key=value>] [--wait [<time-out>]] [--node NODE]",
"openstack baremetal allocation delete [-h] <allocation> [<allocation> ...]",
"openstack baremetal allocation list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--limit <limit>] [--marker <allocation>] [--sort <key>[:<direction>]] [--node <node>] [--resource-class <resource_class>] [--state <state>] [--owner <owner>] [--long | --fields <field> [<field> ...]]",
"openstack baremetal allocation set [-h] [--name <name>] [--extra <key=value>] <allocation>",
"openstack baremetal allocation show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--fields <field> [<field> ...]] <id>",
"openstack baremetal allocation unset [-h] [--name] [--extra <key>] <allocation>",
"openstack baremetal chassis create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--description <description>] [--extra <key=value>] [--uuid <uuid>]",
"openstack baremetal chassis delete [-h] <chassis> [<chassis> ...]",
"openstack baremetal chassis list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--fields <field> [<field> ...]] [--limit <limit>] [--long] [--marker <chassis>] [--sort <key>[:<direction>]]",
"openstack baremetal chassis set [-h] [--description <description>] [--extra <key=value>] <chassis>",
"openstack baremetal chassis show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--fields <field> [<field> ...]] <chassis>",
"openstack baremetal chassis unset [-h] [--description] [--extra <key>] <chassis>",
"openstack baremetal conductor list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--limit <limit>] [--marker <conductor>] [--sort <key>[:<direction>]] [--long | --fields <field> [<field> ...]]",
"openstack baremetal conductor show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--fields <field> [<field> ...]] <conductor>",
"openstack baremetal create [-h] <file> [<file> ...]",
"openstack baremetal deploy template create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--uuid <uuid>] [--extra <key=value>] --steps <steps> <name>",
"openstack baremetal deploy template delete [-h] <template> [<template> ...]",
"openstack baremetal deploy template list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--limit <limit>] [--marker <template>] [--sort <key>[:<direction>]] [--long | --fields <field> [<field> ...]]",
"openstack baremetal deploy template set [-h] [--name <name>] [--steps <steps>] [--extra <key=value>] <template>",
"openstack baremetal deploy template show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--fields <field> [<field> ...]] <template>",
"openstack baremetal deploy template unset [-h] [--extra <key>] <template>",
"openstack baremetal driver list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--type <type>] [--long | --fields <field> [<field> ...]]",
"openstack baremetal driver passthru call [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--arg <key=value>] [--http-method <http-method>] <driver> <method>",
"openstack baremetal driver passthru list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] <driver>",
"openstack baremetal driver property list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] <driver>",
"openstack baremetal driver raid property list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] <driver>",
"openstack baremetal driver show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--fields <field> [<field> ...]] <driver>",
"openstack baremetal node abort [-h] <node>",
"openstack baremetal node add trait [-h] <node> <trait> [<trait> ...]",
"openstack baremetal node adopt [-h] [--wait [<time-out>]] <node>",
"openstack baremetal node bios setting list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--long | --fields <field> [<field> ...]] <node>",
"openstack baremetal node bios setting show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <node> <setting name>",
"openstack baremetal node boot device set [-h] [--persistent] <node> <device>",
"openstack baremetal node boot device show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--supported] <node>",
"openstack baremetal node boot mode set [-h] <node> <boot_mode>",
"openstack baremetal node clean [-h] [--wait [<time-out>]] --clean-steps <clean-steps> <node>",
"openstack baremetal node console disable [-h] <node>",
"openstack baremetal node console enable [-h] <node>",
"openstack baremetal node console show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <node>",
"openstack baremetal node create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--chassis-uuid <chassis>] --driver <driver> [--driver-info <key=value>] [--property <key=value>] [--extra <key=value>] [--uuid <uuid>] [--name <name>] [--bios-interface <bios_interface>] [--boot-interface <boot_interface>] [--console-interface <console_interface>] [--deploy-interface <deploy_interface>] [--inspect-interface <inspect_interface>] [--management-interface <management_interface>] [--network-data <network data>] [--network-interface <network_interface>] [--power-interface <power_interface>] [--raid-interface <raid_interface>] [--rescue-interface <rescue_interface>] [--storage-interface <storage_interface>] [--vendor-interface <vendor_interface>] [--resource-class <resource_class>] [--conductor-group <conductor_group>] [--automated-clean | --no-automated-clean] [--owner <owner>] [--lessee <lessee>] [--description <description>]",
"openstack baremetal node delete [-h] <node> [<node> ...]",
"openstack baremetal node deploy [-h] [--wait [<time-out>]] [--config-drive <config-drive>] [--deploy-steps <deploy-steps>] <node>",
"openstack baremetal node history get [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <node> <event>",
"openstack baremetal node history list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--long] <node>",
"openstack baremetal node inject nmi [-h] <node>",
"openstack baremetal node inspect [-h] [--wait [<time-out>]] <node>",
"openstack baremetal node list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--limit <limit>] [--marker <node>] [--sort <key>[:<direction>]] [--maintenance | --no-maintenance] [--retired | --no-retired] [--fault <fault>] [--associated | --unassociated] [--provision-state <provision state>] [--driver <driver>] [--resource-class <resource class>] [--conductor-group <conductor_group>] [--conductor <conductor>] [--chassis <chassis UUID>] [--owner <owner>] [--lessee <lessee>] [--description-contains <description_contains>] [--long | --fields <field> [<field> ...]]",
"openstack baremetal node maintenance set [-h] [--reason <reason>] <node>",
"openstack baremetal node maintenance unset [-h] <node>",
"openstack baremetal node manage [-h] [--wait [<time-out>]] <node>",
"openstack baremetal node passthru call [-h] [--arg <key=value>] [--http-method <http-method>] <node> <method>",
"openstack baremetal node passthru list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] <node>",
"openstack baremetal node power off [-h] [--power-timeout <power-timeout>] [--soft] <node>",
"openstack baremetal node power on [-h] [--power-timeout <power-timeout>] <node>",
"openstack baremetal node provide [-h] [--wait [<time-out>]] <node>",
"openstack baremetal node reboot [-h] [--soft] [--power-timeout <power-timeout>] <node>",
"openstack baremetal node rebuild [-h] [--wait [<time-out>]] [--config-drive <config-drive>] [--deploy-steps <deploy-steps>] <node>",
"openstack baremetal node remove trait [-h] [--all] <node> [<trait> ...]",
"openstack baremetal node rescue [-h] [--wait [<time-out>]] --rescue-password <rescue-password> <node>",
"openstack baremetal node secure boot off [-h] <node>",
"openstack baremetal node secure boot on [-h] <node>",
"openstack baremetal node set [-h] [--instance-uuid <uuid>] [--name <name>] [--chassis-uuid <chassis UUID>] [--driver <driver>] [--bios-interface <bios_interface> | --reset-bios-interface] [--boot-interface <boot_interface> | --reset-boot-interface] [--console-interface <console_interface> | --reset-console-interface] [--deploy-interface <deploy_interface> | --reset-deploy-interface] [--inspect-interface <inspect_interface> | --reset-inspect-interface] [--management-interface <management_interface> | --reset-management-interface] [--network-interface <network_interface> | --reset-network-interface] [--network-data <network data>] [--power-interface <power_interface> | --reset-power-interface] [--raid-interface <raid_interface> | --reset-raid-interface] [--rescue-interface <rescue_interface> | --reset-rescue-interface] [--storage-interface <storage_interface> | --reset-storage-interface] [--vendor-interface <vendor_interface> | --reset-vendor-interface] [--reset-interfaces] [--resource-class <resource_class>] [--conductor-group <conductor_group>] [--automated-clean | --no-automated-clean] [--protected] [--protected-reason <protected_reason>] [--retired] [--retired-reason <retired_reason>] [--target-raid-config <target_raid_config>] [--property <key=value>] [--extra <key=value>] [--driver-info <key=value>] [--instance-info <key=value>] [--owner <owner>] [--lessee <lessee>] [--description <description>] <node>",
"openstack baremetal node show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--instance] [--fields <field> [<field> ...]] <node>",
"openstack baremetal node trait list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] <node>",
"openstack baremetal node undeploy [-h] [--wait [<time-out>]] <node>",
"openstack baremetal node unrescue [-h] [--wait [<time-out>]] <node>",
"openstack baremetal node unset [-h] [--instance-uuid] [--name] [--resource-class] [--target-raid-config] [--property <key>] [--extra <key>] [--driver-info <key>] [--instance-info <key>] [--chassis-uuid] [--bios-interface] [--boot-interface] [--console-interface] [--deploy-interface] [--inspect-interface] [--network-data] [--management-interface] [--network-interface] [--power-interface] [--raid-interface] [--rescue-interface] [--storage-interface] [--vendor-interface] [--conductor-group] [--automated-clean] [--protected] [--protected-reason] [--retired] [--retired-reason] [--owner] [--lessee] [--description] <node>",
"openstack baremetal node validate [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] <node>",
"openstack baremetal node vif attach [-h] [--port-uuid <port-uuid>] [--vif-info <key=value>] <node> <vif-id>",
"openstack baremetal node vif detach [-h] <node> <vif-id>",
"openstack baremetal node vif list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] <node>",
"openstack baremetal port create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --node <uuid> [--uuid <uuid>] [--extra <key=value>] [--local-link-connection <key=value>] [-l <key=value>] [--pxe-enabled <boolean>] [--port-group <uuid>] [--physical-network <physical network>] [--is-smartnic] <address>",
"openstack baremetal port delete [-h] <port> [<port> ...]",
"openstack baremetal port group create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --node <uuid> [--address <mac-address>] [--name NAME] [--uuid UUID] [--extra <key=value>] [--mode MODE] [--property <key=value>] [--support-standalone-ports | --unsupport-standalone-ports]",
"openstack baremetal port group delete [-h] <port group> [<port group> ...]",
"openstack baremetal port group list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--limit <limit>] [--marker <port group>] [--sort <key>[:<direction>]] [--address <mac-address>] [--node <node>] [--long | --fields <field> [<field> ...]]",
"openstack baremetal port group set [-h] [--node <uuid>] [--address <mac-address>] [--name <name>] [--extra <key=value>] [--mode MODE] [--property <key=value>] [--support-standalone-ports | --unsupport-standalone-ports] <port group>",
"openstack baremetal port group show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--address] [--fields <field> [<field> ...]] <id>",
"openstack baremetal port group unset [-h] [--name] [--address] [--extra <key>] [--property <key>] <port group>",
"openstack baremetal port list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--address <mac-address>] [--node <node>] [--port-group <port group>] [--limit <limit>] [--marker <port>] [--sort <key>[:<direction>]] [--long | --fields <field> [<field> ...]]",
"openstack baremetal port set [-h] [--node <uuid>] [--address <address>] [--extra <key=value>] [--port-group <uuid>] [--local-link-connection <key=value>] [--pxe-enabled | --pxe-disabled] [--physical-network <physical network>] [--is-smartnic] <port>",
"openstack baremetal port show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--address] [--fields <field> [<field> ...]] <id>",
"openstack baremetal port unset [-h] [--extra <key>] [--port-group] [--physical-network] [--is-smartnic] <port>",
"openstack baremetal volume connector create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --node <uuid> --type <type> --connector-id <connector id> [--uuid <uuid>] [--extra <key=value>]",
"openstack baremetal volume connector delete [-h] <volume connector> [<volume connector> ...]",
"openstack baremetal volume connector list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--node <node>] [--limit <limit>] [--marker <volume connector>] [--sort <key>[:<direction>]] [--long | --fields <field> [<field> ...]]",
"openstack baremetal volume connector set [-h] [--node <uuid>] [--type <type>] [--connector-id <connector id>] [--extra <key=value>] <volume connector>",
"openstack baremetal volume connector show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--fields <field> [<field> ...]] <id>",
"openstack baremetal volume connector unset [-h] [--extra <key>] <volume connector>",
"openstack baremetal volume target create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --node <uuid> --type <volume type> [--property <key=value>] --boot-index <boot index> --volume-id <volume id> [--uuid <uuid>] [--extra <key=value>]",
"openstack baremetal volume target delete [-h] <volume target> [<volume target> ...]",
"openstack baremetal volume target list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--node <node>] [--limit <limit>] [--marker <volume target>] [--sort <key>[:<direction>]] [--long | --fields <field> [<field> ...]]",
"openstack baremetal volume target set [-h] [--node <uuid>] [--type <volume type>] [--property <key=value>] [--boot-index <boot index>] [--volume-id <volume id>] [--extra <key=value>] <volume target>",
"openstack baremetal volume target show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--fields <field> [<field> ...]] <id>",
"openstack baremetal volume target unset [-h] [--extra <key>] [--property <key>] <volume target>"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/command_line_interface_reference/baremetal |
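The volume target commands documented above follow a create, inspect, update, delete lifecycle. The shell sketch below strings them together using only the options listed in the tables and usage strings; it is an illustration rather than output from a real deployment, and every angle-bracket value is a placeholder for a UUID or key from your own environment.

# Register an iSCSI volume target for a node (all angle-bracket values are placeholders).
openstack baremetal volume target create \
  --node <node-uuid> \
  --type iscsi \
  --boot-index 0 \
  --volume-id <volume-uuid> \
  --property <key>=<value>

# List the volume targets that belong to the node, with full details.
openstack baremetal volume target list --node <node-uuid> --long

# Inspect, update, and finally unregister a specific volume target by its UUID.
openstack baremetal volume target show <target-uuid>
openstack baremetal volume target set --boot-index 1 <target-uuid>
openstack baremetal volume target unset --property <key> <target-uuid>
openstack baremetal volume target delete <target-uuid>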
Chapter 13. Distributed tracing | Chapter 13. Distributed tracing Distributed tracing allows you to track the progress of transactions between applications in a distributed system. In a microservices architecture, tracing tracks the progress of transactions between services. Trace data is useful for monitoring application performance and investigating issues with target systems and end-user applications. In AMQ Streams on Red Hat Enterprise Linux, tracing facilitates the end-to-end tracking of messages: from source systems to Kafka, and then from Kafka to target systems and applications. Tracing complements the available JMX metrics . How AMQ Streams supports tracing Support for tracing is provided for the following clients and components. Kafka clients: Kafka producers and consumers Kafka Streams API applications Kafka components: Kafka Connect Kafka Bridge MirrorMaker MirrorMaker 2.0 To enable tracing, you perform four high-level tasks: Enable a Jaeger tracer. Enable the Interceptors: For Kafka clients, you instrument your application code using the OpenTracing Apache Kafka Client Instrumentation library (included with AMQ Streams). For Kafka components, you set configuration properties for each component. Set tracing environment variables . Deploy the client or component. When instrumented, clients generate trace data. For example, when producing messages or writing offsets to the log. Traces are sampled according to a sampling strategy and then visualized in the Jaeger user interface. Note Tracing is not supported for Kafka brokers. Setting up tracing for applications and systems beyond AMQ Streams is outside the scope of this chapter. To learn more about this subject, search for "inject and extract" in the OpenTracing documentation . Outline of procedures To set up tracing for AMQ Streams, follow these procedures in order: Set up tracing for clients: Initialize a Jaeger tracer for Kafka clients Instrument producers and consumers for tracing Instrument Kafka Streams applications for tracing Set up tracing for MirrorMaker, MirrorMaker 2.0, and Kafka Connect: Enable tracing for MirrorMaker Enable tracing for MirrorMaker 2.0 Enable tracing for Kafka Connect Enable tracing for the Kafka Bridge Prerequisites The Jaeger backend components are deployed to your Kubernetes cluster. For deployment instructions, see the Jaeger documentation . 13.1. Overview of OpenTracing and Jaeger AMQ Streams uses the OpenTracing and Jaeger projects. OpenTracing is an API specification that is independent from the tracing or monitoring system. The OpenTracing APIs are used to instrument application code Instrumented applications generate traces for individual transactions across the distributed system Traces are composed of spans that define specific units of work over time Jaeger is a tracing system for microservices-based distributed systems. Jaeger implements the OpenTracing APIs and provides client libraries for instrumentation The Jaeger user interface allows you to query, filter, and analyze trace data Additional resources OpenTracing Jaeger 13.2. Setting up tracing for Kafka clients Initialize a Jaeger tracer to instrument your client applications for distributed tracing. 13.2.1. Initializing a Jaeger tracer for Kafka clients Configure and initialize a Jaeger tracer using a set of tracing environment variables . 
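Because the procedure below creates the tracer with Configuration.fromEnv(), the environment variables described in Section 13.5, "Environment variables for tracing" must be exported in the shell that starts the client application. The following is a minimal sketch only: the service name, agent host, agent port, and application JAR name are assumptions for a local jaeger-agent, and const with a parameter of 1 corresponds to the Constant strategy that samples all traces.

# Tracing configuration read by Configuration.fromEnv() when the tracer is created.
export JAEGER_SERVICE_NAME=my-kafka-client      # required; any descriptive service name
export JAEGER_AGENT_HOST=localhost              # assumed jaeger-agent hostname
export JAEGER_AGENT_PORT=6831                   # assumed jaeger-agent UDP port
export JAEGER_SAMPLER_TYPE=const                # Constant sampling strategy
export JAEGER_SAMPLER_PARAM=1                   # sample all traces
export JAEGER_REPORTER_LOG_SPANS=true           # also log reported spans

# Start the instrumented client application from the same shell (placeholder JAR name).
java -jar my-kafka-client.jar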
Procedure In each client application: Add Maven dependencies for Jaeger to the pom.xml file for the client application: <dependency> <groupId>io.jaegertracing</groupId> <artifactId>jaeger-client</artifactId> <version>1.5.0.redhat-00001</version> </dependency> Define the configuration of the Jaeger tracer using the tracing environment variables . Create the Jaeger tracer from the environment variables that you defined in step two: Tracer tracer = Configuration.fromEnv().getTracer(); Note For alternative ways to initialize a Jaeger tracer, see the Java OpenTracing library documentation. Register the Jaeger tracer as a global tracer: GlobalTracer.register(tracer); A Jaeger tracer is now initialized for the client application to use. 13.2.2. Instrumenting producers and consumers for tracing Use a Decorator pattern or Interceptors to instrument your Java producer and consumer application code for tracing. Procedure In the application code of each producer and consumer application: Add a Maven dependency for OpenTracing to the producer or consumer's pom.xml file. <dependency> <groupId>io.opentracing.contrib</groupId> <artifactId>opentracing-kafka-client</artifactId> <version>0.1.15.redhat-00004</version> </dependency> Instrument your client application code using either a Decorator pattern or Interceptors. To use a Decorator pattern: // Create an instance of the KafkaProducer: KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); // Create an instance of the TracingKafkaProducer: TracingKafkaProducer<Integer, String> tracingProducer = new TracingKafkaProducer<>(producer, tracer); // Send: tracingProducer.send(...); // Create an instance of the KafkaConsumer: KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); // Create an instance of the TracingKafkaConsumer: TracingKafkaConsumer<Integer, String> tracingConsumer = new TracingKafkaConsumer<>(consumer, tracer); // Subscribe: tracingConsumer.subscribe(Collections.singletonList("messages")); // Get messages: ConsumerRecords<Integer, String> records = tracingConsumer.poll(1000); // Retrieve SpanContext from polled record (consumer side): ConsumerRecord<Integer, String> record = ... SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer); To use Interceptors: // Register the tracer with GlobalTracer: GlobalTracer.register(tracer); // Add the TracingProducerInterceptor to the sender properties: senderProps.put(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingProducerInterceptor.class.getName()); // Create an instance of the KafkaProducer: KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); // Send: producer.send(...); // Add the TracingConsumerInterceptor to the consumer properties: consumerProps.put(ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingConsumerInterceptor.class.getName()); // Create an instance of the KafkaConsumer: KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); // Subscribe: consumer.subscribe(Collections.singletonList("messages")); // Get messages: ConsumerRecords<Integer, String> records = consumer.poll(1000); // Retrieve the SpanContext from a polled message (consumer side): ConsumerRecord<Integer, String> record = ... SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer); Custom span names in a Decorator pattern A span is a logical unit of work in Jaeger, with an operation name, start time, and duration. 
To use a Decorator pattern to instrument your producer and consumer applications, define custom span names by passing a BiFunction object as an additional argument when creating the TracingKafkaProducer and TracingKafkaConsumer objects. The OpenTracing Apache Kafka Client Instrumentation library includes several built-in span names. Example: Using custom span names to instrument client application code in a Decorator pattern // Create a BiFunction for the KafkaProducer that operates on (String operationName, ProducerRecord consumerRecord) and returns a String to be used as the name: BiFunction<String, ProducerRecord, String> producerSpanNameProvider = (operationName, producerRecord) -> "CUSTOM_PRODUCER_NAME"; // Create an instance of the KafkaProducer: KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); // Create an instance of the TracingKafkaProducer TracingKafkaProducer<Integer, String> tracingProducer = new TracingKafkaProducer<>(producer, tracer, producerSpanNameProvider); // Spans created by the tracingProducer will now have "CUSTOM_PRODUCER_NAME" as the span name. // Create a BiFunction for the KafkaConsumer that operates on (String operationName, ConsumerRecord consumerRecord) and returns a String to be used as the name: BiFunction<String, ConsumerRecord, String> consumerSpanNameProvider = (operationName, consumerRecord) -> operationName.toUpperCase(); // Create an instance of the KafkaConsumer: KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); // Create an instance of the TracingKafkaConsumer, passing in the consumerSpanNameProvider BiFunction: TracingKafkaConsumer<Integer, String> tracingConsumer = new TracingKafkaConsumer<>(consumer, tracer, consumerSpanNameProvider); // Spans created by the tracingConsumer will have the operation name as the span name, in upper-case. // "receive" -> "RECEIVE" Built-in span names When defining custom span names, you can use the following BiFunctions in the ClientSpanNameProvider class. If no spanNameProvider is specified, CONSUMER_OPERATION_NAME and PRODUCER_OPERATION_NAME are used. Table 13.1. BiFunctions to define custom span names BiFunction Description CONSUMER_OPERATION_NAME, PRODUCER_OPERATION_NAME Returns the operationName as the span name: "receive" for consumers and "send" for producers. CONSUMER_PREFIXED_OPERATION_NAME(String prefix), PRODUCER_PREFIXED_OPERATION_NAME(String prefix) Returns a String concatenation of prefix and operationName . CONSUMER_TOPIC, PRODUCER_TOPIC Returns the name of the topic that the message was sent to or retrieved from in the format (record.topic()) . PREFIXED_CONSUMER_TOPIC(String prefix), PREFIXED_PRODUCER_TOPIC(String prefix) Returns a String concatenation of prefix and the topic name in the format (record.topic()) . CONSUMER_OPERATION_NAME_TOPIC, PRODUCER_OPERATION_NAME_TOPIC Returns the operation name and the topic name: "operationName - record.topic()" . CONSUMER_PREFIXED_OPERATION_NAME_TOPIC(String prefix), PRODUCER_PREFIXED_OPERATION_NAME_TOPIC(String prefix) Returns a String concatenation of prefix and "operationName - record.topic()" . 13.2.3. Instrumenting Kafka Streams applications for tracing Instrument Kafka Streams applications for distributed tracing using a supplier interface. This enables the Interceptors in the application. Procedure In each Kafka Streams application: Add the opentracing-kafka-streams dependency to the Kafka Streams application's pom.xml file. 
<dependency> <groupId>io.opentracing.contrib</groupId> <artifactId>opentracing-kafka-streams</artifactId> <version>0.1.15.redhat-00004</version> </dependency> Create an instance of the TracingKafkaClientSupplier supplier interface: KafkaClientSupplier supplier = new TracingKafkaClientSupplier(tracer); Provide the supplier interface to KafkaStreams : KafkaStreams streams = new KafkaStreams(builder.build(), new StreamsConfig(config), supplier); streams.start(); 13.3. Setting up tracing for MirrorMaker and Kafka Connect This section describes how to configure MirrorMaker, MirrorMaker 2.0, and Kafka Connect for distributed tracing. You must enable a Jaeger tracer for each component. 13.3.1. Enabling tracing for MirrorMaker Enable distributed tracing for MirrorMaker by passing the Interceptor properties as consumer and producer configuration parameters. Messages are traced from the source cluster to the target cluster. The trace data records messages entering and leaving the MirrorMaker component. Procedure Configure and enable a Jaeger tracer. Edit the /opt/kafka/config/consumer.properties file. Add the following Interceptor property: consumer.interceptor.classes=io.opentracing.contrib.kafka.TracingConsumerInterceptor Edit the /opt/kafka/config/producer.properties file. Add the following Interceptor property: producer.interceptor.classes=io.opentracing.contrib.kafka.TracingProducerInterceptor Start MirrorMaker with the consumer and producer configuration files as parameters: su - kafka /opt/kafka/bin/kafka-mirror-maker.sh --consumer.config /opt/kafka/config/consumer.properties --producer.config /opt/kafka/config/producer.properties --num.streams=2 13.3.2. Enabling tracing for MirrorMaker 2.0 Enable distributed tracing for MirrorMaker 2.0 by defining the Interceptor properties in the MirrorMaker 2.0 properties file. Messages are traced between Kafka clusters. The trace data records messages entering and leaving the MirrorMaker 2.0 component. Procedure Configure and enable a Jaeger tracer. Edit the MirrorMaker 2.0 configuration properties file, ./config/connect-mirror-maker.properties , and add the following properties: header.converter=org.apache.kafka.connect.converters.ByteArrayConverter 1 consumer.interceptor.classes=io.opentracing.contrib.kafka.TracingConsumerInterceptor 2 producer.interceptor.classes=io.opentracing.contrib.kafka.TracingProducerInterceptor 1 Prevents Kafka Connect from converting message headers (containing trace IDs) to base64 encoding. This ensures that messages are the same in both the source and the target clusters. 2 Enables the Interceptors for MirrorMaker 2.0. Start MirrorMaker 2.0 using the instructions in Section 8.7, "Synchronizing data between Kafka clusters using MirrorMaker 2.0" . Additional resources Chapter 8, Using AMQ Streams with MirrorMaker 2.0 13.3.3. Enabling tracing for Kafka Connect Enable distributed tracing for Kafka Connect using configuration properties. Only messages produced and consumed by Kafka Connect itself are traced. To trace messages sent between Kafka Connect and external systems, you must configure tracing in the connectors for those systems. Procedure Configure and enable a Jaeger tracer. Edit the relevant Kafka Connect configuration file. If you are running Kafka Connect in standalone mode, edit the /opt/kafka/config/connect-standalone.properties file. If you are running Kafka Connect in distributed mode, edit the /opt/kafka/config/connect-distributed.properties file. 
Add the following properties to the configuration file: producer.interceptor.classes=io.opentracing.contrib.kafka.TracingProducerInterceptor consumer.interceptor.classes=io.opentracing.contrib.kafka.TracingConsumerInterceptor Save the configuration file. Set tracing environment variables and then run Kafka Connect in standalone or distributed mode. The Interceptors in Kafka Connect's internal consumers and producers are now enabled. Additional resources Section 13.5, "Environment variables for tracing" Section 7.1.3, "Running Kafka Connect in standalone mode" Section 7.2.3, "Running distributed Kafka Connect" 13.4. Enabling tracing for the Kafka Bridge Enable distributed tracing for the Kafka Bridge by editing the Kafka Bridge configuration file. You can then deploy a Kafka Bridge instance that is configured for distributed tracing to the host operating system. Traces are generated when: The Kafka Bridge sends messages to HTTP clients and consumes messages from HTTP clients HTTP clients send HTTP requests to send and receive messages through the Kafka Bridge To have end-to-end tracing, you must configure tracing in your HTTP clients. Procedure Edit the config/application.properties file in the Kafka Bridge installation directory. Remove the code comments from the following line: bridge.tracing=jaeger Save the configuration file. Run the bin/kafka_bridge_run.sh script using the configuration properties as a parameter: cd kafka-bridge-0.xy.x.redhat-0000x ./bin/kafka_bridge_run.sh --config-file=config/application.properties The Interceptors in the Kafka Bridge's internal consumers and producers are now enabled. 13.5. Environment variables for tracing Use these environment variables when configuring a Jaeger tracer for Kafka clients and components. Note The tracing environment variables are part of the Jaeger project and are subject to change. For the latest environment variables, see the Jaeger documentation . Table 13.2. Jaeger tracer environment variables Property Required Description JAEGER_SERVICE_NAME Yes The name of the Jaeger tracer service. JAEGER_AGENT_HOST No The hostname for communicating with the jaeger-agent through the User Datagram Protocol (UDP). JAEGER_AGENT_PORT No The port used for communicating with the jaeger-agent through UDP. JAEGER_ENDPOINT No The traces endpoint. Only define this variable if the client application will bypass the jaeger-agent and connect directly to the jaeger-collector . JAEGER_AUTH_TOKEN No The authentication token to send to the endpoint as a bearer token. JAEGER_USER No The username to send to the endpoint if using basic authentication. JAEGER_PASSWORD No The password to send to the endpoint if using basic authentication. JAEGER_PROPAGATION No A comma-separated list of formats to use for propagating the trace context. Defaults to the standard Jaeger format. Valid values are jaeger and b3 . JAEGER_REPORTER_LOG_SPANS No Indicates whether the reporter should also log the spans. JAEGER_REPORTER_MAX_QUEUE_SIZE No The reporter's maximum queue size. JAEGER_REPORTER_FLUSH_INTERVAL No The reporter's flush interval, in ms. Defines how frequently the Jaeger reporter flushes span batches. JAEGER_SAMPLER_TYPE No The sampling strategy to use for client traces: Constant Probabilistic Rate Limiting Remote (the default) To sample all traces, use the Constant sampling strategy with a parameter of 1. For an overview of the Jaeger architecture and client sampling configuration parameters, see the Jaeger documentation . 
JAEGER_SAMPLER_PARAM No The sampler parameter (number). JAEGER_SAMPLER_MANAGER_HOST_PORT No The hostname and port to use if a Remote sampling strategy is selected. JAEGER_TAGS No A comma-separated list of tracer-level tags that are added to all reported spans. The value can also refer to an environment variable using the format ${envVarName:default}. :default is optional and identifies a value to use if the environment variable cannot be found. | [
"<dependency> <groupId>io.jaegertracing</groupId> <artifactId>jaeger-client</artifactId> <version>1.5.0.redhat-00001</version> </dependency>",
"Tracer tracer = Configuration.fromEnv().getTracer();",
"GlobalTracer.register(tracer);",
"<dependency> <groupId>io.opentracing.contrib</groupId> <artifactId>opentracing-kafka-client</artifactId> <version>0.1.15.redhat-00004</version> </dependency>",
"// Create an instance of the KafkaProducer: KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); // Create an instance of the TracingKafkaProducer: TracingKafkaProducer<Integer, String> tracingProducer = new TracingKafkaProducer<>(producer, tracer); // Send: tracingProducer.send(...); // Create an instance of the KafkaConsumer: KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); // Create an instance of the TracingKafkaConsumer: TracingKafkaConsumer<Integer, String> tracingConsumer = new TracingKafkaConsumer<>(consumer, tracer); // Subscribe: tracingConsumer.subscribe(Collections.singletonList(\"messages\")); // Get messages: ConsumerRecords<Integer, String> records = tracingConsumer.poll(1000); // Retrieve SpanContext from polled record (consumer side): ConsumerRecord<Integer, String> record = SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer);",
"// Register the tracer with GlobalTracer: GlobalTracer.register(tracer); // Add the TracingProducerInterceptor to the sender properties: senderProps.put(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingProducerInterceptor.class.getName()); // Create an instance of the KafkaProducer: KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); // Send: producer.send(...); // Add the TracingConsumerInterceptor to the consumer properties: consumerProps.put(ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingConsumerInterceptor.class.getName()); // Create an instance of the KafkaConsumer: KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); // Subscribe: consumer.subscribe(Collections.singletonList(\"messages\")); // Get messages: ConsumerRecords<Integer, String> records = consumer.poll(1000); // Retrieve the SpanContext from a polled message (consumer side): ConsumerRecord<Integer, String> record = SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer);",
"// Create a BiFunction for the KafkaProducer that operates on (String operationName, ProducerRecord consumerRecord) and returns a String to be used as the name: BiFunction<String, ProducerRecord, String> producerSpanNameProvider = (operationName, producerRecord) -> \"CUSTOM_PRODUCER_NAME\"; // Create an instance of the KafkaProducer: KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); // Create an instance of the TracingKafkaProducer TracingKafkaProducer<Integer, String> tracingProducer = new TracingKafkaProducer<>(producer, tracer, producerSpanNameProvider); // Spans created by the tracingProducer will now have \"CUSTOM_PRODUCER_NAME\" as the span name. // Create a BiFunction for the KafkaConsumer that operates on (String operationName, ConsumerRecord consumerRecord) and returns a String to be used as the name: BiFunction<String, ConsumerRecord, String> consumerSpanNameProvider = (operationName, consumerRecord) -> operationName.toUpperCase(); // Create an instance of the KafkaConsumer: KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); // Create an instance of the TracingKafkaConsumer, passing in the consumerSpanNameProvider BiFunction: TracingKafkaConsumer<Integer, String> tracingConsumer = new TracingKafkaConsumer<>(consumer, tracer, consumerSpanNameProvider); // Spans created by the tracingConsumer will have the operation name as the span name, in upper-case. // \"receive\" -> \"RECEIVE\"",
"<dependency> <groupId>io.opentracing.contrib</groupId> <artifactId>opentracing-kafka-streams</artifactId> <version>0.1.15.redhat-00004</version> </dependency>",
"KafkaClientSupplier supplier = new TracingKafkaClientSupplier(tracer);",
"KafkaStreams streams = new KafkaStreams(builder.build(), new StreamsConfig(config), supplier); streams.start();",
"consumer.interceptor.classes=io.opentracing.contrib.kafka.TracingConsumerInterceptor",
"producer.interceptor.classes=io.opentracing.contrib.kafka.TracingProducerInterceptor",
"su - kafka /opt/kafka/bin/kafka-mirror-maker.sh --consumer.config /opt/kafka/config/consumer.properties --producer.config /opt/kafka/config/producer.properties --num.streams=2",
"header.converter=org.apache.kafka.connect.converters.ByteArrayConverter 1 consumer.interceptor.classes=io.opentracing.contrib.kafka.TracingConsumerInterceptor 2 producer.interceptor.classes=io.opentracing.contrib.kafka.TracingProducerInterceptor",
"producer.interceptor.classes=io.opentracing.contrib.kafka.TracingProducerInterceptor consumer.interceptor.classes=io.opentracing.contrib.kafka.TracingConsumerInterceptor",
"bridge.tracing=jaeger",
"cd kafka-bridge-0.xy.x.redhat-0000x ./bin/kafka_bridge_run.sh --config-file=config/application.properties"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.1/html/using_amq_streams_on_rhel/assembly-distributed-tracing-str |
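To tie the Kafka Connect tracing steps above together, the sketch below exports the tracing environment variables and then starts a standalone worker with the interceptor-enabled configuration file. The connect-standalone.sh script path and the connector properties file are assumptions based on a standard Kafka layout under /opt/kafka; run the commands as the same user that normally runs Kafka Connect.

# Tracing environment variables for the Kafka Connect worker (see the table of
# Jaeger environment variables above).
export JAEGER_SERVICE_NAME=kafka-connect
export JAEGER_AGENT_HOST=localhost              # assumed jaeger-agent hostname
export JAEGER_SAMPLER_TYPE=const                # Constant sampling strategy
export JAEGER_SAMPLER_PARAM=1                   # sample all traces

# Start Kafka Connect in standalone mode with the edited configuration file;
# <connector>.properties is a placeholder for your own connector configuration.
/opt/kafka/bin/connect-standalone.sh \
  /opt/kafka/config/connect-standalone.properties \
  <connector>.properties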
Chapter 4. Installing a cluster on OpenStack with Kuryr | Chapter 4. Installing a cluster on OpenStack with Kuryr Important Kuryr is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. In OpenShift Container Platform version 4.13, you can install a customized cluster on Red Hat OpenStack Platform (RHOSP) that uses Kuryr SDN. To customize the installation, modify parameters in the install-config.yaml before you install the cluster. 4.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You verified that OpenShift Container Platform 4.13 is compatible with your RHOSP version by using the Supported platforms for OpenShift clusters section. You can also compare platform support across different versions by viewing the OpenShift Container Platform on RHOSP support matrix . You have a storage service installed in RHOSP, such as block storage (Cinder) or object storage (Swift). Object storage is the recommended storage technology for OpenShift Container Platform registry cluster deployment. For more information, see Optimizing storage . You understand performance and scalability practices for cluster scaling, control plane sizing, and etcd. For more information, see Recommended practices for scaling the cluster . 4.2. About Kuryr SDN Important Kuryr is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. Kuryr is a container network interface (CNI) plugin solution that uses the Neutron and Octavia Red Hat OpenStack Platform (RHOSP) services to provide networking for pods and Services. Kuryr and OpenShift Container Platform integration is primarily designed for OpenShift Container Platform clusters running on RHOSP VMs. Kuryr improves the network performance by plugging OpenShift Container Platform pods into RHOSP SDN. In addition, it provides interconnectivity between pods and RHOSP virtual instances. Kuryr components are installed as pods in OpenShift Container Platform using the openshift-kuryr namespace: kuryr-controller - a single service instance installed on a master node. This is modeled in OpenShift Container Platform as a Deployment object. kuryr-cni - a container installing and configuring Kuryr as a CNI driver on each OpenShift Container Platform node. This is modeled in OpenShift Container Platform as a DaemonSet object. The Kuryr controller watches the OpenShift Container Platform API server for pod, service, and namespace create, update, and delete events. It maps the OpenShift Container Platform API calls to corresponding objects in Neutron and Octavia. 
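Because kuryr-controller is modeled as a Deployment and kuryr-cni as a DaemonSet in the openshift-kuryr namespace, both can be checked with standard oc commands once the cluster is installed. A minimal verification sketch (pod names and counts vary with cluster size):

# Confirm that the Kuryr controller Deployment and the per-node CNI DaemonSet exist.
oc get deployment,daemonset -n openshift-kuryr

# List the Kuryr pods: expect a kuryr-controller pod plus one kuryr-cni pod per node.
oc get pods -n openshift-kuryr -o wide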
This means that every network solution that implements the Neutron trunk port functionality can be used to back OpenShift Container Platform via Kuryr. This includes open source solutions such as Open vSwitch (OVS) and Open Virtual Network (OVN) as well as Neutron-compatible commercial SDNs. Kuryr is recommended for OpenShift Container Platform deployments on encapsulated RHOSP tenant networks to avoid double encapsulation, such as running an encapsulated OpenShift Container Platform SDN over an RHOSP network. If you use provider networks or tenant VLANs, you do not need to use Kuryr to avoid double encapsulation. The performance benefit is negligible. Depending on your configuration, though, using Kuryr to avoid having two overlays might still be beneficial. Kuryr is not recommended in deployments where all of the following criteria are true: The RHOSP version is less than 16. The deployment uses UDP services, or a large number of TCP services on few hypervisors. or The ovn-octavia Octavia driver is disabled. The deployment uses a large number of TCP services on few hypervisors. 4.3. Resource guidelines for installing OpenShift Container Platform on RHOSP with Kuryr When using Kuryr SDN, the pods, services, namespaces, and network policies are using resources from the RHOSP quota; this increases the minimum requirements. Kuryr also has some additional requirements on top of what a default install requires. Use the following quota to satisfy a default cluster's minimum requirements: Table 4.1. Recommended resources for a default OpenShift Container Platform cluster on RHOSP with Kuryr Resource Value Floating IP addresses 3 - plus the expected number of Services of LoadBalancer type Ports 1500 - 1 needed per Pod Routers 1 Subnets 250 - 1 needed per Namespace/Project Networks 250 - 1 needed per Namespace/Project RAM 112 GB vCPUs 28 Volume storage 275 GB Instances 7 Security groups 250 - 1 needed per Service and per NetworkPolicy Security group rules 1000 Server groups 2 - plus 1 for each additional availability zone in each machine pool Load balancers 100 - 1 needed per Service Load balancer listeners 500 - 1 needed per Service-exposed port Load balancer pools 500 - 1 needed per Service-exposed port A cluster might function with fewer than recommended resources, but its performance is not guaranteed. Important If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry. Important If you are using Red Hat OpenStack Platform (RHOSP) version 16 with the Amphora driver rather than the OVN Octavia driver, security groups are associated with service accounts instead of user projects. Take the following notes into consideration when setting resources: The number of ports that are required is larger than the number of pods. Kuryr uses ports pools to have pre-created ports ready to be used by pods and speed up the pods' booting time. Each network policy is mapped into an RHOSP security group, and depending on the NetworkPolicy spec, one or more rules are added to the security group. Each service is mapped to an RHOSP load balancer. Consider this requirement when estimating the number of security groups required for the quota. 
If you are using RHOSP version 15 or earlier, or the ovn-octavia driver , each load balancer has a security group with the user project. The quota does not account for load balancer resources (such as VM resources), but you must consider these resources when you decide the RHOSP deployment's size. The default installation will have more than 50 load balancers; the clusters must be able to accommodate them. If you are using RHOSP version 16 with the OVN Octavia driver enabled, only one load balancer VM is generated; services are load balanced through OVN flows. An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine. To enable Kuryr SDN, your environment must meet the following requirements: Run RHOSP 13+. Have Overcloud with Octavia. Use Neutron Trunk ports extension. Use openvswitch firewall driver if ML2/OVS Neutron driver is used instead of ovs-hybrid . 4.3.1. Increasing quota When using Kuryr SDN, you must increase quotas to satisfy the Red Hat OpenStack Platform (RHOSP) resources used by pods, services, namespaces, and network policies. Procedure Increase the quotas for a project by running the following command: USD sudo openstack quota set --secgroups 250 --secgroup-rules 1000 --ports 1500 --subnets 250 --networks 250 <project> 4.3.2. Configuring Neutron Kuryr CNI leverages the Neutron Trunks extension to plug containers into the Red Hat OpenStack Platform (RHOSP) SDN, so you must use the trunks extension for Kuryr to properly work. In addition, if you leverage the default ML2/OVS Neutron driver, the firewall must be set to openvswitch instead of ovs_hybrid so that security groups are enforced on trunk subports and Kuryr can properly handle network policies. 4.3.3. Configuring Octavia Kuryr SDN uses Red Hat OpenStack Platform (RHOSP)'s Octavia LBaaS to implement OpenShift Container Platform services. Thus, you must install and configure Octavia components in RHOSP to use Kuryr SDN. To enable Octavia, you must include the Octavia service during the installation of the RHOSP Overcloud, or upgrade the Octavia service if the Overcloud already exists. The following steps for enabling Octavia apply to both a clean install of the Overcloud or an Overcloud update. Note The following steps only capture the key pieces required during the deployment of RHOSP when dealing with Octavia. It is also important to note that registry methods vary. This example uses the local registry method. Procedure If you are using the local registry, create a template to upload the images to the registry. For example: (undercloud) USD openstack overcloud container image prepare \ -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml \ --namespace=registry.access.redhat.com/rhosp13 \ --push-destination=<local-ip-from-undercloud.conf>:8787 \ --prefix=openstack- \ --tag-from-label {version}-{product-version} \ --output-env-file=/home/stack/templates/overcloud_images.yaml \ --output-images-file /home/stack/local_registry_images.yaml Verify that the local_registry_images.yaml file contains the Octavia images. For example: ... 
- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-api:13.0-43 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-health-manager:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-housekeeping:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-worker:13.0-44 push_destination: <local-ip-from-undercloud.conf>:8787 Note The Octavia container versions vary depending upon the specific RHOSP release installed. Pull the container images from registry.redhat.io to the Undercloud node: (undercloud) USD sudo openstack overcloud container image upload \ --config-file /home/stack/local_registry_images.yaml \ --verbose This may take some time depending on the speed of your network and Undercloud disk. Install or update your Overcloud environment with Octavia: USD openstack overcloud deploy --templates \ -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml \ -e octavia_timeouts.yaml Note This command only includes the files associated with Octavia; it varies based on your specific installation of RHOSP. See the RHOSP documentation for further information. For more information on customizing your Octavia installation, see installation of Octavia using Director . Note When leveraging Kuryr SDN, the Overcloud installation requires the Neutron trunk extension. This is available by default on director deployments. Use the openvswitch firewall instead of the default ovs-hybrid when the Neutron backend is ML2/OVS. There is no need for modifications if the backend is ML2/OVN. 4.3.3.1. The Octavia OVN Driver Octavia supports multiple provider drivers through the Octavia API. To see all available Octavia provider drivers, on a command line, enter: USD openstack loadbalancer provider list Example output +---------+-------------------------------------------------+ | name | description | +---------+-------------------------------------------------+ | amphora | The Octavia Amphora driver. | | octavia | Deprecated alias of the Octavia Amphora driver. | | ovn | Octavia OVN driver. | +---------+-------------------------------------------------+ Beginning with RHOSP version 16, the Octavia OVN provider driver ( ovn ) is supported on OpenShift Container Platform on RHOSP deployments. ovn is an integration driver for the load balancing that Octavia and OVN provide. It supports basic load balancing capabilities, and is based on OpenFlow rules. The driver is automatically enabled in Octavia by Director on deployments that use OVN Neutron ML2. The Amphora provider driver is the default driver. If ovn is enabled, however, Kuryr uses it. If Kuryr uses ovn instead of Amphora, it offers the following benefits: Decreased resource requirements. Kuryr does not require a load balancer VM for each service. Reduced network latency. Increased service creation speed by using OpenFlow rules instead of a VM for each service. Distributed load balancing actions across all nodes instead of centralized on Amphora VMs. You can configure your cluster to use the Octavia OVN driver after your RHOSP cloud is upgraded from version 13 to version 16. 4.3.4. Known limitations of installing with Kuryr Using OpenShift Container Platform with Kuryr SDN has several known limitations. 
RHOSP general limitations Using OpenShift Container Platform with Kuryr SDN has several limitations that apply to all versions and environments: Service objects with the NodePort type are not supported. Clusters that use the OVN Octavia provider driver support Service objects for which the .spec.selector property is unspecified only if the .subsets.addresses property of the Endpoints object includes the subnet of the nodes or pods. If the subnet on which machines are created is not connected to a router, or if the subnet is connected, but the router has no external gateway set, Kuryr cannot create floating IPs for Service objects with type LoadBalancer . Configuring the sessionAffinity=ClientIP property on Service objects does not have an effect. Kuryr does not support this setting. RHOSP version limitations Using OpenShift Container Platform with Kuryr SDN has several limitations that depend on the RHOSP version. RHOSP versions before 16 use the default Octavia load balancer driver (Amphora). This driver requires that one Amphora load balancer VM is deployed per OpenShift Container Platform service. Creating too many services can cause you to run out of resources. Deployments of later versions of RHOSP that have the OVN Octavia driver disabled also use the Amphora driver. They are subject to the same resource concerns as earlier versions of RHOSP. Kuryr SDN does not support automatic unidling by a service. RHOSP upgrade limitations As a result of the RHOSP upgrade process, the Octavia API might be changed, and upgrades to the Amphora images that are used for load balancers might be required. You can address API changes on an individual basis. If the Amphora image is upgraded, the RHOSP operator can handle existing load balancer VMs in two ways: Upgrade each VM by triggering a load balancer failover . Leave responsibility for upgrading the VMs to users. If the operator takes the first option, there might be short downtimes during failovers. If the operator takes the second option, the existing load balancers will not support upgraded Octavia API features, like UDP listeners. In this case, users must recreate their Services to use these features. 4.3.5. Control plane machines By default, the OpenShift Container Platform installation process creates three control plane machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 4.3.6. Compute machines By default, the OpenShift Container Platform installation process creates three compute machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 8 GB memory and 2 vCPUs At least 100 GB storage space from the RHOSP quota Tip Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can. 4.3.7. Bootstrap machine During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After the production control plane is ready, the bootstrap machine is deprovisioned. The bootstrap machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 4.3.8. Load balancing requirements for user-provisioned infrastructure Important Deployment with User-Managed Load Balancers is a Technology Preview feature only. 
Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Before you install OpenShift Container Platform, you can provision your own API and application ingress load balancing infrastructure to use in place of the default, internal load balancing solution. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 4.2. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. 
Configure the following ports on both the front and back of the load balancers: Table 4.3. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 4.3.8.1. Example load balancer configuration for clusters that are deployed with user-managed load balancers This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for clusters that are deployed with user-managed load balancers. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 4.1. Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 
2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 3 Port 22623 handles the machine config server traffic and points to the control plane machines. 5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 4.4. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.13, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 4.5. Enabling Swift on RHOSP Swift is operated by a user account with the swiftoperator role. Add the role to an account before you run the installation program. Important If the Red Hat OpenStack Platform (RHOSP) object storage service , commonly known as Swift, is available, OpenShift Container Platform uses it as the image registry storage. If it is unavailable, the installation program relies on the RHOSP block storage service, commonly known as Cinder. If Swift is present and you want to use it, you must enable access to it. If it is not present, or if you do not want to use it, skip this section. Important RHOSP 17 sets the rgw_max_attr_size parameter of Ceph RGW to 256 characters. This setting causes issues with uploading container images to the OpenShift Container Platform registry. You must set the value of rgw_max_attr_size to at least 1024 characters. Before installation, check if your RHOSP deployment is affected by this problem. If it is, reconfigure Ceph RGW. Prerequisites You have a RHOSP administrator account on the target environment. The Swift service is installed. On Ceph RGW , the account in url option is enabled. 
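If you are not certain whether Swift is deployed, you can check for an object-store entry in the service catalog before you proceed. The following check is a suggestion only; it assumes that your RHOSP command-line environment is already configured with credentials for the target cloud:
USD openstack catalog list | grep -i object-store
If the command prints nothing, Swift is not registered in the catalog and the installation program falls back to Cinder for image registry storage.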
Procedure To enable Swift on RHOSP: As an administrator in the RHOSP CLI, add the swiftoperator role to the account that will access Swift: USD openstack role add --user <user> --project <project> swiftoperator Your RHOSP deployment can now use Swift for the image registry. 4.6. Verifying external network access The OpenShift Container Platform installation process requires external network access. You must provide an external network value to it, or deployment fails. Before you begin the process, verify that a network with the external router type exists in Red Hat OpenStack Platform (RHOSP). Prerequisites Configure OpenStack's networking service to have DHCP agents forward instances' DNS queries Procedure Using the RHOSP CLI, verify the name and ID of the 'External' network: USD openstack network list --long -c ID -c Name -c "Router Type" Example output +--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+ A network with an external router type appears in the network list. If at least one does not, see Creating a default floating IP network and Creating a default provider network . Important If the external network's CIDR range overlaps one of the default network ranges, you must change the matching network ranges in the install-config.yaml file before you start the installation process. The default network ranges are: Network Range machineNetwork 10.0.0.0/16 serviceNetwork 172.30.0.0/16 clusterNetwork 10.128.0.0/14 Warning If the installation program finds multiple networks with the same name, it sets one of them at random. To avoid this behavior, create unique names for resources in RHOSP. Note If the Neutron trunk service plugin is enabled, a trunk port is created by default. For more information, see Neutron trunk port . 4.7. Defining parameters for the installation program The OpenShift Container Platform installation program relies on a file that is called clouds.yaml . The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, log in information, and authorization service URLs. Procedure Create the clouds.yaml file: If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it. Important Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml . If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml , see Config files in the RHOSP documentation. clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0' If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication: Copy the certificate authority file to your machine. Add the cacerts key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate: clouds: shiftstack: ... 
cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem" Tip After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config Place the clouds.yaml file in one of the following locations: The value of the OS_CLIENT_CONFIG_FILE environment variable The current directory A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml The installation program searches for clouds.yaml in that order. 4.8. Setting OpenStack Cloud Controller Manager options Optionally, you can edit the OpenStack Cloud Controller Manager (CCM) configuration for your cluster. This configuration controls how OpenShift Container Platform interacts with Red Hat OpenStack Platform (RHOSP). For a complete list of configuration parameters, see the "OpenStack Cloud Controller Manager reference guide" page in the "Installing on OpenStack" documentation. Procedure If you have not already generated manifest files for your cluster, generate them by running the following command: USD openshift-install --dir <destination_directory> create manifests In a text editor, open the cloud-provider configuration manifest file. For example: USD vi openshift/manifests/cloud-provider-config.yaml Modify the options according to the CCM reference guide. Configuring Octavia for load balancing is a common case for clusters that do not use Kuryr. For example: #... [LoadBalancer] use-octavia=true 1 lb-provider = "amphora" 2 floating-network-id="d3deb660-4190-40a3-91f1-37326fe6ec4a" 3 create-monitor = True 4 monitor-delay = 10s 5 monitor-timeout = 10s 6 monitor-max-retries = 1 7 #... 1 This property enables Octavia integration. 2 This property sets the Octavia provider that your load balancer uses. It accepts "ovn" or "amphora" as values. If you choose to use OVN, you must also set lb-method to SOURCE_IP_PORT . 3 This property is required if you want to use multiple external networks with your cluster. The cloud provider creates floating IP addresses on the network that is specified here. 4 This property controls whether the cloud provider creates health monitors for Octavia load balancers. Set the value to True to create health monitors. As of RHOSP 16.2, this feature is only available for the Amphora provider. 5 This property sets the frequency with which endpoints are monitored. The value must be in the time.ParseDuration() format. This property is required if the value of the create-monitor property is True . 6 This property sets the time that monitoring requests are open before timing out. The value must be in the time.ParseDuration() format. This property is required if the value of the create-monitor property is True . 7 This property defines how many successful monitoring requests are required before a load balancer is marked as online. The value must be an integer. This property is required if the value of the create-monitor property is True . Important Prior to saving your changes, verify that the file is structured correctly. Clusters might fail if properties are not placed in the appropriate section. Important You must set the value of the create-monitor property to True if you use services that have the value of the .spec.externalTrafficPolicy property set to Local . The OVN Octavia provider in RHOSP 16.2 does not support health monitors. 
Therefore, services that have ETP parameter values set to Local might not respond when the lb-provider value is set to "ovn" . Important For installations that use Kuryr, Kuryr handles relevant services. There is no need to configure Octavia load balancing in the cloud provider. Save the changes to the file and proceed with installation. Tip You can update your cloud provider configuration after you run the installer. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config After you save your changes, your cluster will take some time to reconfigure itself. The process is complete if none of your nodes have a SchedulingDisabled status. 4.9. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 4.10. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP). Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. 
However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Note Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command: USD rm -rf ~/.powervs At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select openstack as the platform to target. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for installing the cluster. Specify the floating IP address to use for external access to the OpenShift API. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of this base and will also include the cluster name. Enter a name for your cluster. The name must be 14 or fewer characters long. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 4.10.1. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Note Kuryr installations default to HTTP proxies. Prerequisites For Kuryr installations on restricted networks that use the Proxy object, the proxy must be able to reply to the router that the cluster uses. To add a static route for the proxy configuration, from a command line as the root user, enter: USD ip route add <cluster_network_cidr> via <installer_subnet_gateway> The restricted subnet must have a gateway that is defined and available to be linked to the Router resource that Kuryr creates. You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. 
For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 4.11. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 4.11.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 4.4. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String baseDomain The base domain of your cloud provider. 
The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The string must be 14 characters or fewer long. platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 4.11.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 4.5. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The Red Hat OpenShift Networking network plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. 
An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 4.11.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 4.6. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array cpuPartitioningMode Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String compute: hyperthreading: Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. 
alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . featureSet Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane: hyperthreading: Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . sshKey The SSH key to authenticate access to your cluster machines. 
Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. 4.11.4. Additional Red Hat OpenStack Platform (RHOSP) configuration parameters Additional RHOSP configuration parameters are described in the following table: Table 4.7. Additional RHOSP parameters Parameter Description Values compute.platform.openstack.rootVolume.size For compute machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Integer, for example 30 . compute.platform.openstack.rootVolume.type For compute machines, the root volume's type. String, for example performance . controlPlane.platform.openstack.rootVolume.size For control plane machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Integer, for example 30 . controlPlane.platform.openstack.rootVolume.type For control plane machines, the root volume's type. String, for example performance . platform.openstack.cloud The name of the RHOSP cloud to use from the list of clouds in the clouds.yaml file. String, for example MyCloud . platform.openstack.externalNetwork The RHOSP external network name to be used for installation. String, for example external . platform.openstack.computeFlavor The RHOSP flavor to use for control plane and compute machines. This property is deprecated. To use a flavor as the default for all machine pools, add it as the value of the type key in the platform.openstack.defaultMachinePlatform property. You can also set a flavor value for each machine pool individually. String, for example m1.xlarge . 4.11.5. Optional RHOSP configuration parameters Optional RHOSP configuration parameters are described in the following table: Table 4.8. Optional RHOSP parameters Parameter Description Values compute.platform.openstack.additionalNetworkIDs Additional networks that are associated with compute machines. Allowed address pairs are not created for additional networks. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . compute.platform.openstack.additionalSecurityGroupIDs Additional security groups that are associated with compute machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7 . compute.platform.openstack.zones RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installation program relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. For example, ["zone-1", "zone-2"] . compute.platform.openstack.rootVolume.zones For compute machines, the availability zone to install root volumes on. If you do not set a value for this parameter, the installation program selects the default availability zone. A list of strings, for example ["zone-1", "zone-2"] . 
compute.platform.openstack.serverGroupPolicy Server group policy to apply to the group that will contain the compute machines in the pool. You cannot change server group policies or affiliations after creation. Supported options include anti-affinity , soft-affinity , and soft-anti-affinity . The default value is soft-anti-affinity . An affinity policy prevents migrations and therefore affects RHOSP upgrades. The affinity policy is not supported. If you use a strict anti-affinity policy, an additional RHOSP host is required during instance migration. A server group policy to apply to the machine pool. For example, soft-affinity . controlPlane.platform.openstack.additionalNetworkIDs Additional networks that are associated with control plane machines. Allowed address pairs are not created for additional networks. Additional networks that are attached to a control plane machine are also attached to the bootstrap node. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . controlPlane.platform.openstack.additionalSecurityGroupIDs Additional security groups that are associated with control plane machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7 . controlPlane.platform.openstack.zones RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installation program relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. For example, ["zone-1", "zone-2"] . controlPlane.platform.openstack.rootVolume.zones For control plane machines, the availability zone to install root volumes on. If you do not set this value, the installation program selects the default availability zone. A list of strings, for example ["zone-1", "zone-2"] . controlPlane.platform.openstack.serverGroupPolicy Server group policy to apply to the group that will contain the control plane machines in the pool. You cannot change server group policies or affiliations after creation. Supported options include anti-affinity , soft-affinity , and soft-anti-affinity . The default value is soft-anti-affinity . An affinity policy prevents migrations, and therefore affects RHOSP upgrades. The affinity policy is not supported. If you use a strict anti-affinity policy, an additional RHOSP host is required during instance migration. A server group policy to apply to the machine pool. For example, soft-affinity . platform.openstack.clusterOSImage The location from which the installation program downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network. An HTTP or HTTPS URL, optionally with an SHA-256 checksum. For example, http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d . The value can also be the name of an existing Glance image, for example my-rhcos . platform.openstack.clusterOSImageProperties Properties to add to the installer-uploaded ClusterOSImage in Glance. This property is ignored if platform.openstack.clusterOSImage is set to an existing Glance image. 
You can use this property to exceed the default persistent volume (PV) limit for RHOSP of 26 PVs per node. To exceed the limit, set the hw_scsi_model property value to virtio-scsi and the hw_disk_bus value to scsi . You can also use this property to enable the QEMU guest agent by including the hw_qemu_guest_agent property with a value of yes . A list of key-value string pairs. For example, ["hw_scsi_model": "virtio-scsi", "hw_disk_bus": "scsi"] . platform.openstack.defaultMachinePlatform The default machine pool platform configuration. { "type": "ml.large", "rootVolume": { "size": 30, "type": "performance" } } platform.openstack.ingressFloatingIP An existing floating IP address to associate with the Ingress port. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . platform.openstack.apiFloatingIP An existing floating IP address to associate with the API load balancer. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . platform.openstack.externalDNS IP addresses for external DNS servers that cluster instances use for DNS resolution. A list of IP addresses as strings. For example, ["8.8.8.8", "192.168.1.12"] . platform.openstack.loadbalancer Whether or not to use the default, internal load balancer. If the value is set to UserManaged , this default load balancer is disabled so that you can deploy a cluster that uses an external, user-managed load balancer. If the parameter is not set, or if the value is OpenShiftManagedDefault , the cluster uses the default load balancer. UserManaged or OpenShiftManagedDefault . platform.openstack.machinesSubnet The UUID of a RHOSP subnet that the cluster's nodes use. Nodes and virtual IP (VIP) ports are created on this subnet. The first item in networking.machineNetwork must match the value of machinesSubnet . If you deploy to a custom subnet, you cannot specify an external DNS server to the OpenShift Container Platform installer. Instead, add DNS to the subnet in RHOSP . A UUID as a string. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . 4.11.6. RHOSP parameters for failure domains Important RHOSP failure domains is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Red Hat OpenStack Platform (RHOSP) deployments do not have a single implementation of failure domains. Instead, availability zones are defined individually for each service, such as the compute service, Nova; the networking service, Neutron; and the storage service, Cinder. Beginning with OpenShift Container Platform 4.13, there is a unified definition of failure domains for RHOSP deployments that covers all supported availability zone types. You can use failure domains to control related aspects of Nova, Neutron, and Cinder configurations from a single place. In RHOSP, a port describes a network connection and maps to an interface inside a compute machine. 
A port also: Is defined by a network or by one or more subnets Connects a machine to one or more subnets Failure domains group the services of your deployment by using ports. If you use failure domains, each machine connects to: The portTarget object with the ID control-plane while that object exists. All non-control-plane portTarget objects within its own failure domain. All networks in the machine pool's additionalNetworkIDs list. To configure failure domains for a machine pool, edit availability zone and port target parameters under controlPlane.platform.openstack.failureDomains . Table 4.9. RHOSP parameters for failure domains Parameter Description Values platform.openstack.failuredomains.computeAvailabilityZone An availability zone for the server. If not specified, the cluster default is used. The name of the availability zone. For example, nova-1 . platform.openstack.failuredomains.storageAvailabilityZone An availability zone for the root volume. If not specified, the cluster default is used. The name of the availability zone. For example, cinder-1 . platform.openstack.failuredomains.portTargets A list of portTarget objects, each of which defines a network connection to attach to machines within a failure domain. A list of portTarget objects. platform.openstack.failuredomains.portTargets.portTarget.id The ID of an individual port target. To select that port target as the first network for machines, set the value of this parameter to control-plane . If this parameter has a different value, it is ignored. control-plane or an arbitrary string. platform.openstack.failuredomains.portTargets.portTarget.network Required. The name or ID of the network to attach to machines in the failure domain. A network object that contains either a name or UUID. For example: network: id: 8db6a48e-375b-4caa-b20b-5b9a7218bfe6 or: network: name: my-network-1 platform.openstack.failuredomains.portTargets.portTarget.fixedIPs Subnets to allocate fixed IP addresses to. These subnets must exist within the same network as the port. A list of subnet objects. Note You cannot combine zone fields and failure domains. If you want to use failure domains, the controlPlane.zone and controlPlane.rootVolume.zone fields must be left unset. 4.11.7. Custom subnets in RHOSP deployments Optionally, you can deploy a cluster on a Red Hat OpenStack Platform (RHOSP) subnet of your choice. The subnet's UUID is passed as the value of platform.openstack.machinesSubnet in the install-config.yaml file. This subnet is used as the cluster's primary subnet. By default, nodes and ports are created on it. You can create nodes and ports on a different RHOSP subnet by setting the value of the platform.openstack.machinesSubnet property to the subnet's UUID. Before you run the OpenShift Container Platform installer with a custom subnet, verify that your configuration meets the following requirements: The subnet that is used by platform.openstack.machinesSubnet has DHCP enabled. The CIDR of platform.openstack.machinesSubnet matches the CIDR of networking.machineNetwork . The installation program user has permission to create ports on this network, including ports with fixed IP addresses. Clusters that use custom subnets have the following limitations: If you plan to install a cluster that uses floating IP addresses, the platform.openstack.machinesSubnet subnet must be attached to a router that is connected to the externalNetwork network.
If the platform.openstack.machinesSubnet value is set in the install-config.yaml file, the installation program does not create a private network or subnet for your RHOSP machines. You cannot use the platform.openstack.externalDNS property at the same time as a custom subnet. To add DNS to a cluster that uses a custom subnet, configure DNS on the RHOSP network. Note By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network's CIDR block. To override these default values, set values for platform.openstack.apiVIPs and platform.openstack.ingressVIPs that are outside of the DHCP allocation pool. Important The CIDR ranges for networks are not adjustable after cluster installation. Red Hat does not provide direct guidance on determining the range during cluster installation because it requires careful consideration of the number of created pods per namespace. 4.11.8. Sample customized install-config.yaml file for RHOSP with Kuryr To deploy with Kuryr SDN instead of the default OVN-Kubernetes network plugin, you must modify the install-config.yaml file to include Kuryr as the desired networking.networkType . This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform (RHOSP) customization options. Important This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program. apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 1 networkType: Kuryr 2 platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 trunkSupport: true 3 octaviaSupport: true 4 pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... 1 The Amphora Octavia driver creates two ports per load balancer. As a result, the service subnet that the installer creates is twice the size of the CIDR that is specified as the value of the serviceNetwork property. The larger range is required to prevent IP address conflicts. 2 The cluster network plugin to install. The supported values are Kuryr , OVNKubernetes , and OpenShiftSDN . The default value is OVNKubernetes . 3 4 Both trunkSupport and octaviaSupport are automatically discovered by the installer, so there is no need to set them. But if your environment does not meet both requirements, Kuryr SDN will not properly work. Trunks are needed to connect the pods to the RHOSP network and Octavia is required to create the OpenShift Container Platform services. 4.11.9. Example installation configuration section that uses failure domains Important RHOSP failure domains is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The following section of an install-config.yaml file demonstrates the use of failure domains in a cluster to deploy on Red Hat OpenStack Platform (RHOSP): # ... 
controlPlane: name: master platform: openstack: type: m1.large failureDomains: - computeAvailabilityZone: 'nova-1' storageAvailabilityZone: 'cinder-1' portTargets: - id: control-plane network: id: 8db6a48e-375b-4caa-b20b-5b9a7218bfe6 - computeAvailabilityZone: 'nova-2' storageAvailabilityZone: 'cinder-2' portTargets: - id: control-plane network: id: 39a7b82a-a8a4-45a4-ba5a-288569a6edd1 - computeAvailabilityZone: 'nova-3' storageAvailabilityZone: 'cinder-3' portTargets: - id: control-plane network: id: 8e4b4e0d-3865-4a9b-a769-559270271242 featureSet: TechPreviewNoUpgrade # ... 4.11.10. Installation configuration for a cluster on OpenStack with a user-managed load balancer Important Deployment on OpenStack with User-Managed Load Balancers is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The following example install-config.yaml file demonstrates how to configure a cluster that uses an external, user-managed load balancer rather than the default internal load balancer. apiVersion: v1 baseDomain: mydomain.test compute: - name: worker platform: openstack: type: m1.xlarge replicas: 3 controlPlane: name: master platform: openstack: type: m1.xlarge replicas: 3 metadata: name: mycluster networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 192.168.10.0/24 platform: openstack: cloud: mycloud machinesSubnet: 8586bf1a-cc3c-4d40-bdf6-c243decc603a 1 apiVIPs: - 192.168.10.5 ingressVIPs: - 192.168.10.7 loadBalancer: type: UserManaged 2 featureSet: TechPreviewNoUpgrade 3 1 Regardless of which load balancer you use, the load balancer is deployed to this subnet. 2 The UserManaged value indicates that you are using a user-managed load balancer. 3 Because user-managed load balancers are in Technology Preview, you must include the TechPreviewNoUpgrade value to deploy a cluster that uses a user-managed load balancer. 4.11.11. Cluster deployment on RHOSP provider networks You can deploy your OpenShift Container Platform clusters on Red Hat OpenStack Platform (RHOSP) with a primary network interface on a provider network. Provider networks are commonly used to give projects direct access to a public network that can be used to reach the internet. You can also share provider networks among projects as part of the network creation process. RHOSP provider networks map directly to an existing physical network in the data center. A RHOSP administrator must create them. In the following example, OpenShift Container Platform workloads are connected to a data center by using a provider network: OpenShift Container Platform clusters that are installed on provider networks do not require tenant networks or floating IP addresses. The installer does not create these resources during installation. Example provider network types include flat (untagged) and VLAN (802.1Q tagged). Note A cluster can support as many provider network connections as the network type allows. For example, VLAN networks typically support up to 4096 connections. You can learn more about provider and tenant networks in the RHOSP documentation .
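Before you select a provider network for the primary interface, it can be useful to confirm its type and sharing settings. The following command is a sketch only; it assumes administrative credentials and uses a placeholder network name, but the selected columns are standard Neutron provider attributes:
USD openstack network show <provider_network_name> \
    -c "provider:network_type" \
    -c "provider:physical_network" \
    -c "provider:segmentation_id" \
    -c shared \
    -c "router:external"
A flat network reports no segmentation ID, and a VLAN network reports its 802.1Q tag.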
4.11.11.1. RHOSP provider network requirements for cluster installation Before you install an OpenShift Container Platform cluster, your Red Hat OpenStack Platform (RHOSP) deployment and provider network must meet a number of conditions: The RHOSP networking service (Neutron) is enabled and accessible through the RHOSP networking API. The RHOSP networking service has the port security and allowed address pairs extensions enabled . The provider network can be shared with other tenants. Tip Use the openstack network create command with the --share flag to create a network that can be shared. The RHOSP project that you use to install the cluster must own the provider network, as well as an appropriate subnet. Tip To create a network for a project that is named "openshift," enter the following command USD openstack network create --project openshift To create a subnet for a project that is named "openshift," enter the following command USD openstack subnet create --project openshift To learn more about creating networks on RHOSP, read the provider networks documentation . If the cluster is owned by the admin user, you must run the installer as that user to create ports on the network. Important Provider networks must be owned by the RHOSP project that is used to create the cluster. If they are not, the RHOSP Compute service (Nova) cannot request a port from that network. Verify that the provider network can reach the RHOSP metadata service IP address, which is 169.254.169.254 by default. Depending on your RHOSP SDN and networking service configuration, you might need to provide the route when you create the subnet. For example: USD openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2 ... Optional: To secure the network, create role-based access control (RBAC) rules that limit network access to a single project. 4.11.11.2. Deploying a cluster that has a primary interface on a provider network You can deploy an OpenShift Container Platform cluster that has its primary network interface on an Red Hat OpenStack Platform (RHOSP) provider network. Prerequisites Your Red Hat OpenStack Platform (RHOSP) deployment is configured as described by "RHOSP provider network requirements for cluster installation". Procedure In a text editor, open the install-config.yaml file. Set the value of the platform.openstack.apiVIPs property to the IP address for the API VIP. Set the value of the platform.openstack.ingressVIPs property to the IP address for the Ingress VIP. Set the value of the platform.openstack.machinesSubnet property to the UUID of the provider network subnet. Set the value of the networking.machineNetwork.cidr property to the CIDR block of the provider network subnet. Important The platform.openstack.apiVIPs and platform.openstack.ingressVIPs properties must both be unassigned IP addresses from the networking.machineNetwork.cidr block. Section of an installation configuration file for a cluster that relies on a RHOSP provider network ... platform: openstack: apiVIPs: 1 - 192.0.2.13 ingressVIPs: 2 - 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # ... networking: machineNetwork: - cidr: 192.0.2.0/24 1 2 In OpenShift Container Platform 4.12 and later, the apiVIP and ingressVIP configuration settings are deprecated. Instead, use a list format to enter values in the apiVIPs and ingressVIPs configuration settings. 
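If you are unsure which values to enter, one way to confirm the subnet UUID and CIDR before you edit the install-config.yaml file is to query the RHOSP networking service; the subnet name in this sketch is a placeholder: USD openstack subnet show <provider_subnet> -c id -c cidr -c allocation_pools Choose API and Ingress VIPs from addresses in that CIDR that sit outside any allocation pool and are not already assigned to a port.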
Warning You cannot set the platform.openstack.externalNetwork or platform.openstack.externalDNS parameters while using a provider network for the primary network interface. When you deploy the cluster, the installer uses the install-config.yaml file to deploy the cluster on the provider network. Tip You can add additional networks, including provider networks, to the platform.openstack.additionalNetworkIDs list. After you deploy your cluster, you can attach pods to additional networks. For more information, see Understanding multiple networks . 4.11.12. Kuryr ports pools A Kuryr ports pool maintains a number of ports on standby for pod creation. Keeping ports on standby minimizes pod creation time. Without ports pools, Kuryr must explicitly request port creation or deletion whenever a pod is created or deleted. The Neutron ports that Kuryr uses are created in subnets that are tied to namespaces. These pod ports are also added as subports to the primary port of OpenShift Container Platform cluster nodes. Because Kuryr keeps each namespace in a separate subnet, a separate ports pool is maintained for each namespace-worker pair. Prior to installing a cluster, you can set the following parameters in the cluster-network-03-config.yml manifest file to configure ports pool behavior: The enablePortPoolsPrepopulation parameter controls pool prepopulation, which forces Kuryr to add Neutron ports to the pools when the first pod that is configured to use the dedicated network for pods is created in a namespace. The default value is false . The poolMinPorts parameter is the minimum number of free ports that are kept in the pool. The default value is 1 . The poolMaxPorts parameter is the maximum number of free ports that are kept in the pool. A value of 0 disables that upper bound. This is the default setting. If your OpenStack port quota is low, or you have a limited number of IP addresses on the pod network, consider setting this option to ensure that unneeded ports are deleted. The poolBatchPorts parameter defines the maximum number of Neutron ports that can be created at once. The default value is 3 . 4.11.13. Adjusting Kuryr ports pools during installation During installation, you can configure how Kuryr manages Red Hat OpenStack Platform (RHOSP) Neutron ports to control the speed and efficiency of pod creation. Prerequisites Create and modify the install-config.yaml file. Procedure From a command line, create the manifest files: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the name of the directory that contains the install-config.yaml file for your cluster. Create a file that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: USD touch <installation_directory>/manifests/cluster-network-03-config.yml 1 1 For <installation_directory> , specify the directory name that contains the manifests/ directory for your cluster. After creating the file, several network configuration files are in the manifests/ directory, as shown: USD ls <installation_directory>/manifests/cluster-network-* Example output cluster-network-01-crd.yml cluster-network-02-config.yml cluster-network-03-config.yml Open the cluster-network-03-config.yml file in an editor, and enter a custom resource (CR) that describes the Cluster Network Operator configuration that you want: USD oc edit networks.operator.openshift.io cluster Edit the settings to meet your requirements. 
The following file is provided as an example: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: - 172.30.0.0/16 defaultNetwork: type: Kuryr kuryrConfig: enablePortPoolsPrepopulation: false 1 poolMinPorts: 1 2 poolBatchPorts: 3 3 poolMaxPorts: 5 4 openstackServiceNetwork: 172.30.0.0/15 5 1 Set enablePortPoolsPrepopulation to true to make Kuryr create new Neutron ports when the first pod on the network for pods is created in a namespace. This setting raises the Neutron ports quota but can reduce the time that is required to spawn pods. The default value is false . 2 Kuryr creates new ports for a pool if the number of free ports in that pool is lower than the value of poolMinPorts . The default value is 1 . 3 poolBatchPorts controls the number of new ports that are created if the number of free ports is lower than the value of poolMinPorts . The default value is 3 . 4 If the number of free ports in a pool is higher than the value of poolMaxPorts , Kuryr deletes them until the number matches that value. Setting this value to 0 disables this upper bound, preventing pools from shrinking. The default value is 0 . 5 The openStackServiceNetwork parameter defines the CIDR range of the network from which IP addresses are allocated to RHOSP Octavia's LoadBalancers. If this parameter is used with the Amphora driver, Octavia takes two IP addresses from this network for each load balancer: one for OpenShift and the other for VRRP connections. Because these IP addresses are managed by OpenShift Container Platform and Neutron respectively, they must come from different pools. Therefore, the value of openStackServiceNetwork must be at least twice the size of the value of serviceNetwork , and the value of serviceNetwork must overlap entirely with the range that is defined by openStackServiceNetwork . The CNO verifies that VRRP IP addresses that are taken from the range that is defined by this parameter do not overlap with the range that is defined by the serviceNetwork parameter. If this parameter is not set, the CNO uses an expanded value of serviceNetwork that is determined by decrementing the prefix size by 1. Save the cluster-network-03-config.yml file, and exit the text editor. Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory while creating the cluster. 4.12. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. 
Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 4.13. Enabling access to the environment At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments. You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally. 4.13.1. Enabling access with floating IP addresses Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API and cluster applications. Procedure Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP: USD openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network> Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP: USD openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network> Add records that follow these patterns to your DNS server for the API and Ingress FIPs: api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. 
IN A <apps_FIP> Note If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file: <api_floating_ip> api.<cluster_name>.<base_domain> <application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain> <application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain> application_floating_ip integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain> The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. You can also use the kubectl or oc . You can access the user applications by using the additional entries pointing to the <application_floating_ip>. This action makes the API and applications accessible to only you, which is not suitable for production deployment, but does allow installation for development and testing. Add the FIPs to the install-config.yaml file as the values of the following parameters: platform.openstack.ingressFloatingIP platform.openstack.apiFloatingIP If you use these values, you must also enter an external network as the value of the platform.openstack.externalNetwork parameter in the install-config.yaml file. Tip You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration. 4.13.2. Completing installation without floating IP addresses You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses. In the install-config.yaml file, do not define the following parameters: platform.openstack.ingressFloatingIP platform.openstack.apiFloatingIP If you cannot provide an external network, you can also leave platform.openstack.externalNetwork blank. If you do not provide a value for platform.openstack.externalNetwork , a router is not created for you, and, without additional action, the installer will fail to retrieve an image from Glance. You must configure external connectivity on your own. If you run the installer from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines. Note You can enable name resolution by creating DNS records for the API and Ingress ports. For example: api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP> If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing. 4.14. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Verify the cloud provider account on your host has the correct permissions to deploy the cluster. 
An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 4.15. Verifying cluster status You can verify your OpenShift Container Platform cluster's status during or after installation. Procedure In the cluster environment, export the administrator's kubeconfig file: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. View the control plane and compute machines created after a deployment: USD oc get nodes View your cluster's version: USD oc get clusterversion View your Operators' status: USD oc get clusteroperator View all running pods in the cluster: USD oc get pods -A 4.16. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. 
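As an optional check that is not part of the documented procedure, you can confirm that the oc binary is available on your local system before you continue, for example: USD oc version --client The client version is printed without contacting the cluster.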
Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 4.17. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.13, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 4.18. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . If you need to enable external access to node ports, configure ingress cluster traffic by using a node port . If you did not configure RHOSP to accept application traffic over floating IP addresses , configure RHOSP access with floating IP addresses . | [
"sudo openstack quota set --secgroups 250 --secgroup-rules 1000 --ports 1500 --subnets 250 --networks 250 <project>",
"(undercloud) USD openstack overcloud container image prepare -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml --namespace=registry.access.redhat.com/rhosp13 --push-destination=<local-ip-from-undercloud.conf>:8787 --prefix=openstack- --tag-from-label {version}-{product-version} --output-env-file=/home/stack/templates/overcloud_images.yaml --output-images-file /home/stack/local_registry_images.yaml",
"- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-api:13.0-43 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-health-manager:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-housekeeping:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-worker:13.0-44 push_destination: <local-ip-from-undercloud.conf>:8787",
"(undercloud) USD sudo openstack overcloud container image upload --config-file /home/stack/local_registry_images.yaml --verbose",
"openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml -e octavia_timeouts.yaml",
"openstack loadbalancer provider list",
"+---------+-------------------------------------------------+ | name | description | +---------+-------------------------------------------------+ | amphora | The Octavia Amphora driver. | | octavia | Deprecated alias of the Octavia Amphora driver. | | ovn | Octavia OVN driver. | +---------+-------------------------------------------------+",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s",
"openstack role add --user <user> --project <project> swiftoperator",
"openstack network list --long -c ID -c Name -c \"Router Type\"",
"+--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+",
"clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'",
"clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"",
"oc edit configmap -n openshift-config cloud-provider-config",
"openshift-install --dir <destination_directory> create manifests",
"vi openshift/manifests/cloud-provider-config.yaml",
"# [LoadBalancer] use-octavia=true 1 lb-provider = \"amphora\" 2 floating-network-id=\"d3deb660-4190-40a3-91f1-37326fe6ec4a\" 3 create-monitor = True 4 monitor-delay = 10s 5 monitor-timeout = 10s 6 monitor-max-retries = 1 7 #",
"oc edit configmap -n openshift-config cloud-provider-config",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"rm -rf ~/.powervs",
"ip route add <cluster_network_cidr> via <installer_subnet_gateway>",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"{ \"type\": \"ml.large\", \"rootVolume\": { \"size\": 30, \"type\": \"performance\" } }",
"network: id: 8db6a48e-375b-4caa-b20b-5b9a7218bfe6",
"network: name: my-network-1",
"apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 1 networkType: Kuryr 2 platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 trunkSupport: true 3 octaviaSupport: true 4 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"controlPlane: name: master platform: openstack: type: m1.large failureDomains: - computeAvailabilityZone: 'nova-1' storageAvailabilityZone: 'cinder-1' portTargets: - id: control-plane network: id: 8db6a48e-375b-4caa-b20b-5b9a7218bfe6 - computeAvailabilityZone: 'nova-2' storageAvailabilityZone: 'cinder-2' portTargets: - id: control-plane network: id: 39a7b82a-a8a4-45a4-ba5a-288569a6edd1 - computeAvailabilityZone: 'nova-3' storageAvailabilityZone: 'cinder-3' portTargets: - id: control-plane network: id: 8e4b4e0d-3865-4a9b-a769-559270271242 featureSet: TechPreviewNoUpgrade",
"apiVersion: v1 baseDomain: mydomain.test compute: - name: worker platform: openstack: type: m1.xlarge replicas: 3 controlPlane: name: master platform: openstack: type: m1.xlarge replicas: 3 metadata: name: mycluster networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 192.168.10.0/24 platform: openstack: cloud: mycloud machinesSubnet: 8586bf1a-cc3c-4d40-bdf6-c243decc603a 1 apiVIPs: - 192.168.10.5 ingressVIPs: - 192.168.10.7 loadBalancer: type: UserManaged 2 featureSet: TechPreviewNoUpgrade 3",
"openstack network create --project openshift",
"openstack subnet create --project openshift",
"openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2",
"platform: openstack: apiVIPs: 1 - 192.0.2.13 ingressVIPs: 2 - 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # networking: machineNetwork: - cidr: 192.0.2.0/24",
"./openshift-install create manifests --dir <installation_directory> 1",
"touch <installation_directory>/manifests/cluster-network-03-config.yml 1",
"ls <installation_directory>/manifests/cluster-network-*",
"cluster-network-01-crd.yml cluster-network-02-config.yml cluster-network-03-config.yml",
"oc edit networks.operator.openshift.io cluster",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: - 172.30.0.0/16 defaultNetwork: type: Kuryr kuryrConfig: enablePortPoolsPrepopulation: false 1 poolMinPorts: 1 2 poolBatchPorts: 3 3 poolMaxPorts: 5 4 openstackServiceNetwork: 172.30.0.0/15 5",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>",
"api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>",
"api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc get nodes",
"oc get clusterversion",
"oc get clusteroperator",
"oc get pods -A",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_openstack/installing-openstack-installer-kuryr |
Chapter 3. Considerations for Red Hat Gluster Storage | Chapter 3. Considerations for Red Hat Gluster Storage 3.1. Firewall and Port Access Red Hat Gluster Storage requires access to a number of ports in order to work properly. Ensure that port access is available as indicated in Section 3.1.2, "Port Access Requirements" . 3.1.1. Configuring the Firewall Firewall configuration tools differ between Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7. For Red Hat Enterprise Linux 6, use the iptables command to open a port: Important Red Hat Gluster Storage is not supported on Red Hat Enterprise Linux 6 (RHEL 6) from 3.5 Batch Update 1 onwards. See Version Details table in section Red Hat Gluster Storage Software Components and Versions of the Installation Guide For Red Hat Enterprise Linux 7, if default ports are not already in use by other services, it is usually simpler to add a service rather than open a port: However, if the default ports are already in use, you can open a specific port with the following command: For example: 3.1.2. Port Access Requirements Table 3.1. Open the following ports on all storage servers Connection source TCP Ports UDP Ports Recommended for Used for Any authorized network entity with a valid SSH key 22 - All configurations Remote backup using geo-replication Any authorized network entity; be cautious not to clash with other RPC services. 111 111 All configurations RPC port mapper and RPC bind Any authorized SMB/CIFS client 139 and 445 137 and 138 Sharing storage using SMB/CIFS SMB/CIFS protocol Any authorized NFS clients 2049 2049 Sharing storage using Gluster NFS or NFS-Ganesha Exports using NFS protocol All servers in the Samba-CTDB cluster 4379 - Sharing storage using SMB and Gluster NFS CTDB Any authorized network entity 24007 - All configurations Management processes using glusterd Any authorized network entity 55555 - All configurations Gluster events daemon If you are upgrading from a version of Red Hat Gluster Storage to the latest version 3.5.4, the port used for glusterevents daemon should be modified to be in the ephemral range. NFSv3 clients 662 662 Sharing storage using NFS-Ganesha and Gluster NFS statd NFSv3 clients 32803 32803 Sharing storage using NFS-Ganesha and Gluster NFS NLM protocol NFSv3 clients sending mount requests - 32769 Sharing storage using Gluster NFS Gluster NFS MOUNT protocol NFSv3 clients sending mount requests 20048 20048 Sharing storage using NFS-Ganesha NFS-Ganesha MOUNT protocol NFS clients 875 875 Sharing storage using NFS-Ganesha NFS-Ganesha RQUOTA protocol (fetching quota information) Servers in pacemaker/corosync cluster 2224 - Sharing storage using NFS-Ganesha pcsd Servers in pacemaker/corosync cluster 3121 - Sharing storage using NFS-Ganesha pacemaker_remote Servers in pacemaker/corosync cluster - 5404 and 5405 Sharing storage using NFS-Ganesha corosync Servers in pacemaker/corosync cluster 21064 - Sharing storage using NFS-Ganesha dlm Any authorized network entity 49152 - 49664 - All configurations Brick communication ports. The total number of ports required depends on the number of bricks on the node. One port is required for each brick on the machine. Gluster Clients 1023 or 49152 - Applicable when system ports are already being used in the machines. Communication between brick and client processes. Table 3.2. 
Open the following ports on NFS-Ganesha and Gluster NFS storage clients Connection source TCP Ports UDP Ports Recommended for Used for NFSv3 servers 662 662 Sharing storage using NFS-Ganesha and Gluster NFS statd NFSv3 servers 32803 32803 Sharing storage using NFS-Ganesha and Gluster NFS NLM protocol | [
"iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 5667 -j ACCEPT service iptables save",
"firewall-cmd --zone= zone_name --add-service=glusterfs firewall-cmd --zone= zone_name --add-service=glusterfs --permanent",
"firewall-cmd --zone= zone_name --add-port= port / protocol firewall-cmd --zone= zone_name --add-port= port / protocol --permanent",
"firewall-cmd --zone=public --add-port=5667/tcp firewall-cmd --zone=public --add-port=5667/tcp --permanent"
] | https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/chap-Getting_Started |
Chapter 6. Preparing to perform an EUS-to-EUS update | Chapter 6. Preparing to perform an EUS-to-EUS update Due to fundamental Kubernetes design, all OpenShift Container Platform updates between minor versions must be serialized. You must update from OpenShift Container Platform <4.y> to <4.y+1>, and then to <4.y+2>. You cannot update from OpenShift Container Platform <4.y> to <4.y+2> directly. However, administrators who want to update between two Extended Update Support (EUS) versions can do so incurring only a single reboot of non-control plane hosts. Important EUS-to-EUS updates are only viable between even-numbered minor versions of OpenShift Container Platform. There are a number of caveats to consider when attempting an EUS-to-EUS update. EUS-to-EUS updates are only offered after updates between all versions involved have been made available in stable channels. If you encounter issues during or after upgrading to the odd-numbered minor version but before upgrading to the even-numbered version, then remediation of those issues may require that non-control plane hosts complete the update to the odd-numbered version before moving forward. You can do a partial update by updating the worker or custom pool nodes to accommodate the time it takes for maintenance. You can complete the update process during multiple maintenance windows by pausing at intermediate steps. However, plan to complete the entire update within 60 days. This is critical to ensure that normal cluster automation processes are completed including those associated with certificate rotation. Until the machine config pools are unpaused and the update is complete, some features and bugs fixes in <4.y+1> and <4.y+2> of OpenShift Container Platform are not available. All the clusters might update using EUS channels for a conventional update without pools paused, but only clusters with non control-plane MachineConfigPools objects can do EUS-to-EUS update with pools paused. 6.1. EUS-to-EUS update The following procedure pauses all non-master machine config pools and performs updates from OpenShift Container Platform 4.8 to 4.9 to 4.10, then unpauses the previously paused machine config pools. Following this procedure reduces the total update duration and the number of times worker nodes are restarted. Prerequisites Review the release notes for OpenShift Container Platform 4.9 and 4.10 Review the release notes and product lifecycles for any layered products and Operator Lifecycle Manager (OLM) Operators. Some may require updates either before or during an EUS-to-EUS update. 6.1.1. EUS-to-EUS update using the web console Prerequisites Verify that machine config pools are unpaused. Have access to the web console as a user with admin privileges. Procedure Using the Administrator perspective on the web console, update any Operator Lifecycle Manager (OLM) Operators to the versions that are compatible with your intended updated version. You can find more information on how to perform this action in "Updating installed Operators"; see "Additional resources". Verify that all machine config pools display a status of Up to date and that no machine config pool displays a status of UPDATING . To view the status of all machine config pools, click Compute MachineConfigPools and review the contents of the Update status column. Note If your machine config pools have an Updating status, please wait for this status to change to Up to date . This process could take several minutes. Set your channel to eus-<4.y+2> . 
To set your channel, click Administration Cluster Settings Channel . You can edit your channel by clicking on the current hyperlinked channel. Pause all worker machine pools except for the master pool. You can perform this action on the MachineConfigPools tab under the Compute page. Select the vertical ellipses to the machine config pool you'd like to pause and click Pause updates . Update to version <4.y+1> and complete up to the Save step. You can find more information on how to perform these actions in "Updating a cluster by using the web console"; see "Additional resources". Ensure that the <4.y+1> updates are complete by viewing the Last completed version of your cluster. You can find this information on the Cluster Settings page under the Details tab. If necessary, update your OLM Operators by using the Administrator perspective on the web console. You can find more information on how to perform these actions in "Updating installed Operators"; see "Additional resources". Update to version <4.y+2> and complete up to the Save step. You can find more information on how to perform these actions in "Updating a cluster by using the web console"; see "Additional resources". Ensure that the <4.y+2> update is complete by viewing the Last completed version of your cluster. You can find this information on the Cluster Settings page under the Details tab. Unpause all previously paused machine config pools. You can perform this action on the MachineConfigPools tab under the Compute page. Select the vertical ellipses to the machine config pool you'd like to unpause and click Unpause updates . Important If pools are not unpaused, the cluster is not permitted to upgrade to any future minor versions, and maintenance tasks such as certificate rotation are inhibited. This puts the cluster at risk for future degradation. Verify that your previously paused pools are updated and that your cluster has completed the update to version <4.y+2>. You can verify that your pools have updated on the MachineConfigPools tab under the Compute page by confirming that the Update status has a value of Up to date . You can verify that your cluster has completed the update by viewing the Last completed version of your cluster. You can find this information on the Cluster Settings page under the Details tab. Additional resources Preparing for an Operator update Updating a cluster by using the web console Updating installed Operators 6.1.2. EUS-to-EUS update using the CLI Prerequisites Verify that machine config pools are unpaused. Update the OpenShift CLI ( oc ) to the target version before each update. Important It is highly discouraged to skip this prerequisite. If the OpenShift CLI ( oc ) is not updated to the target version before your update, unexpected issues may occur. Procedure Using the Administrator perspective on the web console, update any Operator Lifecycle Manager (OLM) Operators to the versions that are compatible with your intended updated version. You can find more information on how to perform this action in "Updating installed Operators"; see "Additional resources". Verify that all machine config pools display a status of UPDATED and that no machine config pool displays a status of UPDATING . 
To view the status of all machine config pools, run the following command: USD oc get mcp Example output NAME CONFIG UPDATED UPDATING master rendered-master-ecbb9582781c1091e1c9f19d50cf836c True False worker rendered-worker-00a3f0c68ae94e747193156b491553d5 True False Your current version is <4.y>, and your intended version to update is <4.y+2>. Change to the eus-<4.y+2> channel by running the following command: USD oc adm upgrade channel eus-<4.y+2> Note If you receive an error message indicating that eus-<4.y+2> is not one of the available channels, this indicates that Red Hat is still rolling out EUS version updates. This rollout process generally takes 45-90 days starting at the GA date. Pause all worker machine pools except for the master pool by running the following command: USD oc patch mcp/worker --type merge --patch '{"spec":{"paused":true}}' Note You cannot pause the master pool. Update to the latest version by running the following command: USD oc adm upgrade --to-latest Example output Updating to latest version <4.y+1.z> Review the cluster version to ensure that the updates are complete by running the following command: USD oc adm upgrade Example output Cluster version is <4.y+1.z> ... Update to version <4.y+2> by running the following command: USD oc adm upgrade --to-latest Retrieve the cluster version to ensure that the <4.y+2> updates are complete by running the following command: USD oc adm upgrade Example output Cluster version is <4.y+1.z> ... To update your worker nodes to <4.y+2>, unpause all previously paused machine config pools by running the following command: USD oc patch mcp/worker --type merge --patch '{"spec":{"paused":false}}' Important If pools are not unpaused, the cluster is not permitted to update to any future minor versions, and maintenance tasks such as certificate rotation are inhibited. This puts the cluster at risk for future degradation. Verify that your previously paused pools are updated and that the update to version <4.y+2> is complete by running the following command: USD oc get mcp Example output NAME CONFIG UPDATED UPDATING master rendered-master-52da4d2760807cb2b96a3402179a9a4c True False worker rendered-worker-4756f60eccae96fb9dcb4c392c69d497 True False Additional resources Updating installed Operators 6.1.3. EUS-to-EUS update for layered products and Operators installed through Operator Lifecycle Manager In addition to the EUS-to-EUS update steps mentioned for the web console and CLI, there are additional steps to consider when performing EUS-to-EUS updates for clusters with the following: Layered products Operators installed through Operator Lifecycle Manager (OLM) What is a layered product? Layered products refer to products that are made of multiple underlying products that are intended to be used together and cannot be broken into individual subscriptions. For examples of layered OpenShift Container Platform products, see Layered Offering On OpenShift . As you perform an EUS-to-EUS update for the clusters of layered products and those of Operators that have been installed through OLM, you must complete the following: Ensure that all of your Operators previously installed through OLM are updated to their latest version in their latest channel. Updating the Operators ensures that they have a valid update path when the default OperatorHub catalogs switch from the current minor version to the next minor version during a cluster update. For information on how to update your Operators, see "Preparing for an Operator update" in "Additional resources".
Confirm the cluster version compatibility between the current and intended Operator versions. You can verify which versions your OLM Operators are compatible with by using the Red Hat OpenShift Container Platform Operator Update Information Checker . As an example, here are the steps to perform an EUS-to-EUS update from <4.y> to <4.y+2> for OpenShift Data Foundation (ODF). This can be done through the CLI or web console. For information on how to update clusters through your desired interface, see "EUS-to-EUS update using the web console" and "EUS-to-EUS update using the CLI" in "Additional resources". Example workflow Pause the worker machine pools. Upgrade OpenShift <4.y> to OpenShift <4.y+1>. Upgrade ODF <4.y> to ODF <4.y+1>. Upgrade OpenShift <4.y+1> to OpenShift <4.y+2>. Upgrade to ODF <4.y+2>. Unpause the worker machine pools. Note The upgrade to ODF <4.y+2> can happen before or after worker machine pools have been unpaused. Additional resources Preparing for an Operator update EUS-to-EUS update using the web console EUS-to-EUS update using the CLI | [
"oc get mcp",
"NAME CONFIG UPDATED UPDATING master rendered-master-ecbb9582781c1091e1c9f19d50cf836c True False worker rendered-worker-00a3f0c68ae94e747193156b491553d5 True False",
"oc adm upgrade channel eus-<4.y+2>",
"oc patch mcp/worker --type merge --patch '{\"spec\":{\"paused\":true}}'",
"oc adm upgrade --to-latest",
"Updating to latest version <4.y+1.z>",
"oc adm upgrade",
"Cluster version is <4.y+1.z>",
"oc adm upgrade --to-latest",
"oc adm upgrade",
"Cluster version is <4.y+1.z>",
"oc patch mcp/worker --type merge --patch '{\"spec\":{\"paused\":false}}'",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING master rendered-master-52da4d2760807cb2b96a3402179a9a4c True False worker rendered-worker-4756f60eccae96fb9dcb4c392c69d497 True False"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/updating_clusters/preparing-eus-eus-upgrade |
Chapter 3. Installation and update | Chapter 3. Installation and update 3.1. About OpenShift Container Platform installation The OpenShift Container Platform installation program offers four methods for deploying a cluster which are detailed in the following list: Interactive : You can deploy a cluster with the web-based Assisted Installer . This is an ideal approach for clusters with networks connected to the internet. The Assisted Installer is the easiest way to install OpenShift Container Platform, it provides smart defaults, and it performs pre-flight validations before installing the cluster. It also provides a RESTful API for automation and advanced configuration scenarios. Local Agent-based : You can deploy a cluster locally with the Agent-based Installer for disconnected environments or restricted networks. It provides many of the benefits of the Assisted Installer, but you must download and configure the Agent-based Installer first. Configuration is done with a command-line interface. This approach is ideal for disconnected environments. Automated : You can deploy a cluster on installer-provisioned infrastructure. The installation program uses each cluster host's baseboard management controller (BMC) for provisioning. You can deploy clusters in connected or disconnected environments. Full control : You can deploy a cluster on infrastructure that you prepare and maintain, which provides maximum customizability. You can deploy clusters in connected or disconnected environments. Each method deploys a cluster with the following characteristics: Highly available infrastructure with no single points of failure, which is available by default. Administrators can control what updates are applied and when. 3.1.1. About the installation program You can use the installation program to deploy each type of cluster. The installation program generates the main assets, such as Ignition config files for the bootstrap, control plane, and compute machines. You can start an OpenShift Container Platform cluster with these three machine configurations, provided you correctly configured the infrastructure. The OpenShift Container Platform installation program uses a set of targets and dependencies to manage cluster installations. The installation program has a set of targets that it must achieve, and each target has a set of dependencies. Because each target is only concerned with its own dependencies, the installation program can act to achieve multiple targets in parallel with the ultimate target being a running cluster. The installation program recognizes and uses existing components instead of running commands to create them again because the program meets the dependencies. Figure 3.1. OpenShift Container Platform installation targets and dependencies 3.1.2. About Red Hat Enterprise Linux CoreOS (RHCOS) Post-installation, each cluster machine uses Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. RHCOS is the immutable container host version of Red Hat Enterprise Linux (RHEL) and features a RHEL kernel with SELinux enabled by default. RHCOS includes the kubelet , which is the Kubernetes node agent, and the CRI-O container runtime, which is optimized for Kubernetes. Every control plane machine in an OpenShift Container Platform 4.17 cluster must use RHCOS, which includes a critical first-boot provisioning tool called Ignition. This tool enables the cluster to configure the machines. 
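To give a sense of what Ignition consumes, an Ignition config is a JSON document. The fragment below is an illustrative sketch, not a file produced by the installation program; it shows the general shape of a config that authorizes an SSH key for the core user:
{
  "ignition": { "version": "3.2.0" },
  "passwd": {
    "users": [
      { "name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA..."] }
    ]
  }
}
The Ignition config files that the installation program generates are far larger and are not intended to be written by hand.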
Operating system updates are delivered as a bootable container image, using OSTree as a backend, that is deployed across the cluster by the Machine Config Operator. Actual operating system changes are made in-place on each machine as an atomic operation by using rpm-ostree . Together, these technologies enable OpenShift Container Platform to manage the operating system like it manages any other application on the cluster, by in-place upgrades that keep the entire platform up to date. These in-place updates can reduce the burden on operations teams. If you use RHCOS as the operating system for all cluster machines, the cluster manages all aspects of its components and machines, including the operating system. Because of this, only the installation program and the Machine Config Operator can change machines. The installation program uses Ignition config files to set the exact state of each machine, and the Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. 3.1.3. Supported platforms for OpenShift Container Platform clusters In OpenShift Container Platform 4.17, you can install a cluster that uses installer-provisioned infrastructure on the following platforms: Amazon Web Services (AWS) Bare metal Google Cloud Platform (GCP) IBM Cloud(R) Microsoft Azure Microsoft Azure Stack Hub Nutanix Red Hat OpenStack Platform (RHOSP) The latest OpenShift Container Platform release supports both the latest RHOSP long-life release and intermediate release. For complete RHOSP release compatibility, see the OpenShift Container Platform on RHOSP support matrix . VMware vSphere For these clusters, all machines, including the computer that you run the installation process on, must have direct internet access to pull images for platform containers and provide telemetry data to Red Hat. Important After installation, the following changes are not supported: Mixing cloud provider platforms. Mixing cloud provider components. For example, using a persistent storage framework from a another platform on the platform where you installed the cluster. In OpenShift Container Platform 4.17, you can install a cluster that uses user-provisioned infrastructure on the following platforms: AWS Azure Azure Stack Hub Bare metal GCP IBM Power(R) IBM Z(R) or IBM(R) LinuxONE RHOSP The latest OpenShift Container Platform release supports both the latest RHOSP long-life release and intermediate release. For complete RHOSP release compatibility, see the OpenShift Container Platform on RHOSP support matrix . VMware Cloud on AWS VMware vSphere Depending on the supported cases for the platform, you can perform installations on user-provisioned infrastructure, so that you can run machines with full internet access, place your cluster behind a proxy, or perform a disconnected installation. In a disconnected installation, you can download the images that are required to install a cluster, place them in a mirror registry, and use that data to install your cluster. While you require internet access to pull images for platform containers, with a disconnected installation on vSphere or bare metal infrastructure, your cluster machines do not require direct internet access. The OpenShift Container Platform 4.x Tested Integrations page contains details about integration testing for different platforms. 3.1.4. 
Installation process Except for the Assisted Installer, when you install an OpenShift Container Platform cluster, you must download the installation program from the appropriate Cluster Type page on the OpenShift Cluster Manager Hybrid Cloud Console. This console manages: REST API for accounts. Registry tokens, which are the pull secrets that you use to obtain the required components. Cluster registration, which associates the cluster identity to your Red Hat account to facilitate the gathering of usage metrics. In OpenShift Container Platform 4.17, the installation program is a Go binary file that performs a series of file transformations on a set of assets. The way you interact with the installation program differs depending on your installation type. Consider the following installation use cases: To deploy a cluster with the Assisted Installer, you must configure the cluster settings by using the Assisted Installer . There is no installation program to download and configure. After you finish setting the cluster configuration, you download a discovery ISO and then boot cluster machines with that image. You can install clusters with the Assisted Installer on Nutanix, vSphere, and bare metal with full integration, and other platforms without integration. If you install on bare metal, you must provide all of the cluster infrastructure and resources, including the networking, load balancing, storage, and individual cluster machines. To deploy clusters with the Agent-based Installer, you can download the Agent-based Installer first. You can then configure the cluster and generate a discovery image. You boot cluster machines with the discovery image, which installs an agent that communicates with the installation program and handles the provisioning for you instead of you interacting with the installation program or setting up a provisioner machine yourself. You must provide all of the cluster infrastructure and resources, including the networking, load balancing, storage, and individual cluster machines. This approach is ideal for disconnected environments. For clusters with installer-provisioned infrastructure, you delegate the infrastructure bootstrapping and provisioning to the installation program instead of doing it yourself. The installation program creates all of the networking, machines, and operating systems that are required to support the cluster, except if you install on bare metal. If you install on bare metal, you must provide all of the cluster infrastructure and resources, including the bootstrap machine, networking, load balancing, storage, and individual cluster machines. If you provision and manage the infrastructure for your cluster, you must provide all of the cluster infrastructure and resources, including the bootstrap machine, networking, load balancing, storage, and individual cluster machines. For the installation program, the program uses three sets of files during installation: an installation configuration file that is named install-config.yaml , Kubernetes manifests, and Ignition config files for your machine types. Important You can modify Kubernetes and the Ignition config files that control the underlying RHCOS operating system during installation. However, no validation is available to confirm the suitability of any modifications that you make to these objects. If you modify these objects, you might render your cluster non-functional. 
Because of this risk, modifying Kubernetes and Ignition config files is not supported unless you are following documented procedures or are instructed to do so by Red Hat support. The installation configuration file is transformed into Kubernetes manifests, and then the manifests are wrapped into Ignition config files. The installation program uses these Ignition config files to create the cluster. The installation configuration files are all pruned when you run the installation program, so be sure to back up all the configuration files that you want to use again. Important You cannot modify the parameters that you set during installation, but you can modify many cluster attributes after installation. The installation process with the Assisted Installer Installation with the Assisted Installer involves creating a cluster configuration interactively by using the web-based user interface or the RESTful API. The Assisted Installer user interface prompts you for required values and provides reasonable default values for the remaining parameters, unless you change them in the user interface or with the API. The Assisted Installer generates a discovery image, which you download and use to boot the cluster machines. The image installs RHCOS and an agent, and the agent handles the provisioning for you. You can install OpenShift Container Platform with the Assisted Installer and full integration on Nutanix, vSphere, and bare metal. Additionally, you can install OpenShift Container Platform with the Assisted Installer on other platforms without integration. OpenShift Container Platform manages all aspects of the cluster, including the operating system itself. Each machine boots with a configuration that references resources hosted in the cluster that it joins. This configuration allows the cluster to manage itself as updates are applied. If possible, use the Assisted Installer feature to avoid having to download and configure the Agent-based Installer. The installation process with Agent-based infrastructure Agent-based installation is similar to using the Assisted Installer, except that you must initially download and install the Agent-based Installer . An Agent-based installation is useful when you want the convenience of the Assisted Installer, but you need to install a cluster in a disconnected environment. If possible, use the Agent-based installation feature to avoid having to create a provisioner machine with a bootstrap VM, and then provision and maintain the cluster infrastructure. The installation process with installer-provisioned infrastructure The default installation type uses installer-provisioned infrastructure. By default, the installation program acts as an installation wizard, prompting you for values that it cannot determine on its own and providing reasonable default values for the remaining parameters. You can also customize the installation process to support advanced infrastructure scenarios. The installation program provisions the underlying infrastructure for the cluster. You can install either a standard cluster or a customized cluster. With a standard cluster, you provide minimum details that are required to install the cluster. With a customized cluster, you can specify more details about the platform, such as the number of machines that the control plane uses, the type of virtual machine that the cluster deploys, or the CIDR range for the Kubernetes service network. If possible, use this feature to avoid having to provision and maintain the cluster infrastructure. 
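To make the asset relationship described above concrete, the following is a rough sketch of how an installer-provisioned installation is typically driven from the command line; the directory name is a placeholder, and the intermediate manifest and Ignition config targets are generated explicitly only when you intend to inspect or modify them before creating the cluster:
openshift-install create install-config --dir <installation_directory>
openshift-install create manifests --dir <installation_directory>
openshift-install create ignition-configs --dir <installation_directory>
openshift-install create cluster --dir <installation_directory>
Because the installation program prunes these assets as it consumes them, keeping a copy of install-config.yaml outside the asset directory before running the later stages saves you from recreating it for future installations.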
In all other environments, you use the installation program to generate the assets that you require to provision your cluster infrastructure. With installer-provisioned infrastructure clusters, OpenShift Container Platform manages all aspects of the cluster, including the operating system itself. Each machine boots with a configuration that references resources hosted in the cluster that it joins. This configuration allows the cluster to manage itself as updates are applied. The installation process with user-provisioned infrastructure You can also install OpenShift Container Platform on infrastructure that you provide. You use the installation program to generate the assets that you require to provision the cluster infrastructure, create the cluster infrastructure, and then deploy the cluster to the infrastructure that you provided. If you do not use infrastructure that the installation program provisioned, you must manage and maintain the cluster resources yourself. The following list details some of these self-managed resources: The underlying infrastructure for the control plane and compute machines that make up the cluster Load balancers Cluster networking, including the DNS records and required subnets Storage for the cluster infrastructure and applications If your cluster uses user-provisioned infrastructure, you have the option of adding RHEL compute machines to your cluster. Installation process details When a cluster is provisioned, each machine in the cluster requires information about the cluster. OpenShift Container Platform uses a temporary bootstrap machine during initial configuration to provide the required information to the permanent control plane. The temporary bootstrap machine boots by using an Ignition config file that describes how to create the cluster. The bootstrap machine creates the control plane machines that make up the control plane. The control plane machines then create the compute machines, which are also known as worker machines. The following figure illustrates this process: Figure 3.2. Creating the bootstrap, control plane, and compute machines After the cluster machines initialize, the bootstrap machine is destroyed. All clusters use the bootstrap process to initialize the cluster, but if you provision the infrastructure for your cluster, you must complete many of the steps manually. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. Consider using Ignition config files within 12 hours after they are generated, because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Bootstrapping a cluster involves the following steps: The bootstrap machine boots and starts hosting the remote resources required for the control plane machines to boot. If you provision the infrastructure, this step requires manual intervention. 
The bootstrap machine starts a single-node etcd cluster and a temporary Kubernetes control plane. The control plane machines fetch the remote resources from the bootstrap machine and finish booting. If you provision the infrastructure, this step requires manual intervention. The temporary control plane schedules the production control plane to the production control plane machines. The Cluster Version Operator (CVO) comes online and installs the etcd Operator. The etcd Operator scales up etcd on all control plane nodes. The temporary control plane shuts down and passes control to the production control plane. The bootstrap machine injects OpenShift Container Platform components into the production control plane. The installation program shuts down the bootstrap machine. If you provision the infrastructure, this step requires manual intervention. The control plane sets up the compute nodes. The control plane installs additional services in the form of a set of Operators. The result of this bootstrapping process is a running OpenShift Container Platform cluster. The cluster then downloads and configures remaining components needed for the day-to-day operations, including the creation of compute machines in supported environments. Installation scope The scope of the OpenShift Container Platform installation program is intentionally narrow. It is designed for simplicity and ensured success. You can complete many more configuration tasks after installation completes. Additional resources See Available cluster customizations for details about OpenShift Container Platform configuration resources. 3.2. About the OpenShift Update Service The OpenShift Update Service (OSUS) provides update recommendations to OpenShift Container Platform, including Red Hat Enterprise Linux CoreOS (RHCOS). It provides a graph, or diagram, that contains the vertices of component Operators and the edges that connect them. The edges in the graph show which versions you can safely update to. The vertices are update payloads that specify the intended state of the managed cluster components. The Cluster Version Operator (CVO) in your cluster checks with the OpenShift Update Service to see the valid updates and update paths based on current component versions and information in the graph. When you request an update, the CVO uses the corresponding release image to update your cluster. The release artifacts are hosted in Quay as container images. To allow the OpenShift Update Service to provide only compatible updates, a release verification pipeline drives automation. Each release artifact is verified for compatibility with supported cloud platforms and system architectures, as well as other component packages. After the pipeline confirms the suitability of a release, the OpenShift Update Service notifies you that it is available. Important The OpenShift Update Service displays all recommended updates for your current cluster. If an update path is not recommended by the OpenShift Update Service, it might be because of a known issue related to the update path, such as incompatibility or availability. Two controllers run during continuous update mode. The first controller continuously updates the payload manifests, applies the manifests to the cluster, and outputs the controlled rollout status of the Operators to indicate whether they are available, upgrading, or failed. The second controller polls the OpenShift Update Service to determine if updates are available. Important Only updating to a newer version is supported. 
Reverting or rolling back your cluster to a version is not supported. If your update fails, contact Red Hat support. During the update process, the Machine Config Operator (MCO) applies the new configuration to your cluster machines. The MCO cordons the number of nodes specified by the maxUnavailable field on the machine configuration pool and marks them unavailable. By default, this value is set to 1 . The MCO updates the affected nodes alphabetically by zone, based on the topology.kubernetes.io/zone label. If a zone has more than one node, the oldest nodes are updated first. For nodes that do not use zones, such as in bare metal deployments, the nodes are updated by age, with the oldest nodes updated first. The MCO updates the number of nodes as specified by the maxUnavailable field on the machine configuration pool at a time. The MCO then applies the new configuration and reboots the machine. Warning The default setting for maxUnavailable is 1 for all the machine config pools in OpenShift Container Platform. It is recommended to not change this value and update one control plane node at a time. Do not change this value to 3 for the control plane pool. If you use Red Hat Enterprise Linux (RHEL) machines as workers, the MCO does not update the kubelet because you must update the OpenShift API on the machines first. With the specification for the new version applied to the old kubelet, the RHEL machine cannot return to the Ready state. You cannot complete the update until the machines are available. However, the maximum number of unavailable nodes is set to ensure that normal cluster operations can continue with that number of machines out of service. The OpenShift Update Service is composed of an Operator and one or more application instances. 3.3. Support policy for unmanaged Operators The management state of an Operator determines whether an Operator is actively managing the resources for its related component in the cluster as designed. If an Operator is set to an unmanaged state, it does not respond to changes in configuration nor does it receive updates. While this can be helpful in non-production clusters or during debugging, Operators in an unmanaged state are unsupported and the cluster administrator assumes full control of the individual component configurations and upgrades. An Operator can be set to an unmanaged state using the following methods: Individual Operator configuration Individual Operators have a managementState parameter in their configuration. This can be accessed in different ways, depending on the Operator. For example, the Red Hat OpenShift Logging Operator accomplishes this by modifying a custom resource (CR) that it manages, while the Cluster Samples Operator uses a cluster-wide configuration resource. Changing the managementState parameter to Unmanaged means that the Operator is not actively managing its resources and will take no action related to the related component. Some Operators might not support this management state as it might damage the cluster and require manual recovery. Warning Changing individual Operators to the Unmanaged state renders that particular component and functionality unsupported. Reported issues must be reproduced in Managed state for support to proceed. Cluster Version Operator (CVO) overrides The spec.overrides parameter can be added to the CVO's configuration to allow administrators to provide a list of overrides to the CVO's behavior for a component. 
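As an illustration of the shape of such an override, a ClusterVersion resource might carry an entry like the following; the component named here is only an example, and any entry of this form has the effect described next:
apiVersion: config.openshift.io/v1
kind: ClusterVersion
metadata:
  name: version
spec:
  overrides:
  - kind: Deployment 1
    group: apps
    name: network-operator 2
    namespace: openshift-network-operator
    unmanaged: true 3
1 The resource type of the component that the CVO should stop managing.
2 An example component; substitute the workload you are debugging.
3 Marks the component as unmanaged, which triggers the upgrade block and alert described below.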
Setting the spec.overrides[].unmanaged parameter to true for a component blocks cluster upgrades and alerts the administrator after a CVO override has been set: Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing. Warning Setting a CVO override puts the entire cluster in an unsupported state. Reported issues must be reproduced after removing any overrides for support to proceed. 3.4. Next steps Selecting a cluster installation method and preparing it for users | [
"Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing."
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/architecture/architecture-installation |
Integrating Google Cloud data into cost management | Integrating Google Cloud data into cost management Cost Management Service 1-latest Learn how to add and configure your Google Cloud integration Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/cost_management_service/1-latest/html/integrating_google_cloud_data_into_cost_management/index |
Preface | Preface You can install Red Hat Developer Hub on OpenShift Container Platform by using one of the following installers: The Red Hat Developer Hub Operator Ready for immediate use in OpenShift Container Platform after an administrator installs it with OperatorHub Uses Operator Lifecycle Management (OLM) to manage automated subscription updates on OpenShift Container Platform Requires preinstallation of Operator Lifecycle Management (OLM) to manage automated subscription updates on Kubernetes The Red Hat Developer Hub Helm chart Ready for immediate use in both OpenShift Container Platform and Kubernetes Requires manual installation and management Use the installation method that best meets your needs and preferences. Additional resources For more information about choosing an installation method, see Helm Charts vs. Operators For more information about the Operator method, see Understanding Operators . For more information about the Helm chart method, see Understanding Helm . | null | https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html/installing_red_hat_developer_hub_on_openshift_container_platform/pr01 |
22.17. Configuring the Hardware Clock Update | 22.17. Configuring the Hardware Clock Update To configure the system clock to update the hardware clock, also known as the real-time clock (RTC), once after executing ntpdate , add the following line to /etc/sysconfig/ntpdate : To update the hardware clock from the system clock, issue the following command as root : When the system clock is being synchronized by ntpd , the kernel will in turn update the RTC every 11 minutes automatically. | [
"SYNC_HWCLOCK=yes",
"~]# hwclock --systohc"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s1-configuring_the_hardware_clock_update |
Chapter 2. Deploy OpenShift Data Foundation using Red Hat Ceph storage | Chapter 2. Deploy OpenShift Data Foundation using Red Hat Ceph storage Red Hat OpenShift Data Foundation can make services from an external Red Hat Ceph Storage cluster available for consumption through OpenShift Container Platform clusters. You need to install the OpenShift Data Foundation operator and then create an OpenShift Data Foundation cluster for the external Ceph storage system. 2.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.16 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 2.2. Creating an OpenShift Data Foundation Cluster for external Ceph storage system You need to create a new OpenShift Data Foundation cluster after you install OpenShift Data Foundation operator on OpenShift Container Platform deployed on VMware vSphere or user-provisioned bare metal infrastructures. Prerequisites A valid Red Hat OpenShift Data Foundation Advanced subscription. To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . Ensure the OpenShift Container Platform version is 4.16 or above before deploying OpenShift Data Foundation 4.16. OpenShift Data Foundation operator must be installed. For more information, see Installing OpenShift Data Foundation Operator using the Operator Hub . 
To check the supportability and interoperability of Red Hat Ceph Storage (RHCS) with Red Hat OpenShift Data Foundation in external mode, go to the lab Red Hat OpenShift Data Foundation Supportability and Interoperability Checker . Select Service Type as ODF as Self-Managed Service . Select appropriate Version from the drop down. On the Versions tab, click the Supported RHCS versions in the External Mode tab. Red Hat Ceph Storage must have Ceph Dashboard installed and configured. For more information, see Ceph Dashboard installation and access . It is recommended that the external Red Hat Ceph Storage cluster has the PG Autoscaler enabled. For more information, see The placement group autoscaler section in the Red Hat Ceph Storage documentation. Procedure Click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click OpenShift Data Foundation and then click Create StorageSystem . In the Backing storage page, select the following options: Select Full deployment for the Deployment type option. Select Connect an external storage platform from the available options. Select Red Hat Ceph Storage for Storage platform . Click . In the Connection details page, provide the necessary information: Click on the Download Script link to download the python script for extracting Ceph cluster details. For extracting the Red Hat Ceph Storage (RHCS) cluster details, contact the RHCS administrator to run the downloaded python script on a Red Hat Ceph Storage node with the admin key . Run the following command on the RHCS node to view the list of available arguments: Important Use python instead of python3 if the Red Hat Ceph Storage 4.x cluster is deployed on Red Hat Enterprise Linux 7.x (RHEL 7.x) cluster. You can also run the script from inside a MON container (containerized deployment) or from a MON node (RPM deployment). Note Use the yum install cephadm command and then the cephadm command to deploy your RHCS cluster using containers. You must pull the RHCS container images using the cephadm command, rather than using yum for installing the Ceph packages onto nodes. For more information, see RHCS product documentation . To retrieve the external cluster details from the RHCS cluster, run the following command: For example: where, RBD parameters rbd-data-pool-name A mandatory parameter that is used for providing block storage in OpenShift Data Foundation. rbd-metadata-ec-pool-name (Optional) The name of the erasure coded RBD metadata pool. RGW parameters rgw-endpoint (Optional) This parameter is required only if the object storage is to be provisioned through Ceph Rados Gateway for OpenShift Data Foundation. Provide the endpoint in the following format: <ip_address>:<port> Note A fully-qualified domain name (FQDN) is also supported in the format <FQDN>:<PORT> . rgw-pool-prefix (Optional) The prefix of the RGW pools. If not specified, the default prefix is default . rgw-tls-cert-path (Optional) The file path of the RADOS Gateway endpoint TLS certificate. To provide the TLS certificate and RGW endpoint details to the helper script, ceph-external-cluster-details-exporter.py , run the following command: This creates a resource to create a Ceph Object Store CR such as Kubernetes secret containing the TLS certificate. All the intermediate certificates including private keys need to be stored in the certificate file. 
rgw-skip-tls (Optional) This parameter ignores the TLS certification validation when a self-signed certificate is provided (not recommended). Monitoring parameters monitoring-endpoint (Optional) This parameter accepts comma-separated list of IP addresses of active and standby mgrs reachable from the OpenShift Container Platform cluster. If not provided, the value is automatically populated. monitoring-endpoint-port (Optional) It is the port associated with the ceph-mgr Prometheus exporter specified by --monitoring-endpoint . If not provided, the value is automatically populated. Ceph parameters ceph-conf (Optional) The name of the Ceph configuration file. run-as-user (Optional) This parameter is used for providing name for the Ceph user which is created by the script. If this parameter is not specified, a default user name client.healthchecker is created. The permissions for the new user is set as: caps: [mgr] allow command config caps: [mon] allow r, allow command quorum_status, allow command version caps: [osd] allow rwx pool= RGW_POOL_PREFIX.rgw.meta , allow r pool= .rgw.root , allow rw pool= RGW_POOL_PREFIX.rgw.control , allow rx pool= RGW_POOL_PREFIX.rgw.log , allow x pool= RGW_POOL_PREFIX.rgw.buckets.index CephFS parameters cephfs-metadata-pool-name (Optional) The name of the CephFS metadata pool. cephfs-data-pool-name (Optional) The name of the CephFS data pool. cephfs-filesystem-name (Optional) The name of the CephFS filesystem. Output parameters dry-run (Optional) This parameter helps to print the executed commands without running them. output (Optional) The file where the output is required to be stored. Multicluster parameters cluster-name (Optional) The Ceph cluster name. restricted-auth-permission (Optional) This parameter restricts cephCSIKeyrings auth permissions to specific pools and clusters. Mandatory flags that need to be set with this are rbd-data-pool-name and cluster-name . You can also pass the cephfs-filesystem-name flag if there is CephFS user restriction so that permission is restricted to a particular CephFS filesystem. Note This parameter must be applied only for the new deployments. To restrict csi-users per pool and per cluster, you need to create new csi-users and new secrets for those csi-users . Example with restricted auth permission: Example of JSON output generated using the python script: Save the JSON output to a file with .json extension Note For OpenShift Data Foundation to work seamlessly, ensure that the parameters (RGW endpoint, CephFS details, RBD pool, and so on) to be uploaded using the JSON file remain unchanged on the RHCS external cluster after the storage cluster creation. Run the command when there is a multi-tenant deployment in which the RHCS cluster is already connected to OpenShift Data Foundation deployment with a lower version. Click Browse to select and upload the JSON file. The content of the JSON file is populated and displayed in the text box. Click The button is enabled only after you upload the .json file. In the Review and create page, review if all the details are correct: To modify any configuration settings, click Back to go back to the configuration page. Click Create StorageSystem . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-external-storagecluster-storagesystem Resources . Verify that Status of StorageCluster is Ready and has a green tick. 
To verify that OpenShift Data Foundation, pods and StorageClass are successfully installed, see Verifying your external mode OpenShift Data Foundation installation for external Ceph storage system . 2.3. Verifying your OpenShift Data Foundation installation for external Ceph storage system Use this section to verify that OpenShift Data Foundation is deployed correctly. 2.3.1. Verifying the state of the pods Click Workloads Pods from the left pane of the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see Table 2.1, "Pods corresponding to OpenShift Data Foundation components" Verify that the following pods are in running state: Table 2.1. Pods corresponding to OpenShift Data Foundation components Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any worker node) ocs-metrics-exporter-* (1 pod on any worker node) odf-operator-controller-manager-* (1 pod on any worker node) odf-console-* (1 pod on any worker node) csi-addons-controller-manager-* (1 pod on any worker node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any worker node) Multicloud Object Gateway noobaa-operator-* (1 pod on any worker node) noobaa-core-* (1 pod on any worker node) noobaa-db-pg-* (1 pod on any worker node) noobaa-endpoint-* (1 pod on any worker node) CSI cephfs csi-cephfsplugin-* (1 pod on each worker node) csi-cephfsplugin-provisioner-* (2 pods distributed across worker nodes) Note If an MDS is not deployed in the external cluster, the csi-cephfsplugin pods will not be created. rbd csi-rbdplugin-* (1 pod on each worker node) csi-rbdplugin-provisioner-* (2 pods distributed across worker nodes) 2.3.2. Verifying that the OpenShift Data Foundation cluster is healthy In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Block and File tab, verify that the Storage Cluster has a green tick. In the Details card, verify that the cluster information is displayed. For more information on the health of OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation . 2.3.3. Verifying that the Multicloud Object Gateway is healthy In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the Multicloud Object Gateway (MCG) information is displayed. Note The RADOS Object Gateway is only listed in case RADOS Object Gateway endpoint details are included while deploying OpenShift Data Foundation in external mode. For more information on the health of OpenShift Data Foundation cluster using the object dashboard, see Monitoring OpenShift Data Foundation . 2.3.4. Verifying that the storage classes are created and listed Click Storage Storage Classes from the left pane of the OpenShift Web Console. 
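If you prefer to check from the command line instead of the web console, listing the storage classes shows the same information; this is a simple sketch using standard oc output, with class names matching the defaults used in this guide:
oc get storageclass
The list that the command returns should include the classes described in the next step.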
Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation: ocs-external-storagecluster-ceph-rbd ocs-external-storagecluster-ceph-rgw ocs-external-storagecluster-cephfs openshift-storage.noobaa.io Note If an MDS is not deployed in the external cluster, ocs-external-storagecluster-cephfs storage class will not be created. If RGW is not deployed in the external cluster, the ocs-external-storagecluster-ceph-rgw storage class will not be created. For more information regarding MDS and RGW, see Red Hat Ceph Storage documentation . 2.3.5. Verifying that Ceph cluster is connected Run the following command to verify that the OpenShift Data Foundation cluster is connected to the external Red Hat Ceph Storage cluster. 2.3.6. Verifying that storage cluster is ready Run the following command to verify that the storage cluster is ready and the External option is set to true . 2.3.7. Verifying the creation of Ceph Object Store CRD Run the following command to verify that the Ceph Object Store CRD is created in the external Red Hat Ceph Storage cluster. | [
"oc annotate namespace openshift-storage openshift.io/node-selector=",
"python3 ceph-external-cluster-details-exporter.py --help",
"python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name <rbd block pool name> [optional arguments]",
"python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name ceph-rbd --monitoring-endpoint xxx.xxx.xxx.xxx --monitoring-endpoint-port xxxx --rgw-endpoint xxx.xxx.xxx.xxx:xxxx --run-as-user client.ocs",
"python3 ceph-external-clustergw-endpoint r-details-exporter.py --rbd-data-pool-name <rbd block pool name> --rgw-endpoint <ip_address>:<port> --rgw-tls-cert-path <file path containing cert>",
"python3 /etc/ceph/create-external-cluster-resources.py --cephfs-filesystem-name myfs --rbd-data-pool-name replicapool --cluster-name rookStorage --restricted-auth-permission true",
"[{\"name\": \"rook-ceph-mon-endpoints\", \"kind\": \"ConfigMap\", \"data\": {\"data\": \"xxx.xxx.xxx.xxx:xxxx\", \"maxMonId\": \"0\", \"mapping\": \"{}\"}}, {\"name\": \"rook-ceph-mon\", \"kind\": \"Secret\", \"data\": {\"admin-secret\": \"admin-secret\", \"fsid\": \"<fs-id>\", \"mon-secret\": \"mon-secret\"}}, {\"name\": \"rook-ceph-operator-creds\", \"kind\": \"Secret\", \"data\": {\"userID\": \"<user-id>\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-rbd-node\", \"kind\": \"Secret\", \"data\": {\"userID\": \"csi-rbd-node\", \"userKey\": \"<user-key>\"}}, {\"name\": \"ceph-rbd\", \"kind\": \"StorageClass\", \"data\": {\"pool\": \"<pool>\"}}, {\"name\": \"monitoring-endpoint\", \"kind\": \"CephCluster\", \"data\": {\"MonitoringEndpoint\": \"xxx.xxx.xxx.xxx\", \"MonitoringPort\": \"xxxx\"}}, {\"name\": \"rook-ceph-dashboard-link\", \"kind\": \"Secret\", \"data\": {\"userID\": \"ceph-dashboard-link\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-rbd-provisioner\", \"kind\": \"Secret\", \"data\": {\"userID\": \"csi-rbd-provisioner\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-cephfs-provisioner\", \"kind\": \"Secret\", \"data\": {\"adminID\": \"csi-cephfs-provisioner\", \"adminKey\": \"<admin-key>\"}}, {\"name\": \"rook-csi-cephfs-node\", \"kind\": \"Secret\", \"data\": {\"adminID\": \"csi-cephfs-node\", \"adminKey\": \"<admin-key>\"}}, {\"name\": \"cephfs\", \"kind\": \"StorageClass\", \"data\": {\"fsName\": \"cephfs\", \"pool\": \"cephfs_data\"}}, {\"name\": \"ceph-rgw\", \"kind\": \"StorageClass\", \"data\": {\"endpoint\": \"xxx.xxx.xxx.xxx:xxxx\", \"poolPrefix\": \"default\"}}, {\"name\": \"rgw-admin-ops-user\", \"kind\": \"Secret\", \"data\": {\"accessKey\": \"<access-key>\", \"secretKey\": \"<secret-key>\"}}]",
"python3 ceph-external-cluster-details-exporter.py --upgrade",
"oc get cephcluster -n openshift-storage NAME DATADIRHOSTPATH MONCOUNT AGE PHASE MESSAGE HEALTH EXTERNAL ocs-external-storagecluster-cephcluster 30m Connected Cluster connected successfully HEALTH_OK true",
"oc get storagecluster -n openshift-storage NAME AGE PHASE EXTERNAL CREATED AT VERSION ocs-external-storagecluster 30m Ready true 2021-11-17T09:09:52Z 4.15.0",
"oc get cephobjectstore -n openshift-storage\" NAME PHASE ENDPOINT SECUREENDPOINT AGE object-store1 Ready <http://IP/FQDN:port> 15m"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_openshift_data_foundation_in_external_mode/deploy-openshift-data-foundation-using-red-hat-ceph-storage |
4.5. CND Preference Page | 4.5. CND Preference Page The CND Preference Page allows you to save CND files using the various CND notations available. The notation type determines the size and readability of the output. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_2_modeshape_tools/cnd_preference_page |
Chapter 1. Preparing to install with the Agent-based Installer | Chapter 1. Preparing to install with the Agent-based Installer 1.1. About the Agent-based Installer The Agent-based installation method provides the flexibility to boot your on-premises servers in any way that you choose. It combines the ease of use of the Assisted Installation service with the ability to run offline, including in air-gapped environments. Agent-based installation is a subcommand of the OpenShift Container Platform installer. It generates a bootable ISO image containing all of the information required to deploy an OpenShift Container Platform cluster, with an available release image. The configuration is in the same format as for the installer-provisioned infrastructure and user-provisioned infrastructure installation methods. The Agent-based Installer can also optionally generate or accept Zero Touch Provisioning (ZTP) custom resources. ZTP allows you to provision new edge sites with declarative configurations of bare-metal equipment. Table 1.1. Agent-based Installer supported architectures CPU architecture Connected installation Disconnected installation Comments 64-bit x86 [✓] [✓] 64-bit ARM [✓] [✓] ppc64le [✓] [✓] s390x [✓] [✓] ISO boot is not supported. Instead, use PXE assets. 1.2. Understanding Agent-based Installer As an OpenShift Container Platform user, you can leverage the advantages of the Assisted Installer hosted service in disconnected environments. The Agent-based installation comprises a bootable ISO that contains the Assisted discovery agent and the Assisted Service. Both are required to perform the cluster installation, but the latter runs on only one of the hosts. Note Currently, ISO boot is not supported on IBM Z(R) ( s390x ) architecture. The recommended method is by using PXE assets, which requires specifying additional kernel arguments. The openshift-install agent create image subcommand generates an ephemeral ISO based on the inputs that you provide. You can choose to provide inputs through the following manifests: Preferred: install-config.yaml agent-config.yaml or Optional: ZTP manifests cluster-manifests/cluster-deployment.yaml cluster-manifests/agent-cluster-install.yaml cluster-manifests/pull-secret.yaml cluster-manifests/infraenv.yaml cluster-manifests/cluster-image-set.yaml cluster-manifests/nmstateconfig.yaml mirror/registries.conf mirror/ca-bundle.crt 1.2.1. Agent-based Installer workflow One of the control plane hosts runs the Assisted Service at the start of the boot process and eventually becomes the bootstrap host. This node is called the rendezvous host (node 0). The Assisted Service ensures that all the hosts meet the requirements and triggers an OpenShift Container Platform cluster deployment. All the nodes have the Red Hat Enterprise Linux CoreOS (RHCOS) image written to the disk. The non-bootstrap nodes reboot and initiate a cluster deployment. Once the nodes are rebooted, the rendezvous host reboots and joins the cluster. The bootstrapping is complete and the cluster is deployed. Figure 1.1. Node installation workflow You can install a disconnected OpenShift Container Platform cluster through the openshift-install agent create image subcommand for the following topologies: A single-node OpenShift Container Platform cluster (SNO) : A node that is both a master and worker. A three-node OpenShift Container Platform cluster : A compact cluster that has three master nodes that are also worker nodes. 
Highly available OpenShift Container Platform cluster (HA) : Three master nodes with any number of worker nodes. 1.2.2. Recommended resources for topologies Recommended cluster resources for the following topologies: Table 1.2. Recommended cluster resources Topology Number of control plane nodes Number of compute nodes vCPU Memory Storage Single-node cluster 1 0 8 vCPUs 16 GB of RAM 120 GB Compact cluster 3 0 or 1 8 vCPUs 16 GB of RAM 120 GB HA cluster 3 2 and above 8 vCPUs 16 GB of RAM 120 GB In the install-config.yaml , specify the platform on which to perform the installation. The following platforms are supported: baremetal vsphere external none Important For platform none : The none option requires the provision of DNS name resolution and load balancing infrastructure in your cluster. See Requirements for a cluster using the platform "none" option in the "Additional resources" section for more information. Review the information in the guidelines for deploying OpenShift Container Platform on non-tested platforms before you attempt to install an OpenShift Container Platform cluster in virtualized or cloud environments. Additional resources Requirements for a cluster using the platform "none" option Increase the network MTU Adding worker nodes to single-node OpenShift clusters 1.3. About FIPS compliance For many OpenShift Container Platform customers, regulatory readiness, or compliance, on some level is required before any systems can be put into production. That regulatory readiness can be imposed by national standards, industry standards or the organization's corporate governance framework. Federal Information Processing Standards (FIPS) compliance is one of the most critical components required in highly secure environments to ensure that only supported cryptographic technologies are allowed on nodes. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 1.4. Configuring FIPS through the Agent-based Installer During a cluster deployment, the Federal Information Processing Standards (FIPS) change is applied when the Red Hat Enterprise Linux CoreOS (RHCOS) machines are deployed in your cluster. For Red Hat Enterprise Linux (RHEL) machines, you must enable FIPS mode when you install the operating system on the machines that you plan to use as worker machines. 
You can enable FIPS mode through the preferred method of install-config.yaml and agent-config.yaml : You must set value of the fips field to True in the install-config.yaml file: Sample install-config.yaml.file apiVersion: v1 baseDomain: test.example.com metadata: name: sno-cluster fips: True Optional: If you are using the GitOps ZTP manifests, you must set the value of fips as True in the Agent-install.openshift.io/install-config-overrides field in the agent-cluster-install.yaml file: Sample agent-cluster-install.yaml file apiVersion: extensions.hive.openshift.io/v1beta1 kind: AgentClusterInstall metadata: annotations: agent-install.openshift.io/install-config-overrides: '{"fips": True}' name: sno-cluster namespace: sno-cluster-test Additional resources OpenShift Security Guide Book Support for FIPS cryptography 1.5. Host configuration You can make additional configurations for each host on the cluster in the agent-config.yaml file, such as network configurations and root device hints. Important For each host you configure, you must provide the MAC address of an interface on the host to specify which host you are configuring. 1.5.1. Host roles Each host in the cluster is assigned a role of either master or worker . You can define the role for each host in the agent-config.yaml file by using the role parameter. If you do not assign a role to the hosts, the roles will be assigned at random during installation. It is recommended to explicitly define roles for your hosts. The rendezvousIP must be assigned to a host with the master role. This can be done manually or by allowing the Agent-based Installer to assign the role. Important You do not need to explicitly define the master role for the rendezvous host, however you cannot create configurations that conflict with this assignment. For example, if you have 4 hosts with 3 of the hosts explicitly defined to have the master role, the last host that is automatically assigned the worker role during installation cannot be configured as the rendezvous host. Sample agent-config.yaml file apiVersion: v1beta1 kind: AgentConfig metadata: name: example-cluster rendezvousIP: 192.168.111.80 hosts: - hostname: master-1 role: master interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a5 - hostname: master-2 role: master interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a6 - hostname: master-3 role: master interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a7 - hostname: worker-1 role: worker interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a8 1.5.2. About root device hints The rootDeviceHints parameter enables the installer to provision the Red Hat Enterprise Linux CoreOS (RHCOS) image to a particular device. The installer examines the devices in the order it discovers them, and compares the discovered values with the hint values. The installer uses the first discovered device that matches the hint value. The configuration can combine multiple hints, but a device must match all hints for the installer to select it. Table 1.3. Subfields Subfield Description deviceName A string containing a Linux device name such as /dev/vda or /dev/disk/by-path/ . It is recommended to use the /dev/disk/by-path/<device_path> link to the storage location. The hint must match the actual value exactly. hctl A string containing a SCSI bus address like 0:0:0:0 . The hint must match the actual value exactly. model A string containing a vendor-specific device identifier. The hint can be a substring of the actual value. 
vendor A string containing the name of the vendor or manufacturer of the device. The hint can be a sub-string of the actual value. serialNumber A string containing the device serial number. The hint must match the actual value exactly. minSizeGigabytes An integer representing the minimum size of the device in gigabytes. wwn A string containing the unique storage identifier. The hint must match the actual value exactly. If you use the udevadm command to retrieve the wwn value, and the command outputs a value for ID_WWN_WITH_EXTENSION , then you must use this value to specify the wwn subfield. rotational A boolean indicating whether the device should be a rotating disk (true) or not (false). Example usage - name: master-0 role: master rootDeviceHints: deviceName: "/dev/sda" 1.6. About networking The rendezvous IP must be known at the time of generating the agent ISO, so that during the initial boot all the hosts can check in to the assisted service. If the IP addresses are assigned using a Dynamic Host Configuration Protocol (DHCP) server, then the rendezvousIP field must be set to an IP address of one of the hosts that will become part of the deployed control plane. In an environment without a DHCP server, you can define IP addresses statically. In addition to static IP addresses, you can apply any network configuration that is in NMState format. This includes VLANs and NIC bonds. 1.6.1. DHCP Preferred method: install-config.yaml and agent-config.yaml You must specify the value for the rendezvousIP field. The networkConfig fields can be left blank: Sample agent-config.yaml.file apiVersion: v1alpha1 kind: AgentConfig metadata: name: sno-cluster rendezvousIP: 192.168.111.80 1 1 The IP address for the rendezvous host. 1.6.2. Static networking Preferred method: install-config.yaml and agent-config.yaml Sample agent-config.yaml.file cat > agent-config.yaml << EOF apiVersion: v1alpha1 kind: AgentConfig metadata: name: sno-cluster rendezvousIP: 192.168.111.80 1 hosts: - hostname: master-0 interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a5 2 networkConfig: interfaces: - name: eno1 type: ethernet state: up mac-address: 00:ef:44:21:e6:a5 ipv4: enabled: true address: - ip: 192.168.111.80 3 prefix-length: 23 4 dhcp: false dns-resolver: config: server: - 192.168.111.1 5 routes: config: - destination: 0.0.0.0/0 -hop-address: 192.168.111.1 6 -hop-interface: eno1 table-id: 254 EOF 1 If a value is not specified for the rendezvousIP field, one address will be chosen from the static IP addresses specified in the networkConfig fields. 2 The MAC address of an interface on the host, used to determine which host to apply the configuration to. 3 The static IP address of the target bare metal host. 4 The static IP address's subnet prefix for the target bare metal host. 5 The DNS server for the target bare metal host. 6 hop address for the node traffic. This must be in the same subnet as the IP address set for the specified interface. Optional method: GitOps ZTP manifests The optional method of the GitOps ZTP custom resources comprises 6 custom resources; you can configure static IPs in the nmstateconfig.yaml file. 
apiVersion: agent-install.openshift.io/v1beta1 kind: NMStateConfig metadata: name: master-0 namespace: openshift-machine-api labels: cluster0-nmstate-label-name: cluster0-nmstate-label-value spec: config: interfaces: - name: eth0 type: ethernet state: up mac-address: 52:54:01:aa:aa:a1 ipv4: enabled: true address: - ip: 192.168.122.2 1 prefix-length: 23 2 dhcp: false dns-resolver: config: server: - 192.168.122.1 3 routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.122.1 4 next-hop-interface: eth0 table-id: 254 interfaces: - name: eth0 macAddress: 52:54:01:aa:aa:a1 5 1 The static IP address of the target bare metal host. 2 The static IP address's subnet prefix for the target bare metal host. 3 The DNS server for the target bare metal host. 4 Next hop address for the node traffic. This must be in the same subnet as the IP address set for the specified interface. 5 The MAC address of an interface on the host, used to determine which host to apply the configuration to. The rendezvous IP is chosen from the static IP addresses specified in the config fields. 1.7. Requirements for a cluster using the platform "none" option This section describes the requirements for an Agent-based OpenShift Container Platform installation that is configured to use the platform none option. Important Review the information in the guidelines for deploying OpenShift Container Platform on non-tested platforms before you attempt to install an OpenShift Container Platform cluster in virtualized or cloud environments.
If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Control plane machines <master><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <worker><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. 1.7.1.1. Example DNS configuration for platform "none" clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform using the platform none option. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a platform "none" cluster The following example is a BIND zone file that shows sample A records for name resolution in a cluster using the platform none option. Example 1.1. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; master0.ocp4.example.com. IN A 192.168.1.97 4 master1.ocp4.example.com. IN A 192.168.1.98 5 master2.ocp4.example.com. IN A 192.168.1.99 6 ; worker0.ocp4.example.com. IN A 192.168.1.11 7 worker1.ocp4.example.com. IN A 192.168.1.7 8 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. 
In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 5 6 Provides name resolution for the control plane machines. 7 8 Provides name resolution for the compute machines. Example DNS PTR record configuration for a platform "none" cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a cluster using the platform none option. Example 1.2. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 97.1.168.192.in-addr.arpa. IN PTR master0.ocp4.example.com. 3 98.1.168.192.in-addr.arpa. IN PTR master1.ocp4.example.com. 4 99.1.168.192.in-addr.arpa. IN PTR master2.ocp4.example.com. 5 ; 11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com. 6 7.1.168.192.in-addr.arpa. IN PTR worker1.ocp4.example.com. 7 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 4 5 Provides reverse DNS resolution for the control plane machines. 6 7 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. 1.7.2. Platform "none" Load balancing requirements Before you install OpenShift Container Platform, you must provision the API and application Ingress load balancing infrastructure. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note These requirements do not apply to single-node OpenShift clusters using the platform none option. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the API routes. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configure the following ports on both the front and back of the load balancers: Table 1.5. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. 
Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the ingress routes. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 1.6. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 1.7.2.1. Example load balancer configuration for platform "none" clusters This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for clusters using the platform none option. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 1.3. 
Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp server master0 master0.ocp4.example.com:6443 check inter 1s server master1 master1.ocp4.example.com:6443 check inter 1s server master2 master2.ocp4.example.com:6443 check inter 1s listen machine-config-server-22623 2 bind *:22623 mode tcp server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 3 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 4 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 Port 22623 handles the machine config server traffic and points to the control plane machines. 3 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 4 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 1.8. Example: Bonds and VLAN interface node network configuration The following agent-config.yaml file is an example of a manifest for bond and VLAN interfaces. apiVersion: v1alpha1 kind: AgentConfig rendezvousIP: 10.10.10.14 hosts: - hostname: master0 role: master interfaces: - name: enp0s4 macAddress: 00:21:50:90:c0:10 - name: enp0s5 macAddress: 00:21:50:90:c0:20 networkConfig: interfaces: - name: bond0.300 1 type: vlan 2 state: up vlan: base-iface: bond0 id: 300 ipv4: enabled: true address: - ip: 10.10.10.14 prefix-length: 24 dhcp: false - name: bond0 3 type: bond 4 state: up mac-address: 00:21:50:90:c0:10 5 ipv4: enabled: false ipv6: enabled: false link-aggregation: mode: active-backup 6 options: miimon: "150" 7 port: - enp0s4 - enp0s5 dns-resolver: 8 config: server: - 10.10.10.11 - 10.10.10.12 routes: config: - destination: 0.0.0.0/0 next-hop-address: 10.10.10.10 9 next-hop-interface: bond0.300 10 table-id: 254 1 3 Name of the interface. 2 The type of interface. This example creates a VLAN. 4 The type of interface. This example creates a bond. 5 The MAC address of the interface. 6 The mode attribute specifies the bonding mode. 7 Specifies the MII link monitoring frequency in milliseconds. This example inspects the bond link every 150 milliseconds.
8 Optional: Specifies the search and server settings for the DNS server. 9 Next hop address for the node traffic. This must be in the same subnet as the IP address set for the specified interface. 10 Next hop interface for the node traffic. 1.9. Example: Bonds and SR-IOV dual-nic node network configuration Important Support for Day 1 operations associated with enabling NIC partitioning for SR-IOV devices is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The following agent-config.yaml file is an example of a manifest for dual port NIC with a bond and SR-IOV interfaces: apiVersion: v1alpha1 kind: AgentConfig rendezvousIP: 10.10.10.14 hosts: - hostname: worker-1 interfaces: - name: eno1 macAddress: 0c:42:a1:55:f3:06 - name: eno2 macAddress: 0c:42:a1:55:f3:07 networkConfig: 1 interfaces: 2 - name: eno1 3 type: ethernet 4 state: up mac-address: 0c:42:a1:55:f3:06 ipv4: enabled: true dhcp: false 5 ethernet: sr-iov: total-vfs: 2 6 ipv6: enabled: false - name: sriov:eno1:0 type: ethernet state: up 7 ipv4: enabled: false 8 ipv6: enabled: false dhcp: false - name: sriov:eno1:1 type: ethernet state: down - name: eno2 type: ethernet state: up mac-address: 0c:42:a1:55:f3:07 ipv4: enabled: true ethernet: sr-iov: total-vfs: 2 ipv6: enabled: false - name: sriov:eno2:0 type: ethernet state: up ipv4: enabled: false ipv6: enabled: false - name: sriov:eno2:1 type: ethernet state: down - name: bond0 type: bond state: up min-tx-rate: 100 9 max-tx-rate: 200 10 link-aggregation: mode: active-backup 11 options: primary: sriov:eno1:0 12 port: - sriov:eno1:0 - sriov:eno2:0 ipv4: address: - ip: 10.19.16.57 13 prefix-length: 23 dhcp: false enabled: true ipv6: enabled: false dns-resolver: config: server: - 10.11.5.160 - 10.2.70.215 routes: config: - destination: 0.0.0.0/0 next-hop-address: 10.19.17.254 next-hop-interface: bond0 14 table-id: 254 1 The networkConfig field contains information about the network configuration of the host, with subfields including interfaces , dns-resolver , and routes . 2 The interfaces field is an array of network interfaces defined for the host. 3 The name of the interface. 4 The type of interface. This example creates an ethernet interface. 5 Set this to false to disable DHCP for the physical function (PF) if it is not strictly required. 6 Set this to the number of SR-IOV virtual functions (VFs) to instantiate. 7 Set this to up . 8 Set this to false to disable IPv4 addressing for the VF attached to the bond. 9 Sets a minimum transmission rate, in Mbps, for the VF. This sample value sets a rate of 100 Mbps. This value must be less than or equal to the maximum transmission rate. Intel NICs do not support the min-tx-rate parameter. For more information, see BZ#1772847 . 10 Sets a maximum transmission rate, in Mbps, for the VF. This sample value sets a rate of 200 Mbps. 11 Sets the desired bond mode. 12 Sets the preferred port of the bonding interface. The primary device is the first of the bonding interfaces to be used and is not abandoned unless it fails.
This setting is particularly useful when one NIC in the bonding interface is faster and, therefore, able to handle a bigger load. This setting is only valid when the bonding interface is in active-backup mode (mode 1) and balance-tlb (mode 5). 13 Sets a static IP address for the bond interface. This is the node IP address. 14 Sets bond0 as the gateway for the default route. Additional resources Configuring network bonding 1.10. Sample install-config.yaml file for bare metal You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 - name: worker replicas: 0 3 architecture: amd64 controlPlane: 4 name: master replicas: 1 5 architecture: amd64 metadata: name: sno-cluster 6 networking: clusterNetwork: - cidr: 10.128.0.0/14 7 hostPrefix: 23 8 networkType: OVNKubernetes 9 serviceNetwork: 10 - 172.30.0.0/16 platform: none: {} 11 fips: false 12 pullSecret: '{"auths": ...}' 13 sshKey: 'ssh-ed25519 AAAA...' 14 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 This parameter controls the number of compute machines that the Agent-based installation waits to discover before triggering the installation process. It is the number of compute machines that must be booted with the generated ISO. Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 5 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 6 The cluster name that you specified in your DNS records. 7 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note Class E CIDR range is reserved for a future use. To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 8 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 9 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 10 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 11 You must set the platform to none for a single-node cluster. 
You can set the platform to vsphere , baremetal , or none for multi-node clusters. Note If you set the platform to vsphere or baremetal , you can configure IP address endpoints for cluster nodes in three ways: IPv4 IPv6 IPv4 and IPv6 in parallel (dual-stack) Example of dual-stack networking networking: clusterNetwork: - cidr: 172.21.0.0/16 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 machineNetwork: - cidr: 192.168.11.0/16 - cidr: 2001:DB8::/32 serviceNetwork: - 172.22.0.0/16 - fd03::/112 networkType: OVNKubernetes platform: baremetal: apiVIPs: - 192.168.11.3 - 2001:DB8::4 ingressVIPs: - 192.168.11.4 - 2001:DB8::5 12 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 13 This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 14 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 1.11. Validation checks before agent ISO creation The Agent-based Installer performs validation checks on user-defined YAML files before the ISO is created. Once the validations are successful, the agent ISO is created. install-config.yaml baremetal , vsphere and none platforms are supported. The networkType parameter must be OVNKubernetes in the case of the none platform. apiVIPs and ingressVIPs parameters must be set for bare metal and vSphere platforms. Some host-specific fields in the bare metal platform configuration that have equivalents in the agent-config.yaml file are ignored. A warning message is logged if these fields are set. agent-config.yaml Each interface must have a defined MAC address. Additionally, all interfaces must have a different MAC address. At least one interface must be defined for each host. World Wide Name (WWN) vendor extensions are not supported in root device hints. The role parameter in the host object must have a value of either master or worker . 1.11.1. ZTP manifests agent-cluster-install.yaml For IPv6, the only supported value for the networkType parameter is OVNKubernetes . The OpenshiftSDN value can be used only for IPv4. cluster-image-set.yaml The ReleaseImage parameter must match the release defined in the installer. 1.12. Next steps Installing a cluster Installing a cluster with customizations | [
"apiVersion: v1 baseDomain: test.example.com metadata: name: sno-cluster fips: True",
"apiVersion: extensions.hive.openshift.io/v1beta1 kind: AgentClusterInstall metadata: annotations: agent-install.openshift.io/install-config-overrides: '{\"fips\": True}' name: sno-cluster namespace: sno-cluster-test",
"apiVersion: v1beta1 kind: AgentConfig metadata: name: example-cluster rendezvousIP: 192.168.111.80 hosts: - hostname: master-1 role: master interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a5 - hostname: master-2 role: master interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a6 - hostname: master-3 role: master interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a7 - hostname: worker-1 role: worker interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a8",
"- name: master-0 role: master rootDeviceHints: deviceName: \"/dev/sda\"",
"apiVersion: v1alpha1 kind: AgentConfig metadata: name: sno-cluster rendezvousIP: 192.168.111.80 1",
"cat > agent-config.yaml << EOF apiVersion: v1alpha1 kind: AgentConfig metadata: name: sno-cluster rendezvousIP: 192.168.111.80 1 hosts: - hostname: master-0 interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a5 2 networkConfig: interfaces: - name: eno1 type: ethernet state: up mac-address: 00:ef:44:21:e6:a5 ipv4: enabled: true address: - ip: 192.168.111.80 3 prefix-length: 23 4 dhcp: false dns-resolver: config: server: - 192.168.111.1 5 routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.111.1 6 next-hop-interface: eno1 table-id: 254 EOF",
"apiVersion: agent-install.openshift.io/v1beta1 kind: NMStateConfig metadata: name: master-0 namespace: openshift-machine-api labels: cluster0-nmstate-label-name: cluster0-nmstate-label-value spec: config: interfaces: - name: eth0 type: ethernet state: up mac-address: 52:54:01:aa:aa:a1 ipv4: enabled: true address: - ip: 192.168.122.2 1 prefix-length: 23 2 dhcp: false dns-resolver: config: server: - 192.168.122.1 3 routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.122.1 4 next-hop-interface: eth0 table-id: 254 interfaces: - name: eth0 macAddress: 52:54:01:aa:aa:a1 5",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; master0.ocp4.example.com. IN A 192.168.1.97 4 master1.ocp4.example.com. IN A 192.168.1.98 5 master2.ocp4.example.com. IN A 192.168.1.99 6 ; worker0.ocp4.example.com. IN A 192.168.1.11 7 worker1.ocp4.example.com. IN A 192.168.1.7 8 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 97.1.168.192.in-addr.arpa. IN PTR master0.ocp4.example.com. 3 98.1.168.192.in-addr.arpa. IN PTR master1.ocp4.example.com. 4 99.1.168.192.in-addr.arpa. IN PTR master2.ocp4.example.com. 5 ; 11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com. 6 7.1.168.192.in-addr.arpa. IN PTR worker1.ocp4.example.com. 7 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp server master0 master0.ocp4.example.com:6443 check inter 1s server master1 master1.ocp4.example.com:6443 check inter 1s server master2 master2.ocp4.example.com:6443 check inter 1s listen machine-config-server-22623 2 bind *:22623 mode tcp server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 3 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 4 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s",
"apiVersion: v1alpha1 kind: AgentConfig rendezvousIP: 10.10.10.14 hosts: - hostname: master0 role: master interfaces: - name: enp0s4 macAddress: 00:21:50:90:c0:10 - name: enp0s5 macAddress: 00:21:50:90:c0:20 networkConfig: interfaces: - name: bond0.300 1 type: vlan 2 state: up vlan: base-iface: bond0 id: 300 ipv4: enabled: true address: - ip: 10.10.10.14 prefix-length: 24 dhcp: false - name: bond0 3 type: bond 4 state: up mac-address: 00:21:50:90:c0:10 5 ipv4: enabled: false ipv6: enabled: false link-aggregation: mode: active-backup 6 options: miimon: \"150\" 7 port: - enp0s4 - enp0s5 dns-resolver: 8 config: server: - 10.10.10.11 - 10.10.10.12 routes: config: - destination: 0.0.0.0/0 next-hop-address: 10.10.10.10 9 next-hop-interface: bond0.300 10 table-id: 254",
"apiVersion: v1alpha1 kind: AgentConfig rendezvousIP: 10.10.10.14 hosts: - hostname: worker-1 interfaces: - name: eno1 macAddress: 0c:42:a1:55:f3:06 - name: eno2 macAddress: 0c:42:a1:55:f3:07 networkConfig: 1 interfaces: 2 - name: eno1 3 type: ethernet 4 state: up mac-address: 0c:42:a1:55:f3:06 ipv4: enabled: true dhcp: false 5 ethernet: sr-iov: total-vfs: 2 6 ipv6: enabled: false - name: sriov:eno1:0 type: ethernet state: up 7 ipv4: enabled: false 8 ipv6: enabled: false dhcp: false - name: sriov:eno1:1 type: ethernet state: down - name: eno2 type: ethernet state: up mac-address: 0c:42:a1:55:f3:07 ipv4: enabled: true ethernet: sr-iov: total-vfs: 2 ipv6: enabled: false - name: sriov:eno2:0 type: ethernet state: up ipv4: enabled: false ipv6: enabled: false - name: sriov:eno2:1 type: ethernet state: down - name: bond0 type: bond state: up min-tx-rate: 100 9 max-tx-rate: 200 10 link-aggregation: mode: active-backup 11 options: primary: sriov:eno1:0 12 port: - sriov:eno1:0 - sriov:eno2:0 ipv4: address: - ip: 10.19.16.57 13 prefix-length: 23 dhcp: false enabled: true ipv6: enabled: false dns-resolver: config: server: - 10.11.5.160 - 10.2.70.215 routes: config: - destination: 0.0.0.0/0 next-hop-address: 10.19.17.254 next-hop-interface: bond0 14 table-id: 254",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - name: worker replicas: 0 3 architecture: amd64 controlPlane: 4 name: master replicas: 1 5 architecture: amd64 metadata: name: sno-cluster 6 networking: clusterNetwork: - cidr: 10.128.0.0/14 7 hostPrefix: 23 8 networkType: OVNKubernetes 9 serviceNetwork: 10 - 172.30.0.0/16 platform: none: {} 11 fips: false 12 pullSecret: '{\"auths\": ...}' 13 sshKey: 'ssh-ed25519 AAAA...' 14",
"networking: clusterNetwork: - cidr: 172.21.0.0/16 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 machineNetwork: - cidr: 192.168.11.0/16 - cidr: 2001:DB8::/32 serviceNetwork: - 172.22.0.0/16 - fd03::/112 networkType: OVNKubernetes platform: baremetal: apiVIPs: - 192.168.11.3 - 2001:DB8::4 ingressVIPs: - 192.168.11.4 - 2001:DB8::5"
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_an_on-premise_cluster_with_the_agent-based_installer/preparing-to-install-with-agent-based-installer |
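The DNS section above recommends the dig command for verifying name and reverse name resolution but does not show an invocation. The following is a minimal spot-check sketch based on the sample ocp4 / example.com zone files; the nameserver address 192.168.1.5 and the host addresses are taken from those samples and should be replaced with your own values.

# Forward lookups: API, internal API, a wildcard application route, and one control plane node.
dig +noall +answer @192.168.1.5 api.ocp4.example.com
dig +noall +answer @192.168.1.5 api-int.ocp4.example.com
dig +noall +answer @192.168.1.5 test.apps.ocp4.example.com
dig +noall +answer @192.168.1.5 master0.ocp4.example.com

# Reverse (PTR) lookups: the API address and one control plane node.
dig +noall +answer @192.168.1.5 -x 192.168.1.5
dig +noall +answer @192.168.1.5 -x 192.168.1.97

Each forward query should return the A record from the sample zone, and each reverse query should return the matching PTR record; no PTR record is expected for the application wildcard.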
Running applications | Running applications Red Hat build of MicroShift 4.18 Running applications in MicroShift Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/running_applications/index |
20.44. Disk I/O Throttling | 20.44. Disk I/O Throttling The virsh blkdeviotune command sets disk I/O throttling for a specified guest virtual machine. This can prevent a guest virtual machine from overutilizing shared resources and thus degrading the performance of other guest virtual machines. The following format should be used: The only required parameter is the domain name of the guest virtual machine. To list the block devices of a domain, run the virsh domblklist command. The --config , --live , and --current arguments function the same as in Section 20.43, "Setting Schedule Parameters" . If no limit is specified, the command queries the current I/O limit settings. Otherwise, alter the limits with the following flags: --total-bytes-sec - specifies total throughput limit in bytes per second. --read-bytes-sec - specifies read throughput limit in bytes per second. --write-bytes-sec - specifies write throughput limit in bytes per second. --total-iops-sec - specifies total I/O operations limit per second. --read-iops-sec - specifies read I/O operations limit per second. --write-iops-sec - specifies write I/O operations limit per second. For more information, see the blkdeviotune section of the virsh man page. For an example domain XML, see Figure 23.27, "Devices - Hard drives, floppy disks, CD-ROMs Example" . | [
"virsh blkdeviotune domain < device > [[--config] [--live] | [--current]] [[total-bytes-sec] | [read-bytes-sec] [write-bytes-sec]] [[total-iops-sec] [read-iops-sec] [write-iops-sec]]"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-Managing_guest_virtual_machines_with_virsh-Disk_IO_throttling |
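The flag list above stops short of a complete invocation. A minimal sketch follows; the domain name guest1 and the target device vda are placeholders, not values from the original text.

# List the block devices of the guest to find the target name (for example, vda).
virsh domblklist guest1

# Limit the vda disk of the running guest to 10 MB/s total throughput and 500 total I/O operations per second.
virsh blkdeviotune guest1 vda --live --total-bytes-sec 10485760 --total-iops-sec 500

# With no limit flags, the same command reports the current settings.
virsh blkdeviotune guest1 vda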
13.2.3. Preparing an Initial RAM Disk Update | 13.2.3. Preparing an Initial RAM Disk Update Important This is an advanced procedure that you should consider only if you cannot perform a driver update with any other method. The Red Hat Enterprise Linux installation program can load updates for itself early in the installation process from a RAM disk - an area of your computer's memory that temporarily behaves as if it were a disk. You can use this same capability to load driver updates. To perform a driver update during installation, your computer must be able to boot from a yaboot installation server, and you must have one available on your network. Refer to Chapter 30, Setting Up an Installation Server for instructions on using a yaboot installation server. To make the driver update available on your installation server: Place the driver update image file on your installation server. Usually, you would do this by downloading it to the server from a location on the Internet specified by Red Hat or your hardware vendor. Names of driver update image files end in .iso . Copy the driver update image file into the /tmp/initrd_update directory. Rename the driver update image file to dd.img . At the command line, change into the /tmp/initrd_update directory, type the following command, and press Enter : Copy the file /tmp/initrd_update.img into the directory that holds the target that you want to use for installation. This directory is placed under the /var/lib/tftpboot/yaboot/ directory. For example, /var/lib/tftpboot/yaboot/rhel6/ might hold the yaboot installation target for Red Hat Enterprise Linux 6. Edit the /var/lib/tftpboot/yaboot/yaboot.conf file to include an entry that includes the initial RAM disk update that you just created, in the following format: Where target is the target that you want to use for installation. Refer to Section 13.3.4, "Select an Installation Server Target That Includes a Driver Update" to learn how to use an initial RAM disk update during installation. Example 13.1. Preparing an initial RAM disk update from a driver update image file In this example, driver_update.iso is a driver update image file that you downloaded from the Internet to a directory on your installation server. The target on your installation server that you want to boot from is located in /var/lib/tftpboot/yaboot/rhel6/ . At the command line, change to the directory that holds the file and enter the following commands: Edit the /var/lib/tftpboot/yaboot/yaboot.conf file and include the following entry: | [
"find . | cpio --quiet -o -H newc | gzip -9 >/tmp/initrd_update.img",
"image= target /vmlinuz label= target -dd initrd= target /initrd.img, target /dd.img",
"cp driver_update.iso /tmp/initrd_update/dd.img cd /tmp/initrd_update find . | cpio --quiet -c -o -H newc | gzip -9 >/tmp/initrd_update.img cp /tmp/initrd_update.img /tftpboot/yaboot/rhel6/dd.img",
"image=rhel6/vmlinuz label=rhel6-dd initrd=rhel6/initrd.img,rhel6/dd.img"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/sect-preparing_an_initial_ram_disk_update-ppc |
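As an optional sanity check before copying the update image to the yaboot target directory, you can list its contents to confirm that dd.img was captured; this step is not part of the original procedure.

# List the files packed into the initial RAM disk update (the output should include dd.img).
zcat /tmp/initrd_update.img | cpio -it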
Chapter 1. Overview of images | Chapter 1. Overview of images 1.1. Understanding containers, images, and image streams Containers, images, and image streams are important concepts to understand when you set out to create and manage containerized software. An image holds a set of software that is ready to run, while a container is a running instance of a container image. An image stream provides a way of storing different versions of the same basic image. Those different versions are represented by different tags on the same image name. 1.2. Images Containers in OpenShift Container Platform are based on OCI- or Docker-formatted container images . An image is a binary that includes all of the requirements for running a single container, as well as metadata describing its needs and capabilities. You can think of it as a packaging technology. Containers only have access to resources defined in the image unless you give the container additional access when creating it. By deploying the same image in multiple containers across multiple hosts and load balancing between them, OpenShift Container Platform can provide redundancy and horizontal scaling for a service packaged into an image. You can use the podman or docker CLI directly to build images, but OpenShift Container Platform also supplies builder images that assist with creating new images by adding your code or configuration to existing images. Because applications develop over time, a single image name can actually refer to many different versions of the same image. Each different image is referred to uniquely by its hash, a long hexadecimal number such as fd44297e2ddb050ec4f... , which is usually shortened to 12 characters, such as fd44297e2ddb . You can create , manage , and use container images. 1.3. Image registry An image registry is a content server that can store and serve container images. For example: registry.redhat.io A registry contains a collection of one or more image repositories, which contain one or more tagged images. Red Hat provides a registry at registry.redhat.io for subscribers. OpenShift Container Platform can also supply its own OpenShift image registry for managing custom container images. 1.4. Image repository An image repository is a collection of related container images and tags identifying them. For example, the OpenShift Container Platform Jenkins images are in the repository: docker.io/openshift/jenkins-2-centos7 1.5. Image tags An image tag is a label applied to a container image in a repository that distinguishes a specific image from other images in an image stream. Typically, the tag represents a version number of some sort. For example, here :v3.11.59-2 is the tag: registry.access.redhat.com/openshift3/jenkins-2-rhel7:v3.11.59-2 You can add additional tags to an image. For example, an image might be assigned the tags :v3.11.59-2 and :latest . OpenShift Container Platform provides the oc tag command, which is similar to the docker tag command, but operates on image streams instead of directly on images. 1.6. Image IDs An image ID is a SHA (Secure Hash Algorithm) code that can be used to pull an image. A SHA image ID cannot change. A specific SHA identifier always references the exact same container image content. For example: docker.io/openshift/jenkins-2-centos7@sha256:ab312bda324 1.7. Containers The basic units of OpenShift Container Platform applications are called containers. 
Linux container technologies are lightweight mechanisms for isolating running processes so that they are limited to interacting with only their designated resources. The word container is defined as a specific running or paused instance of a container image. Many application instances can be running in containers on a single host without visibility into each others' processes, files, network, and so on. Typically, each container provides a single service, often called a micro-service, such as a web server or a database, though containers can be used for arbitrary workloads. The Linux kernel has been incorporating capabilities for container technologies for years. The Docker project developed a convenient management interface for Linux containers on a host. More recently, the Open Container Initiative has developed open standards for container formats and container runtimes. OpenShift Container Platform and Kubernetes add the ability to orchestrate OCI- and Docker-formatted containers across multi-host installations. Though you do not directly interact with container runtimes when using OpenShift Container Platform, understanding their capabilities and terminology is important for understanding their role in OpenShift Container Platform and how your applications function inside of containers. Tools such as podman can be used to replace docker command-line tools for running and managing containers directly. Using podman , you can experiment with containers separately from OpenShift Container Platform. 1.8. Why use imagestreams An image stream and its associated tags provide an abstraction for referencing container images from within OpenShift Container Platform. The image stream and its tags allow you to see what images are available and ensure that you are using the specific image you need even if the image in the repository changes. Image streams do not contain actual image data, but present a single virtual view of related images, similar to an image repository. You can configure builds and deployments to watch an image stream for notifications when new images are added and react by performing a build or deployment, respectively. For example, if a deployment is using a certain image and a new version of that image is created, a deployment could be automatically performed to pick up the new version of the image. However, if the image stream tag used by the deployment or build is not updated, then even if the container image in the container image registry is updated, the build or deployment continues using the previous, presumably known good image. The source images can be stored in any of the following: OpenShift Container Platform's integrated registry. An external registry, for example registry.redhat.io or quay.io. Other image streams in the OpenShift Container Platform cluster. When you define an object that references an image stream tag, such as a build or deployment configuration, you point to an image stream tag and not the repository. When you build or deploy your application, OpenShift Container Platform queries the repository using the image stream tag to locate the associated ID of the image and uses that exact image. The image stream metadata is stored in the etcd instance along with other cluster information. Using image streams has several significant benefits: You can tag, roll back a tag, and quickly deal with images, without having to re-push using the command line. You can trigger builds and deployments when a new image is pushed to the registry.
Also, OpenShift Container Platform has generic triggers for other resources, such as Kubernetes objects. You can mark a tag for periodic re-import. If the source image has changed, that change is picked up and reflected in the image stream, which triggers the build or deployment flow, depending upon the build or deployment configuration. You can share images using fine-grained access control and quickly distribute images across your teams. If the source image changes, the image stream tag still points to a known-good version of the image, ensuring that your applications do not break unexpectedly. You can configure security around who can view and use the images through permissions on the image stream objects. Users that lack permission to read or list images on the cluster level can still retrieve the images tagged in a project using image streams. You can manage image streams, use image streams with Kubernetes resources , and trigger updates on image stream updates . 1.9. Image stream tags An image stream tag is a named pointer to an image in an image stream. An image stream tag is similar to a container image tag. 1.10. Image stream images An image stream image allows you to retrieve a specific container image from a particular image stream where it is tagged. An image stream image is an API resource object that pulls together some metadata about a particular image SHA identifier. 1.11. Image stream triggers An image stream trigger causes a specific action when an image stream tag changes. For example, importing can cause the value of the tag to change, which causes a trigger to fire when there are deployments, builds, or other resources listening for those. 1.12. How you can use the Cluster Samples Operator During the initial startup, the Operator creates the default samples resource to initiate the creation of the image streams and templates. You can use the Cluster Samples Operator to manage the sample image streams and templates stored in the openshift namespace. As a cluster administrator, you can use the Cluster Samples Operator to: Configure the Operator . Use the Operator with an alternate registry . 1.13. About templates A template is a definition of an object to be replicated. You can use templates to build and deploy configurations. 1.14. How you can use Ruby on Rails As a developer, you can use Ruby on Rails to: Write your application: Set up a database. Create a welcome page. Configure your application for OpenShift Container Platform. Store your application in Git. Deploy your application in OpenShift Container Platform: Create the database service. Create the frontend service. Create a route for your application. | [
"registry.redhat.io",
"docker.io/openshift/jenkins-2-centos7",
"registry.access.redhat.com/openshift3/jenkins-2-rhel7:v3.11.59-2",
"docker.io/openshift/jenkins-2-centos7@sha256:ab312bda324"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/images/overview-of-images |
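The overview above mentions the oc tag command and periodic re-import without showing either. A minimal sketch, assuming a project named myproject and an image stream named jenkins (both hypothetical names); the source repository is the Jenkins example repository cited earlier.

# Tag an external image into an image stream tag; the image stream is created if it does not exist.
oc tag docker.io/openshift/jenkins-2-centos7:latest myproject/jenkins:stable

# Mark the tag for periodic re-import so changes in the source image are picked up automatically.
oc tag docker.io/openshift/jenkins-2-centos7:latest myproject/jenkins:stable --scheduled

# Inspect the image stream and the SHA identifier the tag currently points to.
oc describe imagestream jenkins -n myproject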
Chapter 1. Cluster APIs | Chapter 1. Cluster APIs 1.1. IPAddress [ipam.cluster.x-k8s.io/v1beta1] Description IPAddress is the Schema for the ipaddress API. Type object 1.2. IPAddressClaim [ipam.cluster.x-k8s.io/v1beta1] Description IPAddressClaim is the Schema for the ipaddressclaim API. Type object | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/cluster_apis/cluster-apis |
Chapter 3. Quarkus CXF overview | Chapter 3. Quarkus CXF overview This chapter provides information about Quarkus CXF extensions, CXF modules and CXF annotations supported by Quarkus CXF. 3.1. Quarkus CXF The following table shows the Quarkus CXF extensions. Click the extension names to learn more about how to configure and use them, and about any known limitations. Quarkus CXF extension Support level Since Supported standards Quarkus CXF quarkus-cxf Stable 0.1.0 JAX-WS , JAXB , WS-Addressing , WS-Policy , MTOM Quarkus CXF Metrics Feature quarkus-cxf-rt-features-metrics Stable 0.14.0 Quarkus CXF OpenTelemetry quarkus-cxf-integration-tracing-opentelemetry Stable 2.7.0 Quarkus CXF WS-Security quarkus-cxf-rt-ws-security Stable 0.14.0 WS-Security , WS-SecurityPolicy Quarkus CXF WS-ReliableMessaging quarkus-cxf-rt-ws-rm Stable 1.5.3 WS-ReliableMessaging Quarkus CXF Security Token Service (STS) quarkus-cxf-services-sts Stable 1.5.3 WS-Trust Quarkus CXF HTTP Async Transport quarkus-cxf-rt-transports-http-hc5 Stable 1.1.0 Quarkus CXF XJC Plugins quarkus-cxf-xjc-plugins Stable 1.5.11 3.2. Supported CXF modules Here is a list of CXF modules supported by Quarkus CXF. You should typically not depend on these directly, but rather use some of the extensions listed above that brings the given CXF module as a transitive dependency. 3.2.1. Front ends Out of CXF front ends only the JAX-WS front end is fully supported by quarkus-cxf . The Simple front end may work in JVM mode, but it is not tested properly. We advise not to use it. 3.2.2. Data Bindings Out of CXF Data Bindings only the following ones are supported: JAXB MTOM Attachments with JAXB 3.2.3. Transports Out of CXF Transports only the following ones are supported: quarkus-cxf implements its own custom transport based on Quarkus and Vert.x for serving SOAP endpoints HTTP client via quarkus-cxf , including Basic Authentication Asynchronous Client HTTP Transport via quarkus-cxf-rt-transports-http-hc5 3.2.4. Tools wsdl2Java - see the Generate the Model classes from WSDL section of User guide java2ws - see the Generate WSDL from Java section of User guide 3.2.5. Supported SOAP Bindings All CXF WSDL Bindings are supported. In order to switch to SOAP 1.2 or to add MTOM, set quarkus.cxf.[client|endpoint]."name".soap-binding to one of the following values: Binding Property Value SOAP 1.1 (default) http://schemas.xmlsoap.org/wsdl/soap/http SOAP 1.2 http://www.w3.org/2003/05/soap/bindings/HTTP/ SOAP 1.1 with MTOM http://schemas.xmlsoap.org/wsdl/soap/http?mtom=true SOAP 1.2 with MTOM http://www.w3.org/2003/05/soap/bindings/HTTP/?mtom=true 3.3. Unsupported CXF modules Here is a list of CXF modules currently not supported by Quarkus CXF along with possible alternatives and/or reasons why the given module is not supported. 
CXF module Alternative JAX-RS cxf-rt-frontend-jaxrs cxf-rt-rs-client Use Quarkus RESTEasy Fediz Use Quarkus OpenID Connect Aegis Use JAXB and JAX-WS DOSGI Karaf JiBX Use JAXB and JAX-WS Local transport cxf-rt-transports-local Use HTTP transport JMS transport cxf-rt-transports-jms Use HTTP transport JBI cxf-rt-transports-jbi cxf-rt-bindings-jbi Deprecated in CXF use HTTP transport UDP transport cxf-rt-transports-udp Use HTTP transport Coloc transport Use HTTP transport WebSocket transport cxf-rt-transports-websocket Use HTTP transport Clustering cxf-rt-features-clustering Planned CORBA cxf-rt-bindings-corba Use JAX-WS SDO databinding cxf-rt-databinding-sdo XMLBeans Deprecated in CXF Javascript frontend Use JAX-WS JCA transport Use HTTP transport WS-Transfer runtime cxf-rt-ws-transfer Throttling cxf-rt-features-throttling Use load balancer 3.4. Supported CXF annotations Here is the status of CXF annotations on Quarkus. Unless stated otherwise, the support is available via io.quarkiverse.cxf:quarkus-cxf . Annotation Status @org.apache.cxf.feature.Features Supported @org.apache.cxf.interceptor.InInterceptors Supported @org.apache.cxf.interceptor.OutInterceptors Supported @org.apache.cxf.interceptor.OutFaultInterceptors Supported @org.apache.cxf.interceptor.InFaultInterceptors Supported @org.apache.cxf.annotations.WSDLDocumentation Supported @org.apache.cxf.annotations.WSDLDocumentationCollection Supported @org.apache.cxf.annotations.SchemaValidation Supported @org.apache.cxf.annotations.DataBinding Only the default value org.apache.cxf.jaxb.JAXBDataBinding is supported @org.apache.cxf.ext.logging.Logging Supported @org.apache.cxf.annotations.GZIP Supported @org.apache.cxf.annotations.FastInfoset Supported via com.sun.xml.fastinfoset:FastInfoset dependency @org.apache.cxf.annotations.EndpointProperty Supported @org.apache.cxf.annotations.EndpointProperties Supported @org.apache.cxf.annotations.Policy Supported @org.apache.cxf.annotations.Policies Supported @org.apache.cxf.annotations.UseAsyncMethod Supported | null | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_quarkus_reference/quarkus-cxf-overview-quarkus-cxf-overview |
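The binding table above lists the property values without showing the property in context. A minimal application.properties sketch follows; the client name myClient and the endpoint path /hello are hypothetical placeholders.

# Switch a named client to SOAP 1.2 with MTOM using the documented binding URI.
quarkus.cxf.client."myClient".soap-binding = http://www.w3.org/2003/05/soap/bindings/HTTP/?mtom=true

# The same property applies to a service endpoint; this one states the default SOAP 1.1 binding explicitly.
quarkus.cxf.endpoint."/hello".soap-binding = http://schemas.xmlsoap.org/wsdl/soap/http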
function::caller_addr | function::caller_addr Name function::caller_addr - Return caller address Synopsis Arguments None Description This function returns the address of the calling function. | [
"caller_addr:long()"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-caller-addr |
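The reference entry above has no usage example. A minimal sketch follows; the probe point vfs_read is an arbitrary illustration, and probefunc() and symname() are standard tapset helpers used here only to print a readable result (press Ctrl+C to stop).

# Print which function called vfs_read by resolving the address returned by caller_addr().
stap -e 'probe kernel.function("vfs_read") { printf("%s called from %s\n", probefunc(), symname(caller_addr())) }'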
Chapter 15. Integrating with image vulnerability scanners | Chapter 15. Integrating with image vulnerability scanners Red Hat Advanced Cluster Security for Kubernetes (RHACS) integrates with vulnerability scanners to enable you to import your container images and watch them for vulnerabilities. Supported container image registries Red Hat supports the following container image registries: Amazon Elastic Container Registry (ECR) Generic Docker registries (any generic Docker or Open Container Initiative-compliant image registries, for example, DockerHub, gcr.io , mcr.microsoft.com ) Google Container Registry Google Artifact Registry IBM Cloud Container Registry JFrog Artifactory Microsoft Azure Container Registry (ACR) Red Hat Quay Red Hat registry ( registry.redhat.io , registry.access.redhat.com ) Sonatype Nexus This enhanced support gives you greater flexibility and choice in managing your container images in your preferred registry. Supported Scanners You can set up RHACS to obtain image vulnerability data from the following commercial container image vulnerability scanners: Scanners included in RHACS Scanner V4: Beginning with RHACS version 4.4, a new scanner is introduced that is built on ClairCore , which also powers the Clair scanner. Scanner V4 supports scanning of language and OS-specific image components. You do not have to create an integration to use this scanner, but you must enable it during or after installation. For version 4.4, if you enable this scanner, you must also enable the StackRox Scanner. For more information about Scanner V4, including links to the installation documentation, see About RHACS Scanner V4 . StackRox Scanner: This scanner is the default scanner in RHACS. It originates from a fork of the Clair v2 open source scanner. Important Even if you have Scanner V4 enabled, at this time, the StackRox Scanner must still be enabled to provide scanning of RHCOS nodes and platform vulnerabilities such as Red Hat OpenShift, Kubernetes, and Istio. Support for that functionality in Scanner V4 is planned for a future release. Do not disable the StackRox Scanner. Alternative scanners Clair : As of version 4.4, you can enable Scanner V4 in RHACS to provide functionality provided by ClairCore, which also powers the Clair V4 scanner. However, you can configure Clair V4 as the scanner by configuring an integration. Google Container Analysis Red Hat Quay Important The StackRox Scanner, in conjunction with Scanner V4 (optional), is the preferred image vulnerability scanner to use with RHACS. For more information about scanning container images with the StackRox Scanner and Scanner V4, see Scanning images . If you use one of these alternative scanners in your DevOps workflow, you can use the RHACS portal to configure an integration with your vulnerability scanner. After the integration, the RHACS portal shows the image vulnerabilities and you can triage them easily. If multiple scanners are configured, RHACS tries to use the non-StackRox/RHACS and Clair scanners. If those scanners fail, RHACS tries to use a configured Clair scanner. If that fails, RHACS tries to use Scanner V4, if configured. If Scanner V4 is not configured, RHACS tries to use the StackRox Scanner. 15.1. Integrating with Clair Beginning with version 4.4, Clair scanning features are available in the new RHACS scanner, Scanner V4, and do not require a separate integration. The instructions in this section are only required if you are using the Clair V4 scanner. 
Note the following guidance: Starting with RHACS 3.74, Red Hat deprecated the CoreOS Clair integration in favor of Clair V4 integration. A separate integration was required to use the Clair V4 Scanner. Beginning with version 4.4, this integration is no longer required if you are using Scanner V4. There is no planned support for the JWT-based authentication option for Clair V4 integration in the RHACS 4.0 version. Procedure In the RHACS portal, go to Platform Configuration Integrations . Under the Image Integrations section, select Clair v4 . Click New integration . Enter the details for the following fields: Integration name : The name of the integration. Endpoint : The address of the scanner. (Optional) If you are not using a TLS certificate when connecting to the registry, select Disable TLS certificate validation (insecure) . (Optional) Click Test to test that the integration with the selected registry is working. Click Save . 15.2. Integrating with Google Container Registry You can integrate Red Hat Advanced Cluster Security for Kubernetes with Google Container Registry (GCR) for container analysis and vulnerability scanning. Prerequisites You must have a service account key for the Google Container Registry. The associated service account has access to the registry. See Configuring access control for information about granting users and other projects access to GCR. If you are using GCR Container Analysis , you have granted the following roles to the service account: Container Analysis Notes Viewer Container Analysis Occurrences Viewer Storage Object Viewer Procedure In the RHACS portal, go to Platform Configuration Integrations . Under the Image Integrations section, select Google Container Registry . The Configure image integration modal box opens. Click New Integration . Enter the details for the following fields: Integration Name : The name of the integration. Types : Select Scanner . Registry Endpoint : The address of the registry. Project : The Google Cloud project name. Service account key (JSON) Your service account key for authentication. Select Test ( checkmark icon) to test that the integration with the selected registry is working. Select Create ( save icon) to create the configuration. 15.3. Integrating with Quay Container Registry to scan images You can integrate Red Hat Advanced Cluster Security for Kubernetes with Quay Container Registry for scanning images. Prerequisites You must have an OAuth token for authentication with the Quay Container Registry to scan images. Procedure In the RHACS portal, go to Platform Configuration Integrations . Under the Image Integrations section, select Red Hat Quay.io . Click New integration . Enter the Integration name. Under Type , select Scanner . (If you are also integrating with the registry, select Scanner + Registry .) Enter information in the following fields: Endpoint : Enter the address of the registry. OAuth token : Enter the OAuth token that RHACS uses to authenticate by using the API. Optional: Robot username : If you are configuring Scanner + Registry and are accessing the registry by using a Quay robot account, enter the user name in the format <namespace>+<accountname> . Optional: Robot password : If you are configuring Scanner + Registry and are accessing the registry by using a Quay robot account, enter the password for the robot account user name. Optional: If you are not using a TLS certificate when connecting to the registry, select Disable TLS certificate validation (insecure) . 
Optional: To create the integration without testing, select Create integration without testing . Select Save . Note If you are editing a Quay integration but do not want to update your credentials, verify that Update stored credentials is not selected. | null | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.7/html/integrating/integrate-with-image-vulnerability-scanners |
Index | Index A acl mount option, Mounting a File System adding journals to a file system, Adding Journals to a File System atime, configuring updates, Configuring atime Updates mounting with noatime , Mount with noatime mounting with relatime , Mount with relatime audience, Audience B bind mount mount order, Bind Mounts and File System Mount Order bind mounts, Bind Mounts and Context-Dependent Path Names C Configuration considerations, GFS2 Configuration and Operational Considerations configuration, before, Before Setting Up GFS2 configuration, initial, Getting Started prerequisite tasks, Prerequisite Tasks Context-Dependent Path Names (CDPNs) GFS to GFS2 Conversion, Conversion of Context-Dependent Path Names D data journaling, Data Journaling debugfs, GFS2 tracepoints and the debugfs glocks File debugfs file, Troubleshooting GFS2 Performance with the GFS2 Lock Dump disk quotas additional resources, References assigning per group, Assigning Quotas per Group assigning per user, Assigning Quotas per User enabling, Configuring Disk Quotas creating quota files, Creating the Quota Database Files quotacheck, running, Creating the Quota Database Files hard limit, Assigning Quotas per User management of, Managing Disk Quotas quotacheck command, using to check, Keeping Quotas Accurate reporting, Managing Disk Quotas soft limit, Assigning Quotas per User F features, new and changed, New and Changed Features feedback contact information for this manual, We Need Feedback! file system adding journals, Adding Journals to a File System atime, configuring updates, Configuring atime Updates mounting with noatime , Mount with noatime mounting with relatime , Mount with relatime bind mounts, Bind Mounts and Context-Dependent Path Names context-dependent path names (CDPNs), Bind Mounts and Context-Dependent Path Names data journaling, Data Journaling growing, Growing a File System making, Making a File System mount order, Bind Mounts and File System Mount Order mounting, Mounting a File System , Special Considerations when Mounting GFS2 File Systems quota management, GFS2 Quota Management , Setting Up Quotas in Enforcement or Accounting Mode , GFS2 Quota Management with the gfs2_quota Command displaying quota limits, Displaying Quota Limits and Usage with the gfs2_quota Command enabling quota accounting, Enabling Quota Accounting enabling/disabling quota enforcement, Enabling/Disabling Quota Enforcement setting quotas, Setting Quotas with the gfs2_quota command synchronizing quotas, Synchronizing Quotas with the quotasync Command , Synchronizing Quotas with the gfs2_quota Command repairing, Repairing a File System suspending activity, Suspending Activity on a File System unmounting, Unmounting a File System , Special Considerations when Mounting GFS2 File Systems fsck.gfs2 command, Repairing a File System G GFS2 atime, configuring updates, Configuring atime Updates mounting with noatime , Mount with noatime mounting with relatime , Mount with relatime Configuration considerations, GFS2 Configuration and Operational Considerations managing, Managing GFS2 Operation, GFS2 Configuration and Operational Considerations quota management, GFS2 Quota Management , Setting Up Quotas in Enforcement or Accounting Mode , GFS2 Quota Management with the gfs2_quota Command displaying quota limits, Displaying Quota Limits and Usage with the gfs2_quota Command enabling quota accounting, Enabling Quota Accounting enabling/disabling quota enforcement, Enabling/Disabling Quota Enforcement setting quotas, Setting Quotas with the 
gfs2_quota command synchronizing quotas, Synchronizing Quotas with the quotasync Command , Synchronizing Quotas with the gfs2_quota Command withdraw function, The GFS2 Withdraw Function GFS2 file system maximum size, GFS2 Overview GFS2-specific options for adding journals table, Complete Usage GFS2-specific options for expanding file systems table, Complete Usage gfs2_grow command, Growing a File System gfs2_jadd command, Adding Journals to a File System gfs2_quota command, GFS2 Quota Management with the gfs2_quota Command glock, GFS2 tracepoints and the debugfs glocks File glock flags, Troubleshooting GFS2 Performance with the GFS2 Lock Dump , The glock debugfs Interface glock holder flags, Troubleshooting GFS2 Performance with the GFS2 Lock Dump , Glock Holders glock types, Troubleshooting GFS2 Performance with the GFS2 Lock Dump , The glock debugfs Interface growing a file system, Growing a File System I initial tasks setup, initial, Initial Setup Tasks introduction, Introduction audience, Audience M making a file system, Making a File System managing GFS2, Managing GFS2 maximum size, GFS2 file system, GFS2 Overview mkfs command, Making a File System mkfs.gfs2 command options table, Complete Options mount command, Mounting a File System mount table, Complete Usage mounting a file system, Mounting a File System , Special Considerations when Mounting GFS2 File Systems N node locking, GFS2 Node Locking O overview, GFS2 Overview configuration, before, Before Setting Up GFS2 features, new and changed, New and Changed Features P path names, context-dependent (CDPNs), Bind Mounts and Context-Dependent Path Names performance tuning, Performance Tuning With GFS2 Posix locking, Issues with Posix Locking preface (see introduction) prerequisite tasks configuration, initial, Prerequisite Tasks Q quota management, GFS2 Quota Management , Setting Up Quotas in Enforcement or Accounting Mode , GFS2 Quota Management with the gfs2_quota Command displaying quota limits, Displaying Quota Limits and Usage with the gfs2_quota Command enabling quota accounting, Enabling Quota Accounting enabling/disabling quota enforcement, Enabling/Disabling Quota Enforcement setting quotas, Setting Quotas with the gfs2_quota command synchronizing quotas, Synchronizing Quotas with the quotasync Command , Synchronizing Quotas with the gfs2_quota Command quota= mount option, Setting Quotas with the gfs2_quota command quotacheck , Creating the Quota Database Files quotacheck command checking quota accuracy with, Keeping Quotas Accurate quota_quantum tunable parameter, Synchronizing Quotas with the quotasync Command , Synchronizing Quotas with the gfs2_quota Command R repairing a file system, Repairing a File System S setup, initial initial tasks, Initial Setup Tasks suspending activity on a file system, Suspending Activity on a File System system hang at unmount, Special Considerations when Mounting GFS2 File Systems T tables GFS2-specific options for adding journals, Complete Usage GFS2-specific options for expanding file systems, Complete Usage mkfs.gfs2 command options, Complete Options mount options, Complete Usage tracepoints, GFS2 tracepoints and the debugfs glocks File tuning, performance, Performance Tuning With GFS2 U umount command, Unmounting a File System unmount, system hang, Special Considerations when Mounting GFS2 File Systems unmounting a file system, Unmounting a File System , Special Considerations when Mounting GFS2 File Systems W withdraw function, GFS2, The GFS2 Withdraw Function | null | 
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/global_file_system_2/ix01 |
Chapter 2. Backing up storage data from Google Persistent Disk | Chapter 2. Backing up storage data from Google Persistent Disk Red Hat recommends that you back up the data on your persistent volume claims (PVCs) regularly. Backing up your data is particularly important before deleting a user and before uninstalling OpenShift AI, as all PVCs are deleted when OpenShift AI is uninstalled. Prerequisites You have credentials for OpenShift Cluster Manager ( https://console.redhat.com/openshift/ ). You have administrator access to the OpenShift Dedicated cluster. You have credentials for the Google Cloud Platform (GCP) account that the OpenShift Dedicated cluster is deployed under. Procedure Determine the IDs of the persistent volumes (PVs) that you want to back up. In the OpenShift Dedicated web console, change into the Administrator perspective. Click Home Projects . Click the rhods-notebooks project. The Details page for the project opens. Click the PersistentVolumeClaims in the Inventory section. The PersistentVolumeClaims page opens. Note the ID of the persistent volume (PV) that you want to back up. The persistent volume (PV) IDs are required to identify the correct persistent disk to back up in your GCP instance. Locate the persistent disk containing the PVs that you want to back up. Log in to the Google Cloud console ( https://console.cloud.google.com ) and ensure that you are viewing the region that your OpenShift Dedicated cluster is deployed in. Click the navigation menu (≡) and then click Compute Engine . From the side navigation, under Storage , click Disks . The Disks page opens. In the Filter query box, enter the ID of the persistent volume (PV) that you made a note of earlier. The Disks page reloads to display the search results. Click on the disk shown and verify that any kubernetes.io/created-for/pvc/namespace tags contain the value rhods-notebooks , and any kubernetes.io/created-for/pvc/name tags match the name of the persistent volume that the persistent disk is being used for, for example, jupyterhub-nb-user1-pvc . Back up the persistent disk that contains your persistent volume (PV). Select CREATE SNAPSHOT from the top navigation. The Create a snapshot page opens. Enter a unique Name for the snapshot. Under Source disk , verify the persistent disk you want to back up is displayed. Change any optional settings as needed. Click CREATE . The snapshot of the persistent disk is created. Verification The snapshot that you created is visible on the Snapshots page in GCP. Additional resources Google Cloud documentation: Create and manage disk snapshots | null | https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/uninstalling_openshift_ai_cloud_service/backing-up-storage-data-from-google-persistent-disk_install |
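For administrators who prefer the command line, the same backup can be sketched with the gcloud CLI. This is a minimal sketch only, assuming the Google Cloud SDK is installed and authenticated against the project that hosts the cluster; the <pv_id>, <disk_name>, <zone>, and snapshot name values are placeholders rather than values taken from this procedure.

# Locate the persistent disk that backs the PV ID noted in the OpenShift console.
$ gcloud compute disks list --filter="name~'<pv_id>'"

# Create a snapshot of that disk (the CLI equivalent of CREATE SNAPSHOT in the console).
$ gcloud compute disks snapshot <disk_name> \
    --zone=<zone> \
    --snapshot-names=rhods-notebooks-backup

# Verify that the snapshot was created.
$ gcloud compute snapshots list --filter="name=rhods-notebooks-backup"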
Chapter 1. Software management tools in Red Hat Enterprise Linux 8 | Chapter 1. Software management tools in Red Hat Enterprise Linux 8 In Red Hat Enterprise Linux (RHEL) 8, use YUM to manage software. YUM is based on the DNF technology, which adds support for modular features. Note Upstream documentation identifies the technology as DNF , and the tool is referred to as DNF . As a result, some output returned by the new YUM tool in RHEL 8 mentions DNF . Although YUM is based on DNF , it is compatible with YUM used in RHEL 7. For software installation, the yum command and most of its options work the same way in RHEL 8 as they did in RHEL 7. Selected YUM plug-ins and utilities have been ported to the new DNF back end and can be installed under the same names as in RHEL 7. Packages also provide compatibility symlinks. Therefore, you can find binaries, configuration files, and directories in usual locations. Note The legacy Python API provided by YUM in RHEL 7 is no longer available. You can migrate your plug-ins and scripts to the new DNF Python API provided by YUM in RHEL 8. For more information, see DNF API Reference . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/installing_managing_and_removing_user-space_components/package-management-using-yum-in-rhel-8_using-appstream |
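As a brief illustration of the compatibility described above, the following commands are a sketch only; the package name httpd and the module name nodejs are examples chosen for illustration, not values taken from this document.

# Familiar RHEL 7-style usage continues to work on RHEL 8.
$ yum search httpd
$ yum install httpd

# The compatibility symlink points the yum binary at the DNF back end.
$ ls -l /usr/bin/yum

# The module subcommands expose the new modular features.
$ yum module list nodejs
$ yum module enable nodejs:<stream>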
Web console | Web console OpenShift Container Platform 4.9 Getting started with the web console in OpenShift Container Platform Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/web_console/index |
Chapter 5. Configuring the JAVA_HOME environment variable on RHEL | Chapter 5. Configuring the JAVA_HOME environment variable on RHEL Some applications require you to set the JAVA_HOME environment variable so that they can find the Red Hat build of OpenJDK installation. Prerequisites You know where you installed Red Hat build of OpenJDK on your system. For example, /opt/jdk/11 . Procedure Set the value of JAVA_HOME . Verify that JAVA_HOME is set correctly. Note You can make the value of JAVA_HOME persistent by exporting the environment variable in ~/.bashrc for single users or /etc/bashrc for system-wide settings. Persistent means that if you close your terminal or reboot your computer, you do not need to reset a value for the JAVA_HOME environment variable. The following example demonstrates using a text editor to enter commands for exporting JAVA_HOME in ~/.bashrc for a single user: Additional resources Be aware of the exact meaning of JAVA_HOME . For more information, see Changes/Decouple system java setting from java command setting . | [
"export JAVA_HOME=/opt/jdk/11",
"printenv | grep JAVA_HOME JAVA_HOME=/opt/jdk/11",
"> vi ~/.bash_profile export JAVA_HOME=/opt/jdk/11 export PATH=\"USDJAVA_HOME/bin:USDPATH\""
] | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/configuring_red_hat_build_of_openjdk_17_on_rhel/configuring-javahome-environment-variable-on-rhel |
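To make the value persistent for a single user, as the note above suggests, the following sketch appends the exports to ~/.bashrc; it assumes the same /opt/jdk/11 installation path used in the examples above.

$ cat >> ~/.bashrc <<'EOF'
export JAVA_HOME=/opt/jdk/11
export PATH="$JAVA_HOME/bin:$PATH"
EOF
$ source ~/.bashrc
$ printenv JAVA_HOME
/opt/jdk/11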
Developing solvers with Red Hat build of OptaPlanner in Red Hat Decision Manager | Developing solvers with Red Hat build of OptaPlanner in Red Hat Decision Manager Red Hat Decision Manager 7.13 | null | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_solvers_with_red_hat_build_of_optaplanner_in_red_hat_decision_manager/index |
7.279. xorg-x11-drv-ati | 7.279. xorg-x11-drv-ati 7.279.1. RHBA-2013:0302 - xorg-x11-drv-ati bug fix and enhancement update Updated xorg-x11-drv-ati packages that fix multiple bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The xorg-x11-drv-ati packages provide a driver for ATI graphics cards for the X.Org implementation of the X Window System. Note The xorg-x11-drv-ati packages have been upgraded to upstream version 6.99.99, which provides a number of bug fixes and enhancements over the previous version. (BZ#835218) All users of xorg-x11-drv-ati are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/xorg-x11-drv-ati
Chapter 8. Working with clusters | Chapter 8. Working with clusters 8.1. Viewing system event information in an OpenShift Container Platform cluster Events in OpenShift Container Platform are modeled based on events that happen to API objects in an OpenShift Container Platform cluster. 8.1.1. Understanding events Events allow OpenShift Container Platform to record information about real-world events in a resource-agnostic manner. They also allow developers and administrators to consume information about system components in a unified way. 8.1.2. Viewing events using the CLI You can get a list of events in a given project using the CLI. Procedure To view events in a project use the following command: USD oc get events [-n <project>] 1 1 The name of the project. For example: USD oc get events -n openshift-config Example output LAST SEEN TYPE REASON OBJECT MESSAGE 97m Normal Scheduled pod/dapi-env-test-pod Successfully assigned openshift-config/dapi-env-test-pod to ip-10-0-171-202.ec2.internal 97m Normal Pulling pod/dapi-env-test-pod pulling image "gcr.io/google_containers/busybox" 97m Normal Pulled pod/dapi-env-test-pod Successfully pulled image "gcr.io/google_containers/busybox" 97m Normal Created pod/dapi-env-test-pod Created container 9m5s Warning FailedCreatePodSandBox pod/dapi-volume-test-pod Failed create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dapi-volume-test-pod_openshift-config_6bc60c1f-452e-11e9-9140-0eec59c23068_0(748c7a40db3d08c07fb4f9eba774bd5effe5f0d5090a242432a73eee66ba9e22): Multus: Err adding pod to network "openshift-sdn": cannot set "openshift-sdn" ifname to "eth0": no netns: failed to Statfs "/proc/33366/ns/net": no such file or directory 8m31s Normal Scheduled pod/dapi-volume-test-pod Successfully assigned openshift-config/dapi-volume-test-pod to ip-10-0-171-202.ec2.internal To view events in your project from the OpenShift Container Platform console. Launch the OpenShift Container Platform console. Click Home Events and select your project. Move to resource that you want to see events. For example: Home Projects <project-name> <resource-name>. Many objects, such as pods and deployments, have their own Events tab as well, which shows events related to that object. 8.1.3. List of events This section describes the events of OpenShift Container Platform. Table 8.1. Configuration events Name Description FailedValidation Failed pod configuration validation. Table 8.2. Container events Name Description BackOff Back-off restarting failed the container. Created Container created. Failed Pull/Create/Start failed. Killing Killing the container. Started Container started. Preempting Preempting other pods. ExceededGracePeriod Container runtime did not stop the pod within specified grace period. Table 8.3. Health events Name Description Unhealthy Container is unhealthy. Table 8.4. Image events Name Description BackOff Back off Ctr Start, image pull. ErrImageNeverPull The image's NeverPull Policy is violated. Failed Failed to pull the image. InspectFailed Failed to inspect the image. Pulled Successfully pulled the image or the container image is already present on the machine. Pulling Pulling the image. Table 8.5. Image Manager events Name Description FreeDiskSpaceFailed Free disk space failed. InvalidDiskCapacity Invalid disk capacity. Table 8.6. Node events Name Description FailedMount Volume mount failed. HostNetworkNotSupported Host network not supported. HostPortConflict Host/port conflict. KubeletSetupFailed Kubelet setup failed. 
NilShaper Undefined shaper. NodeNotReady Node is not ready. NodeNotSchedulable Node is not schedulable. NodeReady Node is ready. NodeSchedulable Node is schedulable. NodeSelectorMismatching Node selector mismatch. OutOfDisk Out of disk. Rebooted Node rebooted. Starting Starting kubelet. FailedAttachVolume Failed to attach volume. FailedDetachVolume Failed to detach volume. VolumeResizeFailed Failed to expand/reduce volume. VolumeResizeSuccessful Successfully expanded/reduced volume. FileSystemResizeFailed Failed to expand/reduce file system. FileSystemResizeSuccessful Successfully expanded/reduced file system. FailedUnMount Failed to unmount volume. FailedMapVolume Failed to map a volume. FailedUnmapDevice Failed unmaped device. AlreadyMountedVolume Volume is already mounted. SuccessfulDetachVolume Volume is successfully detached. SuccessfulMountVolume Volume is successfully mounted. SuccessfulUnMountVolume Volume is successfully unmounted. ContainerGCFailed Container garbage collection failed. ImageGCFailed Image garbage collection failed. FailedNodeAllocatableEnforcement Failed to enforce System Reserved Cgroup limit. NodeAllocatableEnforced Enforced System Reserved Cgroup limit. UnsupportedMountOption Unsupported mount option. SandboxChanged Pod sandbox changed. FailedCreatePodSandBox Failed to create pod sandbox. FailedPodSandBoxStatus Failed pod sandbox status. Table 8.7. Pod worker events Name Description FailedSync Pod sync failed. Table 8.8. System Events Name Description SystemOOM There is an OOM (out of memory) situation on the cluster. Table 8.9. Pod events Name Description FailedKillPod Failed to stop a pod. FailedCreatePodContainer Failed to create a pod container. Failed Failed to make pod data directories. NetworkNotReady Network is not ready. FailedCreate Error creating: <error-msg> . SuccessfulCreate Created pod: <pod-name> . FailedDelete Error deleting: <error-msg> . SuccessfulDelete Deleted pod: <pod-id> . Table 8.10. Horizontal Pod AutoScaler events Name Description SelectorRequired Selector is required. InvalidSelector Could not convert selector into a corresponding internal selector object. FailedGetObjectMetric HPA was unable to compute the replica count. InvalidMetricSourceType Unknown metric source type. ValidMetricFound HPA was able to successfully calculate a replica count. FailedConvertHPA Failed to convert the given HPA. FailedGetScale HPA controller was unable to get the target's current scale. SucceededGetScale HPA controller was able to get the target's current scale. FailedComputeMetricsReplicas Failed to compute desired number of replicas based on listed metrics. FailedRescale New size: <size> ; reason: <msg> ; error: <error-msg> . SuccessfulRescale New size: <size> ; reason: <msg> . FailedUpdateStatus Failed to update status. Table 8.11. Network events (openshift-sdn) Name Description Starting Starting OpenShift SDN. NetworkFailed The pod's network interface has been lost and the pod will be stopped. Table 8.12. Network events (kube-proxy) Name Description NeedPods The service-port <serviceName>:<port> needs pods. Table 8.13. Volume events Name Description FailedBinding There are no persistent volumes available and no storage class is set. VolumeMismatch Volume size or class is different from what is requested in claim. VolumeFailedRecycle Error creating recycler pod. VolumeRecycled Occurs when volume is recycled. RecyclerPod Occurs when pod is recycled. VolumeDelete Occurs when volume is deleted. VolumeFailedDelete Error when deleting the volume. 
ExternalProvisioning Occurs when volume for the claim is provisioned either manually or via external software. ProvisioningFailed Failed to provision volume. ProvisioningCleanupFailed Error cleaning provisioned volume. ProvisioningSucceeded Occurs when the volume is provisioned successfully. WaitForFirstConsumer Delay binding until pod scheduling. Table 8.14. Lifecycle hooks Name Description FailedPostStartHook Handler failed for pod start. FailedPreStopHook Handler failed for pre-stop. UnfinishedPreStopHook Pre-stop hook unfinished. Table 8.15. Deployments Name Description DeploymentCancellationFailed Failed to cancel deployment. DeploymentCancelled Canceled deployment. DeploymentCreated Created new replication controller. IngressIPRangeFull No available Ingress IP to allocate to service. Table 8.16. Scheduler events Name Description FailedScheduling Failed to schedule pod: <pod-namespace>/<pod-name> . This event is raised for multiple reasons, for example: AssumePodVolumes failed, Binding rejected etc. Preempted By <preemptor-namespace>/<preemptor-name> on node <node-name> . Scheduled Successfully assigned <pod-name> to <node-name> . Table 8.17. Daemon set events Name Description SelectingAll This daemon set is selecting all pods. A non-empty selector is required. FailedPlacement Failed to place pod on <node-name> . FailedDaemonPod Found failed daemon pod <pod-name> on node <node-name> , will try to kill it. Table 8.18. LoadBalancer service events Name Description CreatingLoadBalancerFailed Error creating load balancer. DeletingLoadBalancer Deleting load balancer. EnsuringLoadBalancer Ensuring load balancer. EnsuredLoadBalancer Ensured load balancer. UnAvailableLoadBalancer There are no available nodes for LoadBalancer service. LoadBalancerSourceRanges Lists the new LoadBalancerSourceRanges . For example, <old-source-range> <new-source-range> . LoadbalancerIP Lists the new IP address. For example, <old-ip> <new-ip> . ExternalIP Lists external IP address. For example, Added: <external-ip> . UID Lists the new UID. For example, <old-service-uid> <new-service-uid> . ExternalTrafficPolicy Lists the new ExternalTrafficPolicy . For example, <old-policy> <new-policy> . HealthCheckNodePort Lists the new HealthCheckNodePort . For example, <old-node-port> new-node-port> . UpdatedLoadBalancer Updated load balancer with new hosts. LoadBalancerUpdateFailed Error updating load balancer with new hosts. DeletingLoadBalancer Deleting load balancer. DeletingLoadBalancerFailed Error deleting load balancer. DeletedLoadBalancer Deleted load balancer. 8.2. Estimating the number of pods your OpenShift Container Platform nodes can hold As a cluster administrator, you can use the OpenShift Cluster Capacity Tool to view the number of pods that can be scheduled to increase the current resources before they become exhausted, and to ensure any future pods can be scheduled. This capacity comes from an individual node host in a cluster, and includes CPU, memory, disk space, and others. 8.2.1. Understanding the OpenShift Cluster Capacity Tool The OpenShift Cluster Capacity Tool simulates a sequence of scheduling decisions to determine how many instances of an input pod can be scheduled on the cluster before it is exhausted of resources to provide a more accurate estimation. Note The remaining allocatable capacity is a rough estimation, because it does not count all of the resources being distributed among nodes. 
It analyzes only the remaining resources and estimates the available capacity that is still consumable in terms of a number of instances of a pod with given requirements that can be scheduled in a cluster. Also, pods might only have scheduling support on particular sets of nodes based on its selection and affinity criteria. As a result, the estimation of which remaining pods a cluster can schedule can be difficult. You can run the OpenShift Cluster Capacity Tool as a stand-alone utility from the command line, or as a job in a pod inside an OpenShift Container Platform cluster. Running the tool as job inside of a pod enables you to run it multiple times without intervention. 8.2.2. Running the OpenShift Cluster Capacity Tool on the command line You can run the OpenShift Cluster Capacity Tool from the command line to estimate the number of pods that can be scheduled onto your cluster. You create a sample pod spec file, which the tool uses for estimating resource usage. The pod spec specifies its resource requirements as limits or requests . The cluster capacity tool takes the pod's resource requirements into account for its estimation analysis. Prerequisites Run the OpenShift Cluster Capacity Tool , which is available as a container image from the Red Hat Ecosystem Catalog. Create a sample pod spec file: Create a YAML file similar to the following: apiVersion: v1 kind: Pod metadata: name: small-pod labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v4 imagePullPolicy: Always resources: limits: cpu: 150m memory: 100Mi requests: cpu: 150m memory: 100Mi Create the cluster role: USD oc create -f <file_name>.yaml For example: USD oc create -f pod-spec.yaml Procedure To use the cluster capacity tool on the command line: From the terminal, log in to the Red Hat Registry: USD podman login registry.redhat.io Pull the cluster capacity tool image: USD podman pull registry.redhat.io/openshift4/ose-cluster-capacity Run the cluster capacity tool: USD podman run -v USDHOME/.kube:/kube:Z -v USD(pwd):/cc:Z ose-cluster-capacity \ /bin/cluster-capacity --kubeconfig /kube/config --<pod_spec>.yaml /cc/<pod_spec>.yaml \ --verbose where: <pod_spec>.yaml Specifies the pod spec to use. verbose Outputs a detailed description of how many pods can be scheduled on each node in the cluster. Example output small-pod pod requirements: - CPU: 150m - Memory: 100Mi The cluster can schedule 88 instance(s) of the pod small-pod. Termination reason: Unschedulable: 0/5 nodes are available: 2 Insufficient cpu, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate. Pod distribution among nodes: small-pod - 192.168.124.214: 45 instance(s) - 192.168.124.120: 43 instance(s) In the above example, the number of estimated pods that can be scheduled onto the cluster is 88. 8.2.3. Running the OpenShift Cluster Capacity Tool as a job inside a pod Running the OpenShift Cluster Capacity Tool as a job inside of a pod allows you to run the tool multiple times without needing user intervention. You run the OpenShift Cluster Capacity Tool as a job by using a ConfigMap object. Prerequisites Download and install OpenShift Cluster Capacity Tool . 
Procedure To run the cluster capacity tool: Create the cluster role: Create a YAML file similar to the following: kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: cluster-capacity-role rules: - apiGroups: [""] resources: ["pods", "nodes", "persistentvolumeclaims", "persistentvolumes", "services", "replicationcontrollers"] verbs: ["get", "watch", "list"] - apiGroups: ["apps"] resources: ["replicasets", "statefulsets"] verbs: ["get", "watch", "list"] - apiGroups: ["policy"] resources: ["poddisruptionbudgets"] verbs: ["get", "watch", "list"] - apiGroups: ["storage.k8s.io"] resources: ["storageclasses"] verbs: ["get", "watch", "list"] Create the cluster role by running the following command: USD oc create -f <file_name>.yaml For example: USD oc create sa cluster-capacity-sa Create the service account: USD oc create sa cluster-capacity-sa -n default Add the role to the service account: USD oc adm policy add-cluster-role-to-user cluster-capacity-role \ system:serviceaccount:<namespace>:cluster-capacity-sa where: <namespace> Specifies the namespace where the pod is located. Define and create the pod spec: Create a YAML file similar to the following: apiVersion: v1 kind: Pod metadata: name: small-pod labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v4 imagePullPolicy: Always resources: limits: cpu: 150m memory: 100Mi requests: cpu: 150m memory: 100Mi Create the pod by running the following command: USD oc create -f <file_name>.yaml For example: USD oc create -f pod.yaml Created a config map object by running the following command: USD oc create configmap cluster-capacity-configmap \ --from-file=pod.yaml=pod.yaml The cluster capacity analysis is mounted in a volume using a config map object named cluster-capacity-configmap to mount the input pod spec file pod.yaml into a volume test-volume at the path /test-pod . Create the job using the below example of a job specification file: Create a YAML file similar to the following: apiVersion: batch/v1 kind: Job metadata: name: cluster-capacity-job spec: parallelism: 1 completions: 1 template: metadata: name: cluster-capacity-pod spec: containers: - name: cluster-capacity image: openshift/origin-cluster-capacity imagePullPolicy: "Always" volumeMounts: - mountPath: /test-pod name: test-volume env: - name: CC_INCLUSTER 1 value: "true" command: - "/bin/sh" - "-ec" - | /bin/cluster-capacity --podspec=/test-pod/pod.yaml --verbose restartPolicy: "Never" serviceAccountName: cluster-capacity-sa volumes: - name: test-volume configMap: name: cluster-capacity-configmap 1 A required environment variable letting the cluster capacity tool know that it is running inside a cluster as a pod. The pod.yaml key of the ConfigMap object is the same as the Pod spec file name, though it is not required. By doing this, the input pod spec file can be accessed inside the pod as /test-pod/pod.yaml . Run the cluster capacity image as a job in a pod by running the following command: USD oc create -f cluster-capacity-job.yaml Verification Check the job logs to find the number of pods that can be scheduled in the cluster: USD oc logs jobs/cluster-capacity-job Example output small-pod pod requirements: - CPU: 150m - Memory: 100Mi The cluster can schedule 52 instance(s) of the pod small-pod. Termination reason: Unschedulable: No nodes are available that match all of the following predicates:: Insufficient cpu (2). 
Pod distribution among nodes: small-pod - 192.168.124.214: 26 instance(s) - 192.168.124.120: 26 instance(s) 8.3. Restrict resource consumption with limit ranges By default, containers run with unbounded compute resources on an OpenShift Container Platform cluster. With limit ranges, you can restrict resource consumption for specific objects in a project: pods and containers: You can set minimum and maximum requirements for CPU and memory for pods and their containers. Image streams: You can set limits on the number of images and tags in an ImageStream object. Images: You can limit the size of images that can be pushed to an internal registry. Persistent volume claims (PVC): You can restrict the size of the PVCs that can be requested. If a pod does not meet the constraints imposed by the limit range, the pod cannot be created in the namespace. 8.3.1. About limit ranges A limit range, defined by a LimitRange object, restricts resource consumption in a project. In the project you can set specific resource limits for a pod, container, image, image stream, or persistent volume claim (PVC). All requests to create and modify resources are evaluated against each LimitRange object in the project. If the resource violates any of the enumerated constraints, the resource is rejected. The following shows a limit range object for all components: pod, container, image, image stream, or PVC. You can configure limits for any or all of these components in the same object. You create a different limit range object for each project where you want to control resources. Sample limit range object for a container apiVersion: "v1" kind: "LimitRange" metadata: name: "resource-limits" spec: limits: - type: "Container" max: cpu: "2" memory: "1Gi" min: cpu: "100m" memory: "4Mi" default: cpu: "300m" memory: "200Mi" defaultRequest: cpu: "200m" memory: "100Mi" maxLimitRequestRatio: cpu: "10" 8.3.1.1. About component limits The following examples show limit range parameters for each component. The examples are broken out for clarity. You can create a single LimitRange object for any or all components as necessary. 8.3.1.1.1. Container limits A limit range allows you to specify the minimum and maximum CPU and memory that each container in a pod can request for a specific project. If a container is created in the project, the container CPU and memory requests in the Pod spec must comply with the values set in the LimitRange object. If not, the pod does not get created. The container CPU or memory request and limit must be greater than or equal to the min resource constraint for containers that are specified in the LimitRange object. The container CPU or memory request and limit must be less than or equal to the max resource constraint for containers that are specified in the LimitRange object. If the LimitRange object defines a max CPU, you do not need to define a CPU request value in the Pod spec. But you must specify a CPU limit value that satisfies the maximum CPU constraint specified in the limit range. The ratio of the container limits to requests must be less than or equal to the maxLimitRequestRatio value for containers that is specified in the LimitRange object. If the LimitRange object defines a maxLimitRequestRatio constraint, any new containers must have both a request and a limit value. OpenShift Container Platform calculates the limit-to-request ratio by dividing the limit by the request . This value should be a non-negative integer greater than 1. 
For example, if a container has cpu: 500 in the limit value, and cpu: 100 in the request value, the limit-to-request ratio for cpu is 5 . This ratio must be less than or equal to the maxLimitRequestRatio . If the Pod spec does not specify a container resource memory or limit, the default or defaultRequest CPU and memory values for containers specified in the limit range object are assigned to the container. Container LimitRange object definition apiVersion: "v1" kind: "LimitRange" metadata: name: "resource-limits" 1 spec: limits: - type: "Container" max: cpu: "2" 2 memory: "1Gi" 3 min: cpu: "100m" 4 memory: "4Mi" 5 default: cpu: "300m" 6 memory: "200Mi" 7 defaultRequest: cpu: "200m" 8 memory: "100Mi" 9 maxLimitRequestRatio: cpu: "10" 10 1 The name of the LimitRange object. 2 The maximum amount of CPU that a single container in a pod can request. 3 The maximum amount of memory that a single container in a pod can request. 4 The minimum amount of CPU that a single container in a pod can request. 5 The minimum amount of memory that a single container in a pod can request. 6 The default amount of CPU that a container can use if not specified in the Pod spec. 7 The default amount of memory that a container can use if not specified in the Pod spec. 8 The default amount of CPU that a container can request if not specified in the Pod spec. 9 The default amount of memory that a container can request if not specified in the Pod spec. 10 The maximum limit-to-request ratio for a container. 8.3.1.1.2. Pod limits A limit range allows you to specify the minimum and maximum CPU and memory limits for all containers across a pod in a given project. To create a container in the project, the container CPU and memory requests in the Pod spec must comply with the values set in the LimitRange object. If not, the pod does not get created. If the Pod spec does not specify a container resource memory or limit, the default or defaultRequest CPU and memory values for containers specified in the limit range object are assigned to the container. Across all containers in a pod, the following must hold true: The container CPU or memory request and limit must be greater than or equal to the min resource constraints for pods that are specified in the LimitRange object. The container CPU or memory request and limit must be less than or equal to the max resource constraints for pods that are specified in the LimitRange object. The ratio of the container limits to requests must be less than or equal to the maxLimitRequestRatio constraint specified in the LimitRange object. Pod LimitRange object definition apiVersion: "v1" kind: "LimitRange" metadata: name: "resource-limits" 1 spec: limits: - type: "Pod" max: cpu: "2" 2 memory: "1Gi" 3 min: cpu: "200m" 4 memory: "6Mi" 5 maxLimitRequestRatio: cpu: "10" 6 1 The name of the limit range object. 2 The maximum amount of CPU that a pod can request across all containers. 3 The maximum amount of memory that a pod can request across all containers. 4 The minimum amount of CPU that a pod can request across all containers. 5 The minimum amount of memory that a pod can request across all containers. 6 The maximum limit-to-request ratio for a container. 8.3.1.1.3. Image limits A LimitRange object allows you to specify the maximum size of an image that can be pushed to an OpenShift image registry. 
When pushing images to an OpenShift image registry, the following must hold true: The size of the image must be less than or equal to the max size for images that is specified in the LimitRange object. Image LimitRange object definition apiVersion: "v1" kind: "LimitRange" metadata: name: "resource-limits" 1 spec: limits: - type: openshift.io/Image max: storage: 1Gi 2 1 The name of the LimitRange object. 2 The maximum size of an image that can be pushed to an OpenShift image registry. Note To prevent blobs that exceed the limit from being uploaded to the registry, the registry must be configured to enforce quotas. Warning The image size is not always available in the manifest of an uploaded image. This is especially the case for images built with Docker 1.10 or higher and pushed to a v2 registry. If such an image is pulled with an older Docker daemon, the image manifest is converted by the registry to schema v1 lacking all the size information. No storage limit set on images prevent it from being uploaded. The issue is being addressed. 8.3.1.1.4. Image stream limits A LimitRange object allows you to specify limits for image streams. For each image stream, the following must hold true: The number of image tags in an ImageStream specification must be less than or equal to the openshift.io/image-tags constraint in the LimitRange object. The number of unique references to images in an ImageStream specification must be less than or equal to the openshift.io/images constraint in the limit range object. Imagestream LimitRange object definition apiVersion: "v1" kind: "LimitRange" metadata: name: "resource-limits" 1 spec: limits: - type: openshift.io/ImageStream max: openshift.io/image-tags: 20 2 openshift.io/images: 30 3 1 The name of the LimitRange object. 2 The maximum number of unique image tags in the imagestream.spec.tags parameter in imagestream spec. 3 The maximum number of unique image references in the imagestream.status.tags parameter in the imagestream spec. The openshift.io/image-tags resource represents unique image references. Possible references are an ImageStreamTag , an ImageStreamImage and a DockerImage . Tags can be created using the oc tag and oc import-image commands. No distinction is made between internal and external references. However, each unique reference tagged in an ImageStream specification is counted just once. It does not restrict pushes to an internal container image registry in any way, but is useful for tag restriction. The openshift.io/images resource represents unique image names recorded in image stream status. It allows for restriction of a number of images that can be pushed to the OpenShift image registry. Internal and external references are not distinguished. 8.3.1.1.5. Persistent volume claim limits A LimitRange object allows you to restrict the storage requested in a persistent volume claim (PVC). Across all persistent volume claims in a project, the following must hold true: The resource request in a persistent volume claim (PVC) must be greater than or equal the min constraint for PVCs that is specified in the LimitRange object. The resource request in a persistent volume claim (PVC) must be less than or equal the max constraint for PVCs that is specified in the LimitRange object. PVC LimitRange object definition apiVersion: "v1" kind: "LimitRange" metadata: name: "resource-limits" 1 spec: limits: - type: "PersistentVolumeClaim" min: storage: "2Gi" 2 max: storage: "50Gi" 3 1 The name of the LimitRange object. 
2 The minimum amount of storage that can be requested in a persistent volume claim. 3 The maximum amount of storage that can be requested in a persistent volume claim. 8.3.2. Creating a Limit Range To apply a limit range to a project: Create a LimitRange object with your required specifications: apiVersion: "v1" kind: "LimitRange" metadata: name: "resource-limits" 1 spec: limits: - type: "Pod" 2 max: cpu: "2" memory: "1Gi" min: cpu: "200m" memory: "6Mi" - type: "Container" 3 max: cpu: "2" memory: "1Gi" min: cpu: "100m" memory: "4Mi" default: 4 cpu: "300m" memory: "200Mi" defaultRequest: 5 cpu: "200m" memory: "100Mi" maxLimitRequestRatio: 6 cpu: "10" - type: openshift.io/Image 7 max: storage: 1Gi - type: openshift.io/ImageStream 8 max: openshift.io/image-tags: 20 openshift.io/images: 30 - type: "PersistentVolumeClaim" 9 min: storage: "2Gi" max: storage: "50Gi" 1 Specify a name for the LimitRange object. 2 To set limits for a pod, specify the minimum and maximum CPU and memory requests as needed. 3 To set limits for a container, specify the minimum and maximum CPU and memory requests as needed. 4 Optional. For a container, specify the default amount of CPU or memory that a container can use, if not specified in the Pod spec. 5 Optional. For a container, specify the default amount of CPU or memory that a container can request, if not specified in the Pod spec. 6 Optional. For a container, specify the maximum limit-to-request ratio that can be specified in the Pod spec. 7 To set limits for an Image object, set the maximum size of an image that can be pushed to an OpenShift image registry. 8 To set limits for an image stream, set the maximum number of image tags and references that can be in the ImageStream object file, as needed. 9 To set limits for a persistent volume claim, set the minimum and maximum amount of storage that can be requested. Create the object: USD oc create -f <limit_range_file> -n <project> 1 1 Specify the name of the YAML file you created and the project where you want the limits to apply. 8.3.3. Viewing a limit You can view any limits defined in a project by navigating in the web console to the project's Quota page. You can also use the CLI to view limit range details: Get the list of LimitRange object defined in the project. For example, for a project called demoproject : USD oc get limits -n demoproject NAME CREATED AT resource-limits 2020-07-15T17:14:23Z Describe the LimitRange object you are interested in, for example the resource-limits limit range: USD oc describe limits resource-limits -n demoproject Name: resource-limits Namespace: demoproject Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio ---- -------- --- --- --------------- ------------- ----------------------- Pod cpu 200m 2 - - - Pod memory 6Mi 1Gi - - - Container cpu 100m 2 200m 300m 10 Container memory 4Mi 1Gi 100Mi 200Mi - openshift.io/Image storage - 1Gi - - - openshift.io/ImageStream openshift.io/image - 12 - - - openshift.io/ImageStream openshift.io/image-tags - 10 - - - PersistentVolumeClaim storage - 50Gi - - - 8.3.4. Deleting a Limit Range To remove any active LimitRange object to no longer enforce the limits in a project: Run the following command: USD oc delete limits <limit_name> 8.4. 
Configuring cluster memory to meet container memory and risk requirements As a cluster administrator, you can help your clusters operate efficiently through managing application memory by: Determining the memory and risk requirements of a containerized application component and configuring the container memory parameters to suit those requirements. Configuring containerized application runtimes (for example, OpenJDK) to adhere optimally to the configured container memory parameters. Diagnosing and resolving memory-related error conditions associated with running in a container. 8.4.1. Understanding managing application memory It is recommended to fully read the overview of how OpenShift Container Platform manages Compute Resources before proceeding. For each kind of resource (memory, CPU, storage), OpenShift Container Platform allows optional request and limit values to be placed on each container in a pod. Note the following about memory requests and memory limits: Memory request The memory request value, if specified, influences the OpenShift Container Platform scheduler. The scheduler considers the memory request when scheduling a container to a node, then fences off the requested memory on the chosen node for the use of the container. If a node's memory is exhausted, OpenShift Container Platform prioritizes evicting its containers whose memory usage most exceeds their memory request. In serious cases of memory exhaustion, the node OOM killer may select and kill a process in a container based on a similar metric. The cluster administrator can assign quota or assign default values for the memory request value. The cluster administrator can override the memory request values that a developer specifies, to manage cluster overcommit. Memory limit The memory limit value, if specified, provides a hard limit on the memory that can be allocated across all the processes in a container. If the memory allocated by all of the processes in a container exceeds the memory limit, the node Out of Memory (OOM) killer will immediately select and kill a process in the container. If both memory request and limit are specified, the memory limit value must be greater than or equal to the memory request. The cluster administrator can assign quota or assign default values for the memory limit value. The minimum memory limit is 12 MB. If a container fails to start due to a Cannot allocate memory pod event, the memory limit is too low. Either increase or remove the memory limit. Removing the limit allows pods to consume unbounded node resources. 8.4.1.1. Managing application memory strategy The steps for sizing application memory on OpenShift Container Platform are as follows: Determine expected container memory usage Determine expected mean and peak container memory usage, empirically if necessary (for example, by separate load testing). Remember to consider all the processes that may potentially run in parallel in the container: for example, does the main application spawn any ancillary scripts? Determine risk appetite Determine risk appetite for eviction. If the risk appetite is low, the container should request memory according to the expected peak usage plus a percentage safety margin. If the risk appetite is higher, it may be more appropriate to request memory according to the expected mean usage. Set container memory request Set container memory request based on the above. The more accurately the request represents the application memory usage, the better. 
If the request is too high, cluster and quota usage will be inefficient. If the request is too low, the chances of application eviction increase. Set container memory limit, if required Set container memory limit, if required. Setting a limit has the effect of immediately killing a container process if the combined memory usage of all processes in the container exceeds the limit, and is therefore a mixed blessing. On the one hand, it may make unanticipated excess memory usage obvious early ("fail fast"); on the other hand it also terminates processes abruptly. Note that some OpenShift Container Platform clusters may require a limit value to be set; some may override the request based on the limit; and some application images rely on a limit value being set as this is easier to detect than a request value. If the memory limit is set, it should not be set to less than the expected peak container memory usage plus a percentage safety margin. Ensure application is tuned Ensure application is tuned with respect to configured request and limit values, if appropriate. This step is particularly relevant to applications which pool memory, such as the JVM. The rest of this page discusses this. Additional resources Understanding compute resources and containers 8.4.2. Understanding OpenJDK settings for OpenShift Container Platform The default OpenJDK settings do not work well with containerized environments. As a result, some additional Java memory settings must always be provided whenever running the OpenJDK in a container. The JVM memory layout is complex, version dependent, and describing it in detail is beyond the scope of this documentation. However, as a starting point for running OpenJDK in a container, at least the following three memory-related tasks are key: Overriding the JVM maximum heap size. Encouraging the JVM to release unused memory to the operating system, if appropriate. Ensuring all JVM processes within a container are appropriately configured. Optimally tuning JVM workloads for running in a container is beyond the scope of this documentation, and may involve setting multiple additional JVM options. 8.4.2.1. Understanding how to override the JVM maximum heap size For many Java workloads, the JVM heap is the largest single consumer of memory. Currently, the OpenJDK defaults to allowing up to 1/4 (1/ -XX:MaxRAMFraction ) of the compute node's memory to be used for the heap, regardless of whether the OpenJDK is running in a container or not. It is therefore essential to override this behavior, especially if a container memory limit is also set. There are at least two ways the above can be achieved: If the container memory limit is set and the experimental options are supported by the JVM, set -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap . Note The UseCGroupMemoryLimitForHeap option has been removed in JDK 11. Use -XX:+UseContainerSupport instead. This sets -XX:MaxRAM to the container memory limit, and the maximum heap size ( -XX:MaxHeapSize / -Xmx ) to 1/ -XX:MaxRAMFraction (1/4 by default). Directly override one of -XX:MaxRAM , -XX:MaxHeapSize or -Xmx . This option involves hard-coding a value, but has the advantage of allowing a safety margin to be calculated. 8.4.2.2. Understanding how to encourage the JVM to release unused memory to the operating system By default, the OpenJDK does not aggressively return unused memory to the operating system. 
This may be appropriate for many containerized Java workloads, but notable exceptions include workloads where additional active processes co-exist with a JVM within a container, whether those additional processes are native, additional JVMs, or a combination of the two. Java-based agents can use the following JVM arguments to encourage the JVM to release unused memory to the operating system: -XX:+UseParallelGC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90. These arguments are intended to return heap memory to the operating system whenever allocated memory exceeds 110% of in-use memory ( -XX:MaxHeapFreeRatio ), spending up to 20% of CPU time in the garbage collector ( -XX:GCTimeRatio ). At no time will the application heap allocation be less than the initial heap allocation (overridden by -XX:InitialHeapSize / -Xms ). Detailed additional information is available Tuning Java's footprint in OpenShift (Part 1) , Tuning Java's footprint in OpenShift (Part 2) , and at OpenJDK and Containers . 8.4.2.3. Understanding how to ensure all JVM processes within a container are appropriately configured In the case that multiple JVMs run in the same container, it is essential to ensure that they are all configured appropriately. For many workloads it will be necessary to grant each JVM a percentage memory budget, leaving a perhaps substantial additional safety margin. Many Java tools use different environment variables ( JAVA_OPTS , GRADLE_OPTS , and so on) to configure their JVMs and it can be challenging to ensure that the right settings are being passed to the right JVM. The JAVA_TOOL_OPTIONS environment variable is always respected by the OpenJDK, and values specified in JAVA_TOOL_OPTIONS will be overridden by other options specified on the JVM command line. By default, to ensure that these options are used by default for all JVM workloads run in the Java-based agent image, the OpenShift Container Platform Jenkins Maven agent image sets: JAVA_TOOL_OPTIONS="-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Dsun.zip.disableMemoryMapping=true" Note The UseCGroupMemoryLimitForHeap option has been removed in JDK 11. Use -XX:+UseContainerSupport instead. This does not guarantee that additional options are not required, but is intended to be a helpful starting point. 8.4.3. Finding the memory request and limit from within a pod An application wishing to dynamically discover its memory request and limit from within a pod should use the Downward API. Procedure Configure the pod to add the MEMORY_REQUEST and MEMORY_LIMIT stanzas: Create a YAML file similar to the following: apiVersion: v1 kind: Pod metadata: name: test spec: containers: - name: test image: fedora:latest command: - sleep - "3600" env: - name: MEMORY_REQUEST 1 valueFrom: resourceFieldRef: containerName: test resource: requests.memory - name: MEMORY_LIMIT 2 valueFrom: resourceFieldRef: containerName: test resource: limits.memory resources: requests: memory: 384Mi limits: memory: 512Mi 1 Add this stanza to discover the application memory request value. 2 Add this stanza to discover the application memory limit value. 
Create the pod by running the following command: USD oc create -f <file-name>.yaml Verification Access the pod using a remote shell: USD oc rsh test Check that the requested values were applied: USD env | grep MEMORY | sort Example output MEMORY_LIMIT=536870912 MEMORY_REQUEST=402653184 Note The memory limit value can also be read from inside the container by the /sys/fs/cgroup/memory/memory.limit_in_bytes file. 8.4.4. Understanding OOM kill policy OpenShift Container Platform can kill a process in a container if the total memory usage of all the processes in the container exceeds the memory limit, or in serious cases of node memory exhaustion. When a process is Out of Memory (OOM) killed, this might result in the container exiting immediately. If the container PID 1 process receives the SIGKILL , the container will exit immediately. Otherwise, the container behavior is dependent on the behavior of the other processes. For example, a container process exited with code 137, indicating it received a SIGKILL signal. If the container does not exit immediately, an OOM kill is detectable as follows: Access the pod using a remote shell: # oc rsh test Run the following command to see the current OOM kill count in /sys/fs/cgroup/memory/memory.oom_control : USD grep '^oom_kill ' /sys/fs/cgroup/memory/memory.oom_control Example output oom_kill 0 Run the following command to provoke an OOM kill: USD sed -e '' </dev/zero Example output Killed Run the following command to view the exit status of the sed command: USD echo USD? Example output 137 The 137 code indicates the container process exited with code 137, indicating it received a SIGKILL signal. Run the following command to see that the OOM kill counter in /sys/fs/cgroup/memory/memory.oom_control incremented: USD grep '^oom_kill ' /sys/fs/cgroup/memory/memory.oom_control Example output oom_kill 1 If one or more processes in a pod are OOM killed, when the pod subsequently exits, whether immediately or not, it will have phase Failed and reason OOMKilled . An OOM-killed pod might be restarted depending on the value of restartPolicy . If not restarted, controllers such as the replication controller will notice the pod's failed status and create a new pod to replace the old one. Use the following command to get the pod status: USD oc get pod test Example output NAME READY STATUS RESTARTS AGE test 0/1 OOMKilled 0 1m If the pod has not restarted, run the following command to view the pod: USD oc get pod test -o yaml Example output ... status: containerStatuses: - name: test ready: false restartCount: 0 state: terminated: exitCode: 137 reason: OOMKilled phase: Failed If restarted, run the following command to view the pod: USD oc get pod test -o yaml Example output ... status: containerStatuses: - name: test ready: true restartCount: 1 lastState: terminated: exitCode: 137 reason: OOMKilled state: running: phase: Running 8.4.5. Understanding pod eviction OpenShift Container Platform may evict a pod from its node when the node's memory is exhausted. Depending on the extent of memory exhaustion, the eviction may or may not be graceful. Graceful eviction implies the main process (PID 1) of each container receiving a SIGTERM signal, then some time later a SIGKILL signal if the process has not exited already. Non-graceful eviction implies the main process of each container immediately receiving a SIGKILL signal. An evicted pod has phase Failed and reason Evicted . It will not be restarted, regardless of the value of restartPolicy .
However, controllers such as the replication controller will notice the pod's failed status and create a new pod to replace the old one. USD oc get pod test Example output NAME READY STATUS RESTARTS AGE test 0/1 Evicted 0 1m USD oc get pod test -o yaml Example output ... status: message: 'Pod The node was low on resource: [MemoryPressure].' phase: Failed reason: Evicted 8.5. Configuring your cluster to place pods on overcommitted nodes In an overcommitted state, the sum of the container compute resource requests and limits exceeds the resources available on the system. For example, you might want to use overcommitment in development environments where a trade-off of guaranteed performance for capacity is acceptable. Containers can specify compute resource requests and limits. Requests are used for scheduling your container and provide a minimum service guarantee. Limits constrain the amount of compute resource that can be consumed on your node. The scheduler attempts to optimize the compute resource use across all nodes in your cluster. It places pods onto specific nodes, taking the pods' compute resource requests and nodes' available capacity into consideration. OpenShift Container Platform administrators can control the level of overcommit and manage container density on developer containers by using the ClusterResourceOverride Operator . Note In OpenShift Container Platform, you must enable cluster-level overcommit. Node overcommitment is enabled by default. See Disabling overcommitment for a node . 8.5.1. Resource requests and overcommitment For each compute resource, a container may specify a resource request and limit. Scheduling decisions are made based on the request to ensure that a node has enough capacity available to meet the requested value. If a container specifies limits, but omits requests, the requests are defaulted to the limits. A container is not able to exceed the specified limit on the node. The enforcement of limits is dependent upon the compute resource type. If a container makes no request or limit, the container is scheduled to a node with no resource guarantees. In practice, the container is able to consume as much of the specified resource as is available with the lowest local priority. In low resource situations, containers that specify no resource requests are given the lowest quality of service. Scheduling is based on resources requested, while quota and hard limits refer to resource limits, which can be set higher than requested resources. The difference between request and limit determines the level of overcommit; for instance, if a container is given a memory request of 1Gi and a memory limit of 2Gi, it is scheduled based on the 1Gi request being available on the node, but could use up to 2Gi; so it is 100% overcommitted. 8.5.2. Cluster-level overcommit using the Cluster Resource Override Operator The Cluster Resource Override Operator is an admission webhook that allows you to control the level of overcommit and manage container density across all the nodes in your cluster. The Operator controls how nodes in specific projects can exceed defined memory and CPU limits. The Operator modifies the ratio between the requests and limits that are set on developer containers. In conjunction with a per-project limit range that specifies limits and defaults, you can achieve the desired level of overcommit. You must install the Cluster Resource Override Operator by using the OpenShift Container Platform console or CLI as shown in the following sections. 
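To make the request-to-limit arithmetic described earlier in this section concrete, the following sketch creates a hypothetical pod (the name and image are placeholders, not taken from this document) whose memory request is 1Gi and memory limit is 2Gi, which corresponds to the 100% memory overcommit discussed above.

$ oc create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: overcommit-example            # hypothetical name, for illustration only
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest   # placeholder image
    resources:
      requests:
        memory: "1Gi"                 # the scheduler places the pod based on this value
      limits:
        memory: "2Gi"                 # the pod can consume up to this much: 100% overcommit
EOF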
After you deploy the Cluster Resource Override Operator, the Operator modifies all new pods in specific namespaces. The Operator does not edit pods that existed before you deployed the Operator. During the installation, you create a ClusterResourceOverride custom resource (CR), where you set the level of overcommit, as shown in the following example: apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4 # ... 1 The name must be cluster . 2 Optional. If a container memory limit has been specified or defaulted, the memory request is overridden to this percentage of the limit, between 1-100. The default is 50. 3 Optional. If a container CPU limit has been specified or defaulted, the CPU request is overridden to this percentage of the limit, between 1-100. The default is 25. 4 Optional. If a container memory limit has been specified or defaulted, the CPU limit is overridden to a percentage of the memory limit, if specified. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request (if configured). The default is 200. Note The Cluster Resource Override Operator overrides have no effect if limits have not been set on containers. Create a LimitRange object with default limits per individual project or configure limits in Pod specs for the overrides to apply. When configured, you can enable overrides on a per-project basis by applying the following label to the Namespace object for each project where you want the overrides to apply. For example, you can configure override so that infrastructure components are not subject to the overrides. apiVersion: v1 kind: Namespace metadata: # ... labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: "true" # ... The Operator watches for the ClusterResourceOverride CR and ensures that the ClusterResourceOverride admission webhook is installed into the same namespace as the operator. For example, a pod has the following resources limits: apiVersion: v1 kind: Pod metadata: name: my-pod namespace: my-namespace # ... spec: containers: - name: hello-openshift image: openshift/hello-openshift resources: limits: memory: "512Mi" cpu: "2000m" # ... The Cluster Resource Override Operator intercepts the original pod request, then overrides the resources according to the configuration set in the ClusterResourceOverride object. apiVersion: v1 kind: Pod metadata: name: my-pod namespace: my-namespace # ... spec: containers: - image: openshift/hello-openshift name: hello-openshift resources: limits: cpu: "1" 1 memory: 512Mi requests: cpu: 250m 2 memory: 256Mi # ... 1 The CPU limit has been overridden to 1 because the limitCPUToMemoryPercent parameter is set to 200 in the ClusterResourceOverride object. As such, 200% of the memory limit, 512Mi in CPU terms, is 1 CPU core. 2 The CPU request is now 250m because the cpuRequestToLimit is set to 25 in the ClusterResourceOverride object. As such, 25% of the 1 CPU core is 250m. 8.5.2.1. Installing the Cluster Resource Override Operator using the web console You can use the OpenShift Container Platform web console to install the Cluster Resource Override Operator to help control overcommit in your cluster. Prerequisites The Cluster Resource Override Operator has no effect if limits have not been set on containers. 
You must specify default limits for a project using a LimitRange object or configure limits in Pod specs for the overrides to apply. Procedure To install the Cluster Resource Override Operator using the OpenShift Container Platform web console: In the OpenShift Container Platform web console, navigate to Home Projects Click Create Project . Specify clusterresourceoverride-operator as the name of the project. Click Create . Navigate to Operators OperatorHub . Choose ClusterResourceOverride Operator from the list of available Operators and click Install . On the Install Operator page, make sure A specific Namespace on the cluster is selected for Installation Mode . Make sure clusterresourceoverride-operator is selected for Installed Namespace . Select an Update Channel and Approval Strategy . Click Install . On the Installed Operators page, click ClusterResourceOverride . On the ClusterResourceOverride Operator details page, click Create ClusterResourceOverride . On the Create ClusterResourceOverride page, click YAML view and edit the YAML template to set the overcommit values as needed: apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4 # ... 1 The name must be cluster . 2 Optional. Specify the percentage to override the container memory limit, if used, between 1-100. The default is 50. 3 Optional. Specify the percentage to override the container CPU limit, if used, between 1-100. The default is 25. 4 Optional. Specify the percentage to override the container memory limit, if used. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request, if configured. The default is 200. Click Create . Check the current state of the admission webhook by checking the status of the cluster custom resource: On the ClusterResourceOverride Operator page, click cluster . On the ClusterResourceOverride Details page, click YAML . The mutatingWebhookConfigurationRef section appears when the webhook is called. apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"operator.autoscaling.openshift.io/v1","kind":"ClusterResourceOverride","metadata":{"annotations":{},"name":"cluster"},"spec":{"podResourceOverride":{"spec":{"cpuRequestToLimitPercent":25,"limitCPUToMemoryPercent":200,"memoryRequestToLimitPercent":50}}}} creationTimestamp: "2019-12-18T22:35:02Z" generation: 1 name: cluster resourceVersion: "127622" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: # ... mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: "127621" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3 # ... 1 Reference to the ClusterResourceOverride admission webhook. 8.5.2.2. Installing the Cluster Resource Override Operator using the CLI You can use the OpenShift Container Platform CLI to install the Cluster Resource Override Operator to help control overcommit in your cluster. 
Prerequisites The Cluster Resource Override Operator has no effect if limits have not been set on containers. You must specify default limits for a project using a LimitRange object or configure limits in Pod specs for the overrides to apply. Procedure To install the Cluster Resource Override Operator using the CLI: Create a namespace for the Cluster Resource Override Operator: Create a Namespace object YAML file (for example, cro-namespace.yaml ) for the Cluster Resource Override Operator: apiVersion: v1 kind: Namespace metadata: name: clusterresourceoverride-operator Create the namespace: USD oc create -f <file-name>.yaml For example: USD oc create -f cro-namespace.yaml Create an Operator group: Create an OperatorGroup object YAML file (for example, cro-og.yaml) for the Cluster Resource Override Operator: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: clusterresourceoverride-operator namespace: clusterresourceoverride-operator spec: targetNamespaces: - clusterresourceoverride-operator Create the Operator Group: USD oc create -f <file-name>.yaml For example: USD oc create -f cro-og.yaml Create a subscription: Create a Subscription object YAML file (for example, cro-sub.yaml) for the Cluster Resource Override Operator: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: clusterresourceoverride namespace: clusterresourceoverride-operator spec: channel: "stable" name: clusterresourceoverride source: redhat-operators sourceNamespace: openshift-marketplace Create the subscription: USD oc create -f <file-name>.yaml For example: USD oc create -f cro-sub.yaml Create a ClusterResourceOverride custom resource (CR) object in the clusterresourceoverride-operator namespace: Change to the clusterresourceoverride-operator namespace. USD oc project clusterresourceoverride-operator Create a ClusterResourceOverride object YAML file (for example, cro-cr.yaml) for the Cluster Resource Override Operator: apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4 1 The name must be cluster . 2 Optional. Specify the percentage to override the container memory limit, if used, between 1-100. The default is 50. 3 Optional. Specify the percentage to override the container CPU limit, if used, between 1-100. The default is 25. 4 Optional. Specify the percentage to override the container memory limit, if used. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request, if configured. The default is 200. Create the ClusterResourceOverride object: USD oc create -f <file-name>.yaml For example: USD oc create -f cro-cr.yaml Verify the current state of the admission webhook by checking the status of the cluster custom resource. USD oc get clusterresourceoverride cluster -n clusterresourceoverride-operator -o yaml The mutatingWebhookConfigurationRef section appears when the webhook is called. 
Example output apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"operator.autoscaling.openshift.io/v1","kind":"ClusterResourceOverride","metadata":{"annotations":{},"name":"cluster"},"spec":{"podResourceOverride":{"spec":{"cpuRequestToLimitPercent":25,"limitCPUToMemoryPercent":200,"memoryRequestToLimitPercent":50}}}} creationTimestamp: "2019-12-18T22:35:02Z" generation: 1 name: cluster resourceVersion: "127622" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: # ... mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: "127621" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3 # ... 1 Reference to the ClusterResourceOverride admission webhook. 8.5.2.3. Configuring cluster-level overcommit The Cluster Resource Override Operator requires a ClusterResourceOverride custom resource (CR) and a label for each project where you want the Operator to control overcommit. Prerequisites The Cluster Resource Override Operator has no effect if limits have not been set on containers. You must specify default limits for a project using a LimitRange object or configure limits in Pod specs for the overrides to apply. Procedure To modify cluster-level overcommit: Edit the ClusterResourceOverride CR: apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 1 cpuRequestToLimitPercent: 25 2 limitCPUToMemoryPercent: 200 3 # ... 1 Optional. Specify the percentage to override the container memory limit, if used, between 1-100. The default is 50. 2 Optional. Specify the percentage to override the container CPU limit, if used, between 1-100. The default is 25. 3 Optional. Specify the percentage to override the container memory limit, if used. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request, if configured. The default is 200. Ensure the following label has been added to the Namespace object for each project where you want the Cluster Resource Override Operator to control overcommit: apiVersion: v1 kind: Namespace metadata: # ... labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: "true" 1 # ... 1 Add this label to each project. 8.5.3. Node-level overcommit You can use various ways to control overcommit on specific nodes, such as quality of service (QOS) guarantees, CPU limits, or reserve resources. You can also disable overcommit for specific nodes and specific projects. 8.5.3.1. Understanding compute resources and containers The node-enforced behavior for compute resources is specific to the resource type. 8.5.3.1.1. Understanding container CPU requests A container is guaranteed the amount of CPU it requests and is additionally able to consume excess CPU available on the node, up to any limit specified by the container. If multiple containers are attempting to use excess CPU, CPU time is distributed based on the amount of CPU requested by each container. 
For example, if one container requested 500m of CPU time and another container requested 250m of CPU time, then any extra CPU time available on the node is distributed among the containers in a 2:1 ratio. If a container specified a limit, it will be throttled not to use more CPU than the specified limit. CPU requests are enforced using the CFS shares support in the Linux kernel. By default, CPU limits are enforced using the CFS quota support in the Linux kernel over a 100ms measuring interval, though this can be disabled. 8.5.3.1.2. Understanding container memory requests A container is guaranteed the amount of memory it requests. A container can use more memory than requested, but once it exceeds its requested amount, it could be terminated in a low memory situation on the node. If a container uses less memory than requested, it will not be terminated unless system tasks or daemons need more memory than was accounted for in the node's resource reservation. If a container specifies a limit on memory, it is immediately terminated if it exceeds the limit amount. 8.5.3.2. Understanding overcommitment and quality of service classes A node is overcommitted when it has a pod scheduled that makes no request, or when the sum of limits across all pods on that node exceeds available machine capacity. In an overcommitted environment, it is possible that the pods on the node will attempt to use more compute resource than is available at any given point in time. When this occurs, the node must give priority to one pod over another. The facility used to make this decision is referred to as a Quality of Service (QoS) Class. A pod is designated as one of three QoS classes with decreasing order of priority: Table 8.19. Quality of Service Classes Priority Class Name Description 1 (highest) Guaranteed If limits and optionally requests are set (not equal to 0) for all resources and they are equal, then the pod is classified as Guaranteed . 2 Burstable If requests and optionally limits are set (not equal to 0) for all resources, and they are not equal, then the pod is classified as Burstable . 3 (lowest) BestEffort If requests and limits are not set for any of the resources, then the pod is classified as BestEffort . Memory is an incompressible resource, so in low memory situations, containers that have the lowest priority are terminated first: Guaranteed containers are considered top priority, and are guaranteed to only be terminated if they exceed their limits, or if the system is under memory pressure and there are no lower priority containers that can be evicted. Burstable containers under system memory pressure are more likely to be terminated once they exceed their requests and no other BestEffort containers exist. BestEffort containers are treated with the lowest priority. Processes in these containers are first to be terminated if the system runs out of memory. 8.5.3.2.1. Understanding how to reserve memory across quality of service tiers You can use the qos-reserved parameter to specify a percentage of memory to be reserved by a pod in a particular QoS level. This feature attempts to reserve requested resources to exclude pods from lower OoS classes from using resources requested by pods in higher QoS classes. OpenShift Container Platform uses the qos-reserved parameter as follows: A value of qos-reserved=memory=100% will prevent the Burstable and BestEffort QoS classes from consuming memory that was requested by a higher QoS class. 
This increases the risk of inducing OOM on BestEffort and Burstable workloads in favor of increasing memory resource guarantees for Guaranteed and Burstable workloads. A value of qos-reserved=memory=50% will allow the Burstable and BestEffort QoS classes to consume half of the memory requested by a higher QoS class. A value of qos-reserved=memory=0% will allow a Burstable and BestEffort QoS classes to consume up to the full node allocatable amount if available, but increases the risk that a Guaranteed workload will not have access to requested memory. This condition effectively disables this feature. 8.5.3.3. Understanding swap memory and QOS You can disable swap by default on your nodes to preserve quality of service (QOS) guarantees. Otherwise, physical resources on a node can oversubscribe, affecting the resource guarantees the Kubernetes scheduler makes during pod placement. For example, if two guaranteed pods have reached their memory limit, each container could start using swap memory. Eventually, if there is not enough swap space, processes in the pods can be terminated due to the system being oversubscribed. Failing to disable swap results in nodes not recognizing that they are experiencing MemoryPressure , resulting in pods not receiving the memory they made in their scheduling request. As a result, additional pods are placed on the node to further increase memory pressure, ultimately increasing your risk of experiencing a system out of memory (OOM) event. Important If swap is enabled, any out-of-resource handling eviction thresholds for available memory will not work as expected. Take advantage of out-of-resource handling to allow pods to be evicted from a node when it is under memory pressure, and rescheduled on an alternative node that has no such pressure. 8.5.3.4. Understanding nodes overcommitment In an overcommitted environment, it is important to properly configure your node to provide best system behavior. When the node starts, it ensures that the kernel tunable flags for memory management are set properly. The kernel should never fail memory allocations unless it runs out of physical memory. To ensure this behavior, OpenShift Container Platform configures the kernel to always overcommit memory by setting the vm.overcommit_memory parameter to 1 , overriding the default operating system setting. OpenShift Container Platform also configures the kernel not to panic when it runs out of memory by setting the vm.panic_on_oom parameter to 0 . A setting of 0 instructs the kernel to call oom_killer in an Out of Memory (OOM) condition, which kills processes based on priority You can view the current setting by running the following commands on your nodes: USD sysctl -a |grep commit Example output #... vm.overcommit_memory = 0 #... USD sysctl -a |grep panic Example output #... vm.panic_on_oom = 0 #... Note The above flags should already be set on nodes, and no further action is required. You can also perform the following configurations for each node: Disable or enforce CPU limits using CPU CFS quotas Reserve resources for system processes Reserve memory across quality of service tiers 8.5.3.5. Disabling or enforcing CPU limits using CPU CFS quotas Nodes by default enforce specified CPU limits using the Completely Fair Scheduler (CFS) quota support in the Linux kernel. If you disable CPU limit enforcement, it is important to understand the impact on your node: If a container has a CPU request, the request continues to be enforced by CFS shares in the Linux kernel. 
If a container does not have a CPU request, but does have a CPU limit, the CPU request defaults to the specified CPU limit, and is enforced by CFS shares in the Linux kernel. If a container has both a CPU request and limit, the CPU request is enforced by CFS shares in the Linux kernel, and the CPU limit has no impact on the node. Prerequisites Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure by entering the following command: USD oc edit machineconfigpool <name> For example: USD oc edit machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: "2022-11-16T15:34:25Z" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: "" 1 name: worker 1 The label appears under Labels. Tip If the label is not present, add a key/value pair such as: USD oc label machineconfigpool worker custom-kubelet=small-pods Procedure Create a custom resource (CR) for your configuration change. Sample configuration for a disabling CPU limits apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: disable-cpu-units 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 2 kubeletConfig: cpuCfsQuota: false 3 1 Assign a name to CR. 2 Specify the label from the machine config pool. 3 Set the cpuCfsQuota parameter to false . Run the following command to create the CR: USD oc create -f <file_name>.yaml 8.5.3.6. Reserving resources for system processes To provide more reliable scheduling and minimize node resource overcommitment, each node can reserve a portion of its resources for use by system daemons that are required to run on your node for your cluster to function. In particular, it is recommended that you reserve resources for incompressible resources such as memory. Procedure To explicitly reserve resources for non-pod processes, allocate node resources by specifying resources available for scheduling. For more details, see Allocating Resources for Nodes. 8.5.3.7. Disabling overcommitment for a node When enabled, overcommitment can be disabled on each node. Procedure To disable overcommitment in a node run the following command on that node: USD sysctl -w vm.overcommit_memory=0 8.5.4. Project-level limits To help control overcommit, you can set per-project resource limit ranges, specifying memory and CPU limits and defaults for a project that overcommit cannot exceed. For information on project-level resource limits, see Additional resources. Alternatively, you can disable overcommitment for specific projects. 8.5.4.1. Disabling overcommitment for a project When enabled, overcommitment can be disabled per-project. For example, you can allow infrastructure components to be configured independently of overcommitment. Procedure To disable overcommitment in a project: Edit the namespace object to add the following annotation: apiVersion: v1 kind: Namespace metadata: annotations: quota.openshift.io/cluster-resource-override-enabled: "false" 1 # ... 1 Setting this annotation to false disables overcommit for this namespace. 8.5.5. Additional resources Setting deployment resources . Allocating resources for nodes . 8.6. Enabling Linux control group version 2 (cgroup v2) You can enable Linux control group version 2 (cgroup v2) in your cluster by editing the node.config object. 
Enabling cgroup v2 in OpenShift Container Platform disables all cgroups version 1 controllers and hierarchies in your cluster. cgroup v1 is enabled by default. cgroup v2 is the current version of the Linux cgroup API. cgroup v2 offers several improvements over cgroup v1, including a unified hierarchy, safer sub-tree delegation, new features such as Pressure Stall Information , and enhanced resource management and isolation. However, cgroup v2 has different CPU, memory, and I/O management characteristics than cgroup v1. Therefore, some workloads might experience slight differences in memory or CPU usage on clusters that run cgroup v2. Note If you run third-party monitoring and security agents that depend on the cgroup file system, update the agents to a version that supports cgroup v2. If you have configured cgroup v2 and run cAdvisor as a stand-alone daemon set for monitoring pods and containers, update cAdvisor to v0.43.0 or later. If you deploy Java applications, use versions that fully support cgroup v2, such as the following packages: OpenJDK / HotSpot: jdk8u372, 11.0.16, 15 and later IBM Semeru Runtimes: jdk8u345-b01, 11.0.16.0, 17.0.4.0, 18.0.2.0 and later IBM SDK Java Technology Edition Version (IBM Java): 8.0.7.15 and later Important OpenShift Container Platform cgroups version 2 support is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 8.6.1. Configuring Linux cgroup v2 You enable cgroup v2 by editing the node.config object. Note Currently, disabling CPU load balancing is not supported by cgroup v2. As a result, you might not get the desired behavior from performance profiles if you have cgroup v2 enabled. Enabling cgroup v2 is not recommended if you are using performance profiles. Prerequisites You have a running OpenShift Container Platform cluster that uses version 4.12 or later. You are logged in to the cluster as a user with administrative privileges. You have enabled the TechPreviewNoUpgrade feature set by using the feature gates. Procedure Enable cgroup v2 on nodes: Edit the node.config object: USD oc edit nodes.config/cluster Add spec.cgroupMode: "v2" : Example node.config object apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: "true" include.release.openshift.io/self-managed-high-availability: "true" include.release.openshift.io/single-node-developer: "true" release.openshift.io/create-only: "true" creationTimestamp: "2022-07-08T16:02:51Z" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: "1865" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: cgroupMode: "v2" 1 ... 1 Enables cgroup v2. 
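If you prefer a non-interactive change, the same field can be set with a single patch command. This is a sketch only, assuming the default cluster object name used above:

$ oc patch nodes.config/cluster --type merge --patch '{"spec":{"cgroupMode":"v2"}}'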
Verification Check the machine configs to see that the new machine configs were added: USD oc get mc Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 97-master-generated-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 3m 1 99-worker-generated-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 3m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m worker-enable-cgroups-v2 3.2.0 10s 1 New machine configs are created, as expected. Check that the new kernelArguments were added to the new machine configs: USD oc describe mc <name> Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 05-worker-kernelarg-selinuxpermissive spec: kernelArguments: - systemd_unified_cgroup_hierarchy=1 1 - cgroup_no_v1="all" 2 - psi=1 3 1 Enables cgroup v2 in systemd. 2 Disables cgroups v1. 3 Enables the Linux Pressure Stall Information (PSI) feature. Check the nodes to see that scheduling on the nodes is disabled. This indicates that the change is being applied: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ci-ln-fm1qnwt-72292-99kt6-master-0 Ready master 58m v1.25.0 ci-ln-fm1qnwt-72292-99kt6-master-1 Ready master 58m v1.25.0 ci-ln-fm1qnwt-72292-99kt6-master-2 Ready master 58m v1.25.0 ci-ln-fm1qnwt-72292-99kt6-worker-a-h5gt4 Ready,SchedulingDisabled worker 48m v1.25.0 ci-ln-fm1qnwt-72292-99kt6-worker-b-7vtmd Ready worker 48m v1.25.0 ci-ln-fm1qnwt-72292-99kt6-worker-c-rhzkv Ready worker 48m v1.25.0 After a node returns to the Ready state, start a debug session for that node: USD oc debug node/<node_name> Set /host as the root directory within the debug shell: sh-4.4# chroot /host Check that the sys/fs/cgroup/cgroup2fs file is present on your nodes. This file is created by cgroup v2: USD stat -c %T -f /sys/fs/cgroup Example output cgroup2fs Additional resources Enabling OpenShift Container Platform features using FeatureGates OpenShift Container Platform installation overview 8.7. Enabling features using feature gates As an administrator, you can use feature gates to enable features that are not part of the default set of features. 8.7.1. Understanding feature gates You can use the FeatureGate custom resource (CR) to enable specific feature sets in your cluster. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. You can activate the following feature set by using the FeatureGate CR: TechPreviewNoUpgrade . This feature set is a subset of the current Technology Preview features. This feature set allows you to enable these Technology Preview features on test clusters, where you can fully test them, while leaving the features disabled on production clusters. 
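Before enabling a feature set, you might want to confirm that none is already configured. A sketch that reads the cluster FeatureGate CR follows; an empty result means no feature set is currently set:

$ oc get featuregate cluster -o jsonpath='{.spec.featureSet}'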
Warning Enabling the TechPreviewNoUpgrade feature set on your cluster cannot be undone and prevents minor version updates. You should not enable this feature set on production clusters. The following Technology Preview features are enabled by this feature set: CSI automatic migration. Enables automatic migration for supported in-tree volume plugins to their equivalent Container Storage Interface (CSI) drivers. Supported for: Azure File ( CSIMigrationAzureFile ) VMware vSphere ( CSIMigrationvSphere ) Shared Resources CSI Driver and Build CSI Volumes in OpenShift Builds. Enables the Container Storage Interface (CSI). ( CSIDriverSharedResource ) CSI volumes. Enables CSI volume support for the OpenShift Container Platform build system. ( BuildCSIVolumes ) Swap memory on nodes. Enables swap memory use for OpenShift Container Platform workloads on a per-node basis. ( NodeSwap ) cgroups v2. Enables cgroup v2, the version of the Linux cgroup API. ( CGroupsV2 ) crun. Enables the crun container runtime. ( Crun ) Insights Operator. Enables the Insights Operator, which gathers OpenShift Container Platform configuration data and sends it to Red Hat. ( InsightsConfigAPI ) External cloud providers. Enables support for external cloud providers for clusters on vSphere, AWS, Azure, and GCP. Support for OpenStack is GA. ( ExternalCloudProvider ) Pod topology spread constraints. Enables the matchLabelKeys parameter for pod topology constraints. The parameter is list of pod label keys to select the pods over which spreading will be calculated. ( MatchLabelKeysInPodTopologySpread ) Pod security admission enforcement. Enables restricted enforcement for pod security admission. Instead of only logging a warning, pods are rejected if they violate pod security standards. ( OpenShiftPodSecurityAdmission ) Note Pod security admission restricted enforcement is only activated if you enable the TechPreviewNoUpgrade feature set after your OpenShift Container Platform cluster is installed. It is not activated if you enable the TechPreviewNoUpgrade feature set during cluster installation. For more information about the features activated by the TechPreviewNoUpgrade feature gate, see the following topics: CSI inline ephemeral volumes CSI automatic migration Using Container Storage Interface (CSI) Source-to-image (S2I) build volumes and Docker build volumes Swap memory on nodes Managing machines with the Cluster API Enabling Linux control group version 2 (cgroup v2) About the container engine and container runtime Using Insights Operator Controlling pod placement by using pod topology spread constraints Pod Security Admission in the Kubernetes documentation and Understanding and managing pod security admission 8.7.2. Enabling feature sets at installation You can enable feature sets for all nodes in the cluster by editing the install-config.yaml file before you deploy the cluster. Prerequisites You have an install-config.yaml file. Procedure Use the featureSet parameter to specify the name of the feature set you want to enable, such as TechPreviewNoUpgrade : Warning Enabling the TechPreviewNoUpgrade feature set on your cluster cannot be undone and prevents minor version updates. You should not enable this feature set on production clusters. 
Sample install-config.yaml file with an enabled feature set compute: - hyperthreading: Enabled name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 metadataService: authentication: Optional type: c5.4xlarge zones: - us-west-2c replicas: 3 featureSet: TechPreviewNoUpgrade Save the file and reference it when using the installation program to deploy the cluster. Verification You can verify that the feature gates are enabled by looking at the kubelet.conf file on a node after the nodes return to the ready state. From the Administrator perspective in the web console, navigate to Compute Nodes . Select a node. In the Node details page, click Terminal . In the terminal window, change your root directory to /host : sh-4.2# chroot /host View the kubelet.conf file: sh-4.2# cat /etc/kubernetes/kubelet.conf Sample output # ... featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false # ... The features that are listed as true are enabled on your cluster. Note The features listed vary depending upon the OpenShift Container Platform version. 8.7.3. Enabling feature sets using the web console You can use the OpenShift Container Platform web console to enable feature sets for all of the nodes in a cluster by editing the FeatureGate custom resource (CR). Procedure To enable feature sets: In the OpenShift Container Platform web console, switch to the Administration Custom Resource Definitions page. On the Custom Resource Definitions page, click FeatureGate . On the Custom Resource Definition Details page, click the Instances tab. Click the cluster feature gate, then click the YAML tab. Edit the cluster instance to add specific feature sets: Warning Enabling the TechPreviewNoUpgrade feature set on your cluster cannot be undone and prevents minor version updates. You should not enable this feature set on production clusters. Sample Feature Gate custom resource apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster 1 # ... spec: featureSet: TechPreviewNoUpgrade 2 1 The name of the FeatureGate CR must be cluster . 2 Add the feature set that you want to enable: TechPreviewNoUpgrade enables specific Technology Preview features. After you save the changes, new machine configs are created, the machine config pools are updated, and scheduling on each node is disabled while the change is being applied. Verification You can verify that the feature gates are enabled by looking at the kubelet.conf file on a node after the nodes return to the ready state. From the Administrator perspective in the web console, navigate to Compute Nodes . Select a node. In the Node details page, click Terminal . In the terminal window, change your root directory to /host : sh-4.2# chroot /host View the kubelet.conf file: sh-4.2# cat /etc/kubernetes/kubelet.conf Sample output # ... featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false # ... The features that are listed as true are enabled on your cluster. Note The features listed vary depending upon the OpenShift Container Platform version. 8.7.4. Enabling feature sets using the CLI You can use the OpenShift CLI ( oc ) to enable feature sets for all of the nodes in a cluster by editing the FeatureGate custom resource (CR). Prerequisites You have installed the OpenShift CLI ( oc ). Procedure To enable feature sets: Edit the FeatureGate CR named cluster : USD oc edit featuregate cluster Warning Enabling the TechPreviewNoUpgrade feature set on your cluster cannot be undone and prevents minor version updates. 
You should not enable this feature set on production clusters. Sample FeatureGate custom resource apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster 1 # ... spec: featureSet: TechPreviewNoUpgrade 2 1 The name of the FeatureGate CR must be cluster . 2 Add the feature set that you want to enable: TechPreviewNoUpgrade enables specific Technology Preview features. After you save the changes, new machine configs are created, the machine config pools are updated, and scheduling on each node is disabled while the change is being applied. Verification You can verify that the feature gates are enabled by looking at the kubelet.conf file on a node after the nodes return to the ready state. From the Administrator perspective in the web console, navigate to Compute Nodes . Select a node. In the Node details page, click Terminal . In the terminal window, change your root directory to /host : sh-4.2# chroot /host View the kubelet.conf file: sh-4.2# cat /etc/kubernetes/kubelet.conf Sample output # ... featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false # ... The features that are listed as true are enabled on your cluster. Note The features listed vary depending upon the OpenShift Container Platform version. 8.8. Improving cluster stability in high latency environments using worker latency profiles If the cluster administrator has performed latency tests for platform verification, they can discover the need to adjust the operation of the cluster to ensure stability in cases of high latency. The cluster administrator need change only one parameter, recorded in a file, which controls four parameters affecting how supervisory processes read status and interpret the health of the cluster. Changing only the one parameter provides cluster tuning in an easy, supportable manner. The Kubelet process provides the starting point for monitoring cluster health. The Kubelet sets status values for all nodes in the OpenShift Container Platform cluster. The Kubernetes Controller Manager ( kube controller ) reads the status values every 10 seconds, by default. If the kube controller cannot read a node status value, it loses contact with that node after a configured period. The default behavior is: The node controller on the control plane updates the node health to Unhealthy and marks the node Ready condition`Unknown`. In response, the scheduler stops scheduling pods to that node. The Node Lifecycle Controller adds a node.kubernetes.io/unreachable taint with a NoExecute effect to the node and schedules any pods on the node for eviction after five minutes, by default. This behavior can cause problems if your network is prone to latency issues, especially if you have nodes at the network edge. In some cases, the Kubernetes Controller Manager might not receive an update from a healthy node due to network latency. The Kubelet evicts pods from the node even though the node is healthy. To avoid this problem, you can use worker latency profiles to adjust the frequency that the Kubelet and the Kubernetes Controller Manager wait for status updates before taking action. These adjustments help to ensure that your cluster runs properly if network latency between the control plane and the worker nodes is not optimal. These worker latency profiles contain three sets of parameters that are pre-defined with carefully tuned values to control the reaction of the cluster to increased latency. No need to experimentally find the best values manually. 
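To see whether any nodes currently carry the not-ready or unreachable taints described above, you can list the taints on each node. This is a sketch using standard output formatting; nothing in it is specific to worker latency profiles:

$ oc get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints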
You can configure worker latency profiles when installing a cluster or at any time you notice increased latency in your cluster network. 8.8.1. Understanding worker latency profiles Worker latency profiles are sets of four carefully tuned parameters: node-status-update-frequency , node-monitor-grace-period , default-not-ready-toleration-seconds and default-unreachable-toleration-seconds . These parameters use pre-tested values that let you control how the cluster reacts to latency issues without having to determine the best values manually. Important Setting these parameters manually is not supported. Incorrect parameter settings adversely affect cluster stability. All worker latency profiles configure the following parameters: node-status-update-frequency Specifies how often the kubelet posts node status to the API server. node-monitor-grace-period Specifies the amount of time in seconds that the Kubernetes Controller Manager waits for an update from a kubelet before marking the node unhealthy and adding the node.kubernetes.io/not-ready or node.kubernetes.io/unreachable taint to the node. default-not-ready-toleration-seconds Specifies the amount of time in seconds after marking a node unhealthy that the Kube API Server Operator waits before evicting pods from that node. default-unreachable-toleration-seconds Specifies the amount of time in seconds after marking a node unreachable that the Kube API Server Operator waits before evicting pods from that node. The following Operators monitor the changes to the worker latency profiles and respond accordingly: The Machine Config Operator (MCO) updates the node-status-update-frequency parameter on the worker nodes. The Kubernetes Controller Manager updates the node-monitor-grace-period parameter on the control plane nodes. The Kubernetes API Server Operator updates the default-not-ready-toleration-seconds and default-unreachable-toleration-seconds parameters on the control plane nodes. While the default configuration works in most cases, OpenShift Container Platform offers two other worker latency profiles for situations where the network is experiencing higher latency than usual. The three worker latency profiles are described in the following sections: Default worker latency profile With the Default profile, each kubelet updates its status every 10 seconds ( node-status-update-frequency ). The Kubernetes Controller Manager checks the reported kubelet statuses every 5 seconds and waits 40 seconds ( node-monitor-grace-period ) for a status update before considering a kubelet unhealthy. If no status is made available to the Kubernetes Controller Manager, it marks the node with the node.kubernetes.io/not-ready or node.kubernetes.io/unreachable taint and evicts the pods on that node. If a pod on that node tolerates the NoExecute taint, the pod remains bound for the period set in its tolerationSeconds value before it is evicted. If the pod has no such toleration, it is evicted after 300 seconds, the default-not-ready-toleration-seconds and default-unreachable-toleration-seconds defaults applied by the Kube API Server.
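A pod can override those defaults by declaring its own tolerations. The following sketch shortens the eviction window to 120 seconds; the pod name, image, and the 120-second value are illustrative assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: toleration-example        # hypothetical name
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi8/ubi-minimal   # any image; chosen for illustration
    command: ["sleep", "infinity"]
  tolerations:
  - key: node.kubernetes.io/not-ready
    operator: Exists
    effect: NoExecute
    tolerationSeconds: 120        # evict 120 seconds after the taint is applied
  - key: node.kubernetes.io/unreachable
    operator: Exists
    effect: NoExecute
    tolerationSeconds: 120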
Profile Component Parameter Value Default kubelet node-status-update-frequency 10s Kubernetes Controller Manager node-monitor-grace-period 40s Kubernetes API Server Operator default-not-ready-toleration-seconds 300s Kubernetes API Server Operator default-unreachable-toleration-seconds 300s Medium worker latency profile Use the MediumUpdateAverageReaction profile if the network latency is slightly higher than usual. The MediumUpdateAverageReaction profile reduces the frequency of kubelet updates to 20 seconds and changes the period that the Kubernetes Controller Manager waits for those updates to 2 minutes. The pod eviction period for a pod on that node is reduced to 60 seconds. If the pod has the tolerationSeconds parameter, the eviction waits for the period specified by that parameter. The Kubernetes Controller Manager waits for 2 minutes to consider a node unhealthy. In another minute, the eviction process starts. Profile Component Parameter Value MediumUpdateAverageReaction kubelet node-status-update-frequency 20s Kubernetes Controller Manager node-monitor-grace-period 2m Kubernetes API Server Operator default-not-ready-toleration-seconds 60s Kubernetes API Server Operator default-unreachable-toleration-seconds 60s Low worker latency profile Use the LowUpdateSlowReaction profile if the network latency is extremely high. The LowUpdateSlowReaction profile reduces the frequency of kubelet updates to 1 minute and changes the period that the Kubernetes Controller Manager waits for those updates to 5 minutes. The pod eviction period for a pod on that node is reduced to 60 seconds. If the pod has the tolerationSeconds parameter, the eviction waits for the period specified by that parameter. The Kubernetes Controller Manager waits for 5 minutes to consider a node unhealthy. In another minute, the eviction process starts. Profile Component Parameter Value LowUpdateSlowReaction kubelet node-status-update-frequency 1m Kubernetes Controller Manager node-monitor-grace-period 5m Kubernetes API Server Operator default-not-ready-toleration-seconds 60s Kubernetes API Server Operator default-unreachable-toleration-seconds 60s 8.8.2. Using and changing worker latency profiles To change a worker latency profile to deal with network latency, edit the node.config object to add the name of the profile. You can change the profile at any time as latency increases or decreases. You must move one worker latency profile at a time. For example, you cannot move directly from the Default profile to the LowUpdateSlowReaction worker latency profile. You must move from the Default worker latency profile to the MediumUpdateAverageReaction profile first, then to LowUpdateSlowReaction . Similarly, when returning to the Default profile, you must move from the low profile to the medium profile first, then to Default . Note You can also configure worker latency profiles upon installing an OpenShift Container Platform cluster.
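Because you must step through the profiles one at a time, it can help to confirm which profile, if any, is currently recorded before you begin the procedure below. This is a sketch; an empty result means the Default profile is in effect:

$ oc get nodes.config/cluster -o jsonpath='{.spec.workerLatencyProfile}'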
Procedure To move from the default worker latency profile: Move to the medium worker latency profile: Edit the node.config object: USD oc edit nodes.config/cluster Add spec.workerLatencyProfile: MediumUpdateAverageReaction : Example node.config object apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: "true" include.release.openshift.io/self-managed-high-availability: "true" include.release.openshift.io/single-node-developer: "true" release.openshift.io/create-only: "true" creationTimestamp: "2022-07-08T16:02:51Z" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: "1865" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: workerLatencyProfile: MediumUpdateAverageReaction 1 # ... 1 Specifies the medium worker latency policy. Scheduling on each worker node is disabled as the change is being applied. Optional: Move to the low worker latency profile: Edit the node.config object: USD oc edit nodes.config/cluster Change the spec.workerLatencyProfile value to LowUpdateSlowReaction : Example node.config object apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: "true" include.release.openshift.io/self-managed-high-availability: "true" include.release.openshift.io/single-node-developer: "true" release.openshift.io/create-only: "true" creationTimestamp: "2022-07-08T16:02:51Z" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: "1865" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: workerLatencyProfile: LowUpdateSlowReaction 1 # ... 1 Specifies use of the low worker latency policy. Scheduling on each worker node is disabled as the change is being applied. Verification When all nodes return to the Ready condition, you can use the following command to look in the Kubernetes Controller Manager to ensure it was applied: USD oc get KubeControllerManager -o yaml | grep -i workerlatency -A 5 -B 5 Example output # ... - lastTransitionTime: "2022-07-11T19:47:10Z" reason: ProfileUpdated status: "False" type: WorkerLatencyProfileProgressing - lastTransitionTime: "2022-07-11T19:47:10Z" 1 message: all static pod revision(s) have updated latency profile reason: ProfileUpdated status: "True" type: WorkerLatencyProfileComplete - lastTransitionTime: "2022-07-11T19:20:11Z" reason: AsExpected status: "False" type: WorkerLatencyProfileDegraded - lastTransitionTime: "2022-07-11T19:20:36Z" status: "False" # ... 1 Specifies that the profile is applied and active. To change the medium profile to default or change the default to medium, edit the node.config object and set the spec.workerLatencyProfile parameter to the appropriate value. | [
"oc get events [-n <project>] 1",
"oc get events -n openshift-config",
"LAST SEEN TYPE REASON OBJECT MESSAGE 97m Normal Scheduled pod/dapi-env-test-pod Successfully assigned openshift-config/dapi-env-test-pod to ip-10-0-171-202.ec2.internal 97m Normal Pulling pod/dapi-env-test-pod pulling image \"gcr.io/google_containers/busybox\" 97m Normal Pulled pod/dapi-env-test-pod Successfully pulled image \"gcr.io/google_containers/busybox\" 97m Normal Created pod/dapi-env-test-pod Created container 9m5s Warning FailedCreatePodSandBox pod/dapi-volume-test-pod Failed create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dapi-volume-test-pod_openshift-config_6bc60c1f-452e-11e9-9140-0eec59c23068_0(748c7a40db3d08c07fb4f9eba774bd5effe5f0d5090a242432a73eee66ba9e22): Multus: Err adding pod to network \"openshift-sdn\": cannot set \"openshift-sdn\" ifname to \"eth0\": no netns: failed to Statfs \"/proc/33366/ns/net\": no such file or directory 8m31s Normal Scheduled pod/dapi-volume-test-pod Successfully assigned openshift-config/dapi-volume-test-pod to ip-10-0-171-202.ec2.internal",
"apiVersion: v1 kind: Pod metadata: name: small-pod labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v4 imagePullPolicy: Always resources: limits: cpu: 150m memory: 100Mi requests: cpu: 150m memory: 100Mi",
"oc create -f <file_name>.yaml",
"oc create -f pod-spec.yaml",
"podman login registry.redhat.io",
"podman pull registry.redhat.io/openshift4/ose-cluster-capacity",
"podman run -v USDHOME/.kube:/kube:Z -v USD(pwd):/cc:Z ose-cluster-capacity /bin/cluster-capacity --kubeconfig /kube/config --<pod_spec>.yaml /cc/<pod_spec>.yaml --verbose",
"small-pod pod requirements: - CPU: 150m - Memory: 100Mi The cluster can schedule 88 instance(s) of the pod small-pod. Termination reason: Unschedulable: 0/5 nodes are available: 2 Insufficient cpu, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate. Pod distribution among nodes: small-pod - 192.168.124.214: 45 instance(s) - 192.168.124.120: 43 instance(s)",
"kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: cluster-capacity-role rules: - apiGroups: [\"\"] resources: [\"pods\", \"nodes\", \"persistentvolumeclaims\", \"persistentvolumes\", \"services\", \"replicationcontrollers\"] verbs: [\"get\", \"watch\", \"list\"] - apiGroups: [\"apps\"] resources: [\"replicasets\", \"statefulsets\"] verbs: [\"get\", \"watch\", \"list\"] - apiGroups: [\"policy\"] resources: [\"poddisruptionbudgets\"] verbs: [\"get\", \"watch\", \"list\"] - apiGroups: [\"storage.k8s.io\"] resources: [\"storageclasses\"] verbs: [\"get\", \"watch\", \"list\"]",
"oc create -f <file_name>.yaml",
"oc create sa cluster-capacity-sa",
"oc create sa cluster-capacity-sa -n default",
"oc adm policy add-cluster-role-to-user cluster-capacity-role system:serviceaccount:<namespace>:cluster-capacity-sa",
"apiVersion: v1 kind: Pod metadata: name: small-pod labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v4 imagePullPolicy: Always resources: limits: cpu: 150m memory: 100Mi requests: cpu: 150m memory: 100Mi",
"oc create -f <file_name>.yaml",
"oc create -f pod.yaml",
"oc create configmap cluster-capacity-configmap --from-file=pod.yaml=pod.yaml",
"apiVersion: batch/v1 kind: Job metadata: name: cluster-capacity-job spec: parallelism: 1 completions: 1 template: metadata: name: cluster-capacity-pod spec: containers: - name: cluster-capacity image: openshift/origin-cluster-capacity imagePullPolicy: \"Always\" volumeMounts: - mountPath: /test-pod name: test-volume env: - name: CC_INCLUSTER 1 value: \"true\" command: - \"/bin/sh\" - \"-ec\" - | /bin/cluster-capacity --podspec=/test-pod/pod.yaml --verbose restartPolicy: \"Never\" serviceAccountName: cluster-capacity-sa volumes: - name: test-volume configMap: name: cluster-capacity-configmap",
"oc create -f cluster-capacity-job.yaml",
"oc logs jobs/cluster-capacity-job",
"small-pod pod requirements: - CPU: 150m - Memory: 100Mi The cluster can schedule 52 instance(s) of the pod small-pod. Termination reason: Unschedulable: No nodes are available that match all of the following predicates:: Insufficient cpu (2). Pod distribution among nodes: small-pod - 192.168.124.214: 26 instance(s) - 192.168.124.120: 26 instance(s)",
"apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" spec: limits: - type: \"Container\" max: cpu: \"2\" memory: \"1Gi\" min: cpu: \"100m\" memory: \"4Mi\" default: cpu: \"300m\" memory: \"200Mi\" defaultRequest: cpu: \"200m\" memory: \"100Mi\" maxLimitRequestRatio: cpu: \"10\"",
"apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: \"Container\" max: cpu: \"2\" 2 memory: \"1Gi\" 3 min: cpu: \"100m\" 4 memory: \"4Mi\" 5 default: cpu: \"300m\" 6 memory: \"200Mi\" 7 defaultRequest: cpu: \"200m\" 8 memory: \"100Mi\" 9 maxLimitRequestRatio: cpu: \"10\" 10",
"apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: \"Pod\" max: cpu: \"2\" 2 memory: \"1Gi\" 3 min: cpu: \"200m\" 4 memory: \"6Mi\" 5 maxLimitRequestRatio: cpu: \"10\" 6",
"apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: openshift.io/Image max: storage: 1Gi 2",
"apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: openshift.io/ImageStream max: openshift.io/image-tags: 20 2 openshift.io/images: 30 3",
"apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: \"PersistentVolumeClaim\" min: storage: \"2Gi\" 2 max: storage: \"50Gi\" 3",
"apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: \"Pod\" 2 max: cpu: \"2\" memory: \"1Gi\" min: cpu: \"200m\" memory: \"6Mi\" - type: \"Container\" 3 max: cpu: \"2\" memory: \"1Gi\" min: cpu: \"100m\" memory: \"4Mi\" default: 4 cpu: \"300m\" memory: \"200Mi\" defaultRequest: 5 cpu: \"200m\" memory: \"100Mi\" maxLimitRequestRatio: 6 cpu: \"10\" - type: openshift.io/Image 7 max: storage: 1Gi - type: openshift.io/ImageStream 8 max: openshift.io/image-tags: 20 openshift.io/images: 30 - type: \"PersistentVolumeClaim\" 9 min: storage: \"2Gi\" max: storage: \"50Gi\"",
"oc create -f <limit_range_file> -n <project> 1",
"oc get limits -n demoproject",
"NAME CREATED AT resource-limits 2020-07-15T17:14:23Z",
"oc describe limits resource-limits -n demoproject",
"Name: resource-limits Namespace: demoproject Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio ---- -------- --- --- --------------- ------------- ----------------------- Pod cpu 200m 2 - - - Pod memory 6Mi 1Gi - - - Container cpu 100m 2 200m 300m 10 Container memory 4Mi 1Gi 100Mi 200Mi - openshift.io/Image storage - 1Gi - - - openshift.io/ImageStream openshift.io/image - 12 - - - openshift.io/ImageStream openshift.io/image-tags - 10 - - - PersistentVolumeClaim storage - 50Gi - - -",
"oc delete limits <limit_name>",
"-XX:+UseParallelGC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90.",
"JAVA_TOOL_OPTIONS=\"-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Dsun.zip.disableMemoryMapping=true\"",
"apiVersion: v1 kind: Pod metadata: name: test spec: containers: - name: test image: fedora:latest command: - sleep - \"3600\" env: - name: MEMORY_REQUEST 1 valueFrom: resourceFieldRef: containerName: test resource: requests.memory - name: MEMORY_LIMIT 2 valueFrom: resourceFieldRef: containerName: test resource: limits.memory resources: requests: memory: 384Mi limits: memory: 512Mi",
"oc create -f <file-name>.yaml",
"oc rsh test",
"env | grep MEMORY | sort",
"MEMORY_LIMIT=536870912 MEMORY_REQUEST=402653184",
"oc rsh test",
"grep '^oom_kill ' /sys/fs/cgroup/memory/memory.oom_control",
"oom_kill 0",
"sed -e '' </dev/zero",
"Killed",
"echo USD?",
"137",
"grep '^oom_kill ' /sys/fs/cgroup/memory/memory.oom_control",
"oom_kill 1",
"oc get pod test",
"NAME READY STATUS RESTARTS AGE test 0/1 OOMKilled 0 1m",
"oc get pod test -o yaml",
"status: containerStatuses: - name: test ready: false restartCount: 0 state: terminated: exitCode: 137 reason: OOMKilled phase: Failed",
"oc get pod test -o yaml",
"status: containerStatuses: - name: test ready: true restartCount: 1 lastState: terminated: exitCode: 137 reason: OOMKilled state: running: phase: Running",
"oc get pod test",
"NAME READY STATUS RESTARTS AGE test 0/1 Evicted 0 1m",
"oc get pod test -o yaml",
"status: message: 'Pod The node was low on resource: [MemoryPressure].' phase: Failed reason: Evicted",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4",
"apiVersion: v1 kind: Namespace metadata: labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: \"true\"",
"apiVersion: v1 kind: Pod metadata: name: my-pod namespace: my-namespace spec: containers: - name: hello-openshift image: openshift/hello-openshift resources: limits: memory: \"512Mi\" cpu: \"2000m\"",
"apiVersion: v1 kind: Pod metadata: name: my-pod namespace: my-namespace spec: containers: - image: openshift/hello-openshift name: hello-openshift resources: limits: cpu: \"1\" 1 memory: 512Mi requests: cpu: 250m 2 memory: 256Mi",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operator.autoscaling.openshift.io/v1\",\"kind\":\"ClusterResourceOverride\",\"metadata\":{\"annotations\":{},\"name\":\"cluster\"},\"spec\":{\"podResourceOverride\":{\"spec\":{\"cpuRequestToLimitPercent\":25,\"limitCPUToMemoryPercent\":200,\"memoryRequestToLimitPercent\":50}}}} creationTimestamp: \"2019-12-18T22:35:02Z\" generation: 1 name: cluster resourceVersion: \"127622\" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: \"127621\" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3",
"apiVersion: v1 kind: Namespace metadata: name: clusterresourceoverride-operator",
"oc create -f <file-name>.yaml",
"oc create -f cro-namespace.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: clusterresourceoverride-operator namespace: clusterresourceoverride-operator spec: targetNamespaces: - clusterresourceoverride-operator",
"oc create -f <file-name>.yaml",
"oc create -f cro-og.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: clusterresourceoverride namespace: clusterresourceoverride-operator spec: channel: \"stable\" name: clusterresourceoverride source: redhat-operators sourceNamespace: openshift-marketplace",
"oc create -f <file-name>.yaml",
"oc create -f cro-sub.yaml",
"oc project clusterresourceoverride-operator",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4",
"oc create -f <file-name>.yaml",
"oc create -f cro-cr.yaml",
"oc get clusterresourceoverride cluster -n clusterresourceoverride-operator -o yaml",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operator.autoscaling.openshift.io/v1\",\"kind\":\"ClusterResourceOverride\",\"metadata\":{\"annotations\":{},\"name\":\"cluster\"},\"spec\":{\"podResourceOverride\":{\"spec\":{\"cpuRequestToLimitPercent\":25,\"limitCPUToMemoryPercent\":200,\"memoryRequestToLimitPercent\":50}}}} creationTimestamp: \"2019-12-18T22:35:02Z\" generation: 1 name: cluster resourceVersion: \"127622\" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: \"127621\" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 1 cpuRequestToLimitPercent: 25 2 limitCPUToMemoryPercent: 200 3",
"apiVersion: v1 kind: Namespace metadata: labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: \"true\" 1",
"sysctl -a |grep commit",
"# vm.overcommit_memory = 0 #",
"sysctl -a |grep panic",
"# vm.panic_on_oom = 0 #",
"oc edit machineconfigpool <name>",
"oc edit machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker",
"oc label machineconfigpool worker custom-kubelet=small-pods",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: disable-cpu-units 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: cpuCfsQuota: false 3",
"oc create -f <file_name>.yaml",
"sysctl -w vm.overcommit_memory=0",
"apiVersion: v1 kind: Namespace metadata: annotations: quota.openshift.io/cluster-resource-override-enabled: \"false\" 1",
"oc edit nodes.config/cluster",
"apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"true\" include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" release.openshift.io/create-only: \"true\" creationTimestamp: \"2022-07-08T16:02:51Z\" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: \"1865\" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: cgroupMode: \"v2\" 1",
"oc get mc",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 97-master-generated-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 3m 1 99-worker-generated-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 3m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m worker-enable-cgroups-v2 3.2.0 10s",
"oc describe mc <name>",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 05-worker-kernelarg-selinuxpermissive spec: kernelArguments: - systemd_unified_cgroup_hierarchy=1 1 - cgroup_no_v1=\"all\" 2 - psi=1 3",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ci-ln-fm1qnwt-72292-99kt6-master-0 Ready master 58m v1.25.0 ci-ln-fm1qnwt-72292-99kt6-master-1 Ready master 58m v1.25.0 ci-ln-fm1qnwt-72292-99kt6-master-2 Ready master 58m v1.25.0 ci-ln-fm1qnwt-72292-99kt6-worker-a-h5gt4 Ready,SchedulingDisabled worker 48m v1.25.0 ci-ln-fm1qnwt-72292-99kt6-worker-b-7vtmd Ready worker 48m v1.25.0 ci-ln-fm1qnwt-72292-99kt6-worker-c-rhzkv Ready worker 48m v1.25.0",
"oc debug node/<node_name>",
"sh-4.4# chroot /host",
"stat -c %T -f /sys/fs/cgroup",
"cgroup2fs",
"compute: - hyperthreading: Enabled name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 metadataService: authentication: Optional type: c5.4xlarge zones: - us-west-2c replicas: 3 featureSet: TechPreviewNoUpgrade",
"sh-4.2# chroot /host",
"sh-4.2# cat /etc/kubernetes/kubelet.conf",
"featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false",
"apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster 1 spec: featureSet: TechPreviewNoUpgrade 2",
"sh-4.2# chroot /host",
"sh-4.2# cat /etc/kubernetes/kubelet.conf",
"featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false",
"oc edit featuregate cluster",
"apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster 1 spec: featureSet: TechPreviewNoUpgrade 2",
"sh-4.2# chroot /host",
"sh-4.2# cat /etc/kubernetes/kubelet.conf",
"featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false",
"oc edit nodes.config/cluster",
"apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"true\" include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" release.openshift.io/create-only: \"true\" creationTimestamp: \"2022-07-08T16:02:51Z\" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: \"1865\" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: workerLatencyProfile: MediumUpdateAverageReaction 1",
"oc edit nodes.config/cluster",
"apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"true\" include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" release.openshift.io/create-only: \"true\" creationTimestamp: \"2022-07-08T16:02:51Z\" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: \"1865\" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: workerLatencyProfile: LowUpdateSlowReaction 1",
"oc get KubeControllerManager -o yaml | grep -i workerlatency -A 5 -B 5",
"- lastTransitionTime: \"2022-07-11T19:47:10Z\" reason: ProfileUpdated status: \"False\" type: WorkerLatencyProfileProgressing - lastTransitionTime: \"2022-07-11T19:47:10Z\" 1 message: all static pod revision(s) have updated latency profile reason: ProfileUpdated status: \"True\" type: WorkerLatencyProfileComplete - lastTransitionTime: \"2022-07-11T19:20:11Z\" reason: AsExpected status: \"False\" type: WorkerLatencyProfileDegraded - lastTransitionTime: \"2022-07-11T19:20:36Z\" status: \"False\""
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/nodes/working-with-clusters |
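The exit status 137 reported by the echo USD? output above, and repeated in the OOMKilled container status, is not arbitrary: shells report 128 plus the number of the terminating signal, and the kernel OOM killer terminates processes with SIGKILL (signal 9). A minimal sketch for decoding it, assuming a bash shell (the form of kill -l that accepts an exit status is a bash builtin feature):

kill -l 9      # prints KILL
kill -l 137    # also prints KILL; bash maps exit statuses above 128 back to the signal name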
Operators | Operators OpenShift Container Platform 4.17 Working with Operators in OpenShift Container Platform Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/operators/index |
Chapter 12. Verifying connectivity to an endpoint | Chapter 12. Verifying connectivity to an endpoint The Cluster Network Operator (CNO) runs a controller, the connectivity check controller, that performs a connection health check between resources within your cluster. By reviewing the results of the health checks, you can diagnose connection problems or eliminate network connectivity as the cause of an issue that you are investigating. 12.1. Connection health checks performed To verify that cluster resources are reachable, a TCP connection is made to each of the following cluster API services: Kubernetes API server service Kubernetes API server endpoints OpenShift API server service OpenShift API server endpoints Load balancers To verify that services and service endpoints are reachable on every node in the cluster, a TCP connection is made to each of the following targets: Health check target service Health check target endpoints 12.2. Implementation of connection health checks The connectivity check controller orchestrates connection verification checks in your cluster. The results for the connection tests are stored in PodNetworkConnectivityCheck objects in the openshift-network-diagnostics namespace. Connection tests are performed every minute in parallel. The Cluster Network Operator (CNO) deploys several resources to the cluster to send and receive connectivity health checks: Health check source This program deploys in a single pod replica set managed by a Deployment object. The program consumes PodNetworkConnectivityCheck objects and connects to the spec.targetEndpoint specified in each object. Health check target A pod deployed as part of a daemon set on every node in the cluster. The pod listens for inbound health checks. The presence of this pod on every node allows for the testing of connectivity to each node. 12.3. PodNetworkConnectivityCheck object fields The PodNetworkConnectivityCheck object fields are described in the following tables. Table 12.1. PodNetworkConnectivityCheck object fields Field Type Description metadata.name string The name of the object in the following format: <source>-to-<target> . The destination described by <target> includes one of the following strings: load-balancer-api-external load-balancer-api-internal kubernetes-apiserver-endpoint kubernetes-apiserver-service-cluster network-check-target openshift-apiserver-endpoint openshift-apiserver-service-cluster metadata.namespace string The namespace that the object is associated with. This value is always openshift-network-diagnostics . spec.sourcePod string The name of the pod where the connection check originates, such as network-check-source-596b4c6566-rgh92 . spec.targetEndpoint string The target of the connection check, such as api.devcluster.example.com:6443 . spec.tlsClientCert object Configuration for the TLS certificate to use. spec.tlsClientCert.name string The name of the TLS certificate used, if any. The default value is an empty string. status object An object representing the condition of the connection test and logs of recent connection successes and failures. status.conditions array The latest status of the connection check and any statuses. status.failures array Connection test logs from unsuccessful attempts. status.outages array Connection test logs covering the time periods of any outages. status.successes array Connection test logs from successful attempts. The following table describes the fields for objects in the status.conditions array: Table 12.2.
status.conditions Field Type Description lastTransitionTime string The time that the condition of the connection transitioned from one status to another. message string The details about the last transition in a human readable format. reason string The last status of the transition in a machine readable format. status string The status of the condition. type string The type of the condition. The following table describes the fields for objects in the status.outages array: Table 12.3. status.outages Field Type Description end string The timestamp from when the connection failure is resolved. endLogs array Connection log entries, including the log entry related to the successful end of the outage. message string A summary of outage details in a human readable format. start string The timestamp from when the connection failure is first detected. startLogs array Connection log entries, including the original failure. Connection log fields The fields for a connection log entry are described in the following table. The object is used in the following fields: status.failures[] status.successes[] status.outages[].startLogs[] status.outages[].endLogs[] Table 12.4. Connection log object Field Type Description latency string Records the duration of the action. message string Provides the status in a human readable format. reason string Provides the reason for the status in a machine readable format. The value is one of TCPConnect , TCPConnectError , DNSResolve , DNSError . success boolean Indicates if the log entry is a success or failure. time string The start time of the connection check. 12.4. Verifying network connectivity for an endpoint As a cluster administrator, you can verify the connectivity of an endpoint, such as an API server, load balancer, service, or pod. Prerequisites Install the OpenShift CLI ( oc ). Access to the cluster as a user with the cluster-admin role.
Procedure To list the current PodNetworkConnectivityCheck objects, enter the following command: USD oc get podnetworkconnectivitycheck -n openshift-network-diagnostics Example output NAME AGE network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-1 73m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-2 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-service-cluster 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-default-service-cluster 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-load-balancer-api-external 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-load-balancer-api-internal 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-master-1 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-master-2 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-worker-c-n8mbf 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-worker-d-4hnrz 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-service-cluster 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-1 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-2 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-service-cluster 75m View the connection test logs: From the output of the command, identify the endpoint that you want to review the connectivity logs for. To view the object, enter the following command: USD oc get podnetworkconnectivitycheck <name> \ -n openshift-network-diagnostics -o yaml where <name> specifies the name of the PodNetworkConnectivityCheck object. Example output apiVersion: controlplane.operator.openshift.io/v1alpha1 kind: PodNetworkConnectivityCheck metadata: name: network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 namespace: openshift-network-diagnostics ... 
spec: sourcePod: network-check-source-7c88f6d9f-hmg2f targetEndpoint: 10.0.0.4:6443 tlsClientCert: name: "" status: conditions: - lastTransitionTime: "2021-01-13T20:11:34Z" message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnectSuccess status: "True" type: Reachable failures: - latency: 2.241775ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: "2021-01-13T20:10:34Z" - latency: 2.582129ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: "2021-01-13T20:09:34Z" - latency: 3.483578ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: "2021-01-13T20:08:34Z" outages: - end: "2021-01-13T20:11:34Z" endLogs: - latency: 2.032018ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T20:11:34Z" - latency: 2.241775ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: "2021-01-13T20:10:34Z" - latency: 2.582129ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: "2021-01-13T20:09:34Z" - latency: 3.483578ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: "2021-01-13T20:08:34Z" message: Connectivity restored after 2m59.999789186s start: "2021-01-13T20:08:34Z" startLogs: - latency: 3.483578ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: "2021-01-13T20:08:34Z" successes: - latency: 2.845865ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:14:34Z" - latency: 2.926345ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:13:34Z" - latency: 2.895796ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:12:34Z" - latency: 2.696844ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:11:34Z" - latency: 1.502064ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp 
connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:10:34Z" - latency: 1.388857ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:09:34Z" - latency: 1.906383ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:08:34Z" - latency: 2.089073ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:07:34Z" - latency: 2.156994ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:06:34Z" - latency: 1.777043ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:05:34Z" | [
"oc get podnetworkconnectivitycheck -n openshift-network-diagnostics",
"NAME AGE network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-1 73m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-2 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-service-cluster 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-default-service-cluster 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-load-balancer-api-external 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-load-balancer-api-internal 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-master-1 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-master-2 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-worker-c-n8mbf 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-worker-d-4hnrz 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-service-cluster 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-1 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-2 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-service-cluster 75m",
"oc get podnetworkconnectivitycheck <name> -n openshift-network-diagnostics -o yaml",
"apiVersion: controlplane.operator.openshift.io/v1alpha1 kind: PodNetworkConnectivityCheck metadata: name: network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 namespace: openshift-network-diagnostics spec: sourcePod: network-check-source-7c88f6d9f-hmg2f targetEndpoint: 10.0.0.4:6443 tlsClientCert: name: \"\" status: conditions: - lastTransitionTime: \"2021-01-13T20:11:34Z\" message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnectSuccess status: \"True\" type: Reachable failures: - latency: 2.241775ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:10:34Z\" - latency: 2.582129ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:09:34Z\" - latency: 3.483578ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:08:34Z\" outages: - end: \"2021-01-13T20:11:34Z\" endLogs: - latency: 2.032018ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T20:11:34Z\" - latency: 2.241775ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:10:34Z\" - latency: 2.582129ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:09:34Z\" - latency: 3.483578ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:08:34Z\" message: Connectivity restored after 2m59.999789186s start: \"2021-01-13T20:08:34Z\" startLogs: - latency: 3.483578ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:08:34Z\" successes: - latency: 2.845865ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:14:34Z\" - latency: 2.926345ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:13:34Z\" - latency: 2.895796ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:12:34Z\" - 
latency: 2.696844ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:11:34Z\" - latency: 1.502064ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:10:34Z\" - latency: 1.388857ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:09:34Z\" - latency: 1.906383ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:08:34Z\" - latency: 2.089073ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:07:34Z\" - latency: 2.156994ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:06:34Z\" - latency: 1.777043ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:05:34Z\""
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/networking/verifying-connectivity-endpoint |
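When a cluster has many PodNetworkConnectivityCheck objects, reading each one with oc get -o yaml is slow. The following sketch prints a one-line reachability overview per check; it assumes only the Reachable condition type shown in the example output above, and relies on standard oc jsonpath filter syntax:

oc get podnetworkconnectivitycheck -n openshift-network-diagnostics -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Reachable")].status}{"\n"}{end}'

Checks that print False can then be inspected individually with the oc get ... -o yaml command described in the procedure.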
Preface | Preface This guide describes the updates in Eclipse Vert.x 4 release. Use the information to upgrade your Eclipse Vert.x 3.x applications to Eclipse Vert.x 4. It provides information about the new, deprecated and unsupported features in this release. Depending on the modules used in your application, you can read the relevant section to know more about the changes in Eclipse Vert.x 4. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_eclipse_vert.x/4.3/html/eclipse_vert.x_4.3_migration_guide/pr01 |
Chapter 11. Configuring polyinstantiated directories | Chapter 11. Configuring polyinstantiated directories By default, all programs, services, and users use the /tmp , /var/tmp , and home directories for temporary storage. This makes these directories vulnerable to race condition attacks and information leaks based on file names. You can make /tmp/ , /var/tmp/ , and the home directory polyinstantiated so that they are no longer shared between all users, and each user's instance under /tmp-inst and /var/tmp/tmp-inst is separately mounted over the /tmp and /var/tmp directories. Procedure Enable polyinstantiation in SELinux: You can verify that polyinstantiation is enabled in SELinux by entering the getsebool allow_polyinstantiation command. Create the directory structure for data persistence over reboot with the necessary permissions: Restore the entire security context including the SELinux user part: If your system uses the fapolicyd application control framework, allow fapolicyd to monitor file access events on the underlying file system when they are bind mounted by enabling the allow_filesystem_mark option in the /etc/fapolicyd/fapolicyd.conf configuration file. Enable instantiation of the /tmp , /var/tmp/ , and users' home directories: Important Use /etc/security/namespace.conf instead of a separate file in the /etc/security/namespace.d/ directory because the pam_namespace_helper program does not read additional files in /etc/security/namespace.d . On a system with multi-level security (MLS), uncomment the last three lines in the /etc/security/namespace.conf file: On a system without multi-level security (MLS), add the following lines in the /etc/security/namespace.conf file: Verify that the pam_namespace.so module is configured for the session: Optional: Enable cloud users to access the system with SSH keys: Install the openssh-keycat package. Create a file in the /etc/ssh/sshd_config.d/ directory with the following content: Verify that public key authentication is enabled by checking that the PubkeyAuthentication variable in sshd_config is set to yes . By default, PubkeyAuthentication is set to yes, even though the line in sshd_config is commented out. Add the session required pam_namespace.so unmnt_remnt entry into the module for each service for which polyinstantiation should apply, after the session include system-auth line. For example, in /etc/pam.d/su , /etc/pam.d/sudo , /etc/pam.d/ssh , and /etc/pam.d/sshd : Verification Log in as a non-root user. Users that were logged in before polyinstantiation was configured must log out and log in before the changes take effect for them. Check that the /tmp/ directory is mounted under /tmp-inst/ : The SOURCE output differs based on your environment. * On virtual systems, it shows /dev/vda<number> . * On bare-metal systems it shows /dev/sda<number> or /dev/nvme* Additional resources /usr/share/doc/pam/txts/README.pam_namespace readme file installed with the pam package.
"setsebool -P allow_polyinstantiation 1",
"mkdir /tmp-inst /var/tmp/tmp-inst --mode 000",
"restorecon -Fv /tmp-inst /var/tmp/tmp-inst Relabeled /tmp-inst from unconfined_u:object_r:default_t:s0 to system_u:object_r:tmp_t:s0 Relabeled /var/tmp/tmp-inst from unconfined_u:object_r:tmp_t:s0 to system_u:object_r:tmp_t:s0",
"allow_filesystem_mark = 1",
"/tmp /tmp-inst/ level root,adm /var/tmp /var/tmp/tmp-inst/ level root,adm USDHOME USDHOME/USDUSER.inst/ level",
"/tmp /tmp-inst/ user root,adm /var/tmp /var/tmp/tmp-inst/ user root,adm USDHOME USDHOME/USDUSER.inst/ user",
"grep namespace /etc/pam.d/login session required pam_namespace.so",
"AuthorizedKeysCommand /usr/libexec/openssh/ssh-keycat AuthorizedKeysCommandRunAs root",
"grep -r PubkeyAuthentication /etc/ssh/ /etc/ssh/sshd_config:#PubkeyAuthentication yes",
"[...] session include system-auth session required pam_namespace.so unmnt_remnt [...]",
"findmnt --mountpoint /tmp/ TARGET SOURCE FSTYPE OPTIONS /tmp /dev/vda1[/tmp-inst/ <user> ] xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/using_selinux/configuring-polyinstantiated-directories_using-selinux |
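Beyond the findmnt check, the isolation itself can be spot-checked by comparing what two different users see in /tmp. The following sketch assumes the user method from namespace.conf, two existing accounts named user1 and user2 (hypothetical names), and that pam_namespace is active for su as configured in the procedure:

ls /tmp-inst/                          # as root: one instance directory per user who has logged in
su - user1 -c 'touch /tmp/user1-file'  # user1 writes into its own /tmp instance
su - user2 -c 'ls /tmp/'               # user1-file is not listed; user2 sees only its own instance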
Chapter 1. Camel Spring Boot release notes | Chapter 1. Camel Spring Boot release notes 1.1. Camel Spring Boot features Camel Spring Boot introduces Camel support for Spring Boot which provides auto-configuration of the Camel and starters for many Camel components. The opinionated auto-configuration of the Camel context auto-detects Camel routes available in the Spring context and registers the key Camel utilities (like producer template, consumer template and the type converter) as beans. 1.2. Supported platforms, configurations, databases, and extensions for Camel Spring Boot For information about supported platforms, configurations, and databases in Camel Spring Boot, see the Supported Configuration page on the Customer Portal (login required). For a list of Red Hat Camel Spring Boot extensions, see the Camel Spring Boot Reference (login required). 1.3. Important notes Documentation for Camel Spring Boot components is available in the Camel Spring Boot Reference . Documentation for additional Camel Spring Boot components will be added to this reference guide. Migration from Fuse 7.11 to Camel Spring Boot This release contains a Migration Guide documenting the changes required to successfully run and deploy Fuse 7.11 applications on Camel Spring Boot. It provides information on how to resolve deployment and runtime problems and prevent changes in application behavior. Migration is the first step in moving to the Camel Spring Boot platform. Once the application deploys successfully and runs, users can plan to upgrade individual components to use the new functions and features of Camel Spring Boot. Support for EIP circuit breaker The Circuit Breaker EIP for Camel Spring Boot supports Resilience4j configuration. This configuration provides integration with Resilience4j to be used as Circuit Breaker in Camel routes. Technology Preview extensions The following extensions are supported as Technology Preview for CSB 3.20 release version. camel-spring-batch-starter camel-spring-jdbc-starter camel-spring-ldap-starter camel-spring-rabbitmq-starter camel-spring-redis-starter camel-spring-security-starter camel-spring-ws-starter 1.4. Camel Spring Boot Fixed Issues The following sections list the issues that have been fixed in Camel Spring Boot. Section 1.4.1, "Camel Spring Boot version 3.20.7 Fixed Issues" Section 1.4.2, "Camel Spring Boot version 3.20.6 Fixed Issues" Section 1.4.3, "Camel Spring Boot version 3.20.5 Fixed Issues" Section 1.4.4, "Camel Spring Boot version 3.20.4 Fixed Issues" Section 1.4.5, "Camel Spring Boot version 3.20.3 Fixed Issues" Section 1.4.6, "Camel Spring Boot version 3.20.2 Fixed Issues" Section 1.4.7, "Camel Spring Boot version 3.20.1 Update 1 Fixed Issues" Section 1.4.8, "Camel Spring Boot version 3.20 Fixed Issues" 1.4.1. Camel Spring Boot version 3.20.7 Fixed Issues The following table lists the resolved bugs in Camel Spring Boot version 3.20.7 Table 1.1. 
Camel Spring Boot version 3.20.7 Resolved Bugs Issue Description CSB-4621 CVE-2024-5971 undertow: response write hangs in case of Java 17 TLSv1.3 NewSessionTicket CSB-4963 CVE-2024-29736 org.apache.cxf/cxf-rt-rs-service-description: SSRF via WADL stylesheet parameter CSB-4972 CVE-2024-32007 org.apache.cxf/cxf-rt-rs-security-jose: apache: cxf: org.apache.cxf:cxf-rt-rs-security-jose: Denial of Service vulnerability in JOSE CSB-5000 CVE-2023-42809 org.redisson/redisson: Redisson vulnerable to Deserialization of Untrusted Data] CSB-5025 CVE-2024-7885 undertow: Improper State Management in Proxy Protocol parsing causes information leakage CSB-5385 CVE-2023-52428 com.nimbusds/nimbus-jose-jwt: large JWE p2c header value causes Denial of Service CSB-5401 CVE-2024-45294 ca.uhn.hapi.fhir/org.hl7.fhir.dstu2016may: XXE vulnerability in XSLT transforms in org.hl7.fhir.core CSB-5404 CVE-2024-45294 ca.uhn.hapi.fhir/org.hl7.fhir.dstu3: XXE vulnerability in XSLT transforms in org.hl7.fhir.core CSB-5407 CVE-2024-45294 ca.uhn.hapi.fhir/org.hl7.fhir.r4: XXE vulnerability in XSLT transforms in org.hl7.fhir.core CSB-5410 CVE-2024-45294 ca.uhn.hapi.fhir/org.hl7.fhir.r5: XXE vulnerability in XSLT transforms in org.hl7.fhir.core CSB-5413 CVE-2024-45294 ca.uhn.hapi.fhir/org.hl7.fhir.utilities: XXE vulnerability in XSLT transforms in org.hl7.fhir.core 1.4.2. Camel Spring Boot version 3.20.6 Fixed Issues The following table lists the resolved bugs in Camel Spring Boot version 3.20.6 Table 1.2. Camel Spring Boot version 3.20.6 Resolved Bugs Issue Description CSB-3963 CVE-2024-28752 cxf-core: Apache CXF SSRF Vulnerability using the Aegis databinding CSB-4099 CVE-2024-22262 springframework: URL Parsing with Host Validation CSB-4133 CVE-2023-44483 santuario: Private Key disclosure in debug-log output CSB-4328 CVE-2022-34169 xalan: OpenJDK: integer truncation issue in Xalan-J (JAXP, 8285407) CSB-4434 CVE-2022-45685 jettison: stack overflow in JSONObject() allows attackers to cause a Denial of Service (DoS) via crafted JSON data 1.4.3. Camel Spring Boot version 3.20.5 Fixed Issues The following table lists the resolved bugs in Camel Spring Boot version 3.20.5 Table 1.3. Camel Spring Boot version 3.20.5 Resolved Bugs Issue Description CSB-3313 CVE-2023-51074 json-path: stack-based buffer overflow in Criteria.parse method 1.4.4. Camel Spring Boot version 3.20.4 Fixed Issues The following table lists the resolved bugs in Camel Spring Boot version 3.20.4. Table 1.4. Camel Spring Boot version 3.20.4 Resolved Bugs Issue Description CSB-2942 CVE-2023-5072 JSON-java: parser confusion leads to OOM 1.4.5. Camel Spring Boot version 3.20.3 Fixed Issues The following table lists the resolved bugs in Camel Spring Boot version 3.20.3 Table 1.5. Camel Spring Boot version 3.20.3 Resolved Bugs Issue Description CSB-2688 CVE-2023-44487 netty-codec-http2: HTTP/2: Multiple HTTP/2 enabled web servers are vulnerable to a DDoS attack (Rapid Reset Attack) CSB-2694 CVE-2023-44487 undertow: HTTP/2: Multiple HTTP/2 enabled web servers are vulnerable to a DDoS attack (Rapid Reset Attack) 1.4.6. Camel Spring Boot version 3.20.2 Fixed Issues The following table lists the resolved bugs in Camel Spring Boot version 3.20.2 Table 1.6. Camel Spring Boot version 3.20.2 Resolved Bugs Issue Description CSB-2340 CVE-2023-20873 spring-boot: Security Bypass With Wildcard Pattern Matching on Cloud Foundry [rhint-camel-spring-boot-3.20] CSB-2350 CVE-2023-34455 snappy-java: Unchecked chunk length leads to DoS [rhint-camel-spring-boot-3.20] 1.4.7. 
Camel Spring Boot version 3.20.1 Update 1 Fixed Issues The following table lists the resolved bugs in Camel Spring Boot version 3.20.1 Update 1. Table 1.7. Camel Spring Boot version 3.20.1 Update 1 Resolved Bugs Issue Description CSB-1524 CVE-2022-31690 spring-security-oauth2-client: Privilege Escalation in spring-security-oauth2-client [rhint-camel-spring-boot-3] CSB-1718 CVE-2023-20883 spring-boot: Spring Boot Welcome Page DoS Vulnerability [rhint-camel-spring-boot-3.20] CSB-1719 CVE-2023-24815 vertx-web: StaticHandler disclosure of classpath resources on Windows when mounted on a wildcard route [rhint-camel-spring-boot-3.20] CSB-1760 CXF TrustedAuthorityValidatorTest failure CSB-1821 Backport CAMEL-19421 - Camel-Jira: Use Files.createTempFile in FileConverter instead of creating File directly 1.4.8. Camel Spring Boot version 3.20 Fixed Issues The following table lists the resolved bugs in Camel Spring Boot version 3.20. Table 1.8. Camel Spring Boot version 3.20 Resolved Bugs Issue Description CSB-656 CVE-2022-25857 snakeyaml: Denial of Service due to missing nested depth limitation for collections [rhint-camel-spring-boot-3] CSB-699 CVE-2022-40156 xstream: Xstream to serialise XML data was vulnerable to Denial of Service attacks [rhint-camel-spring-boot-3] CSB-702 CVE-2022-40152 woodstox-core: woodstox to serialise XML data was vulnerable to Denial of Service attacks [rhint-camel-spring-boot-3] CSB-703 CVE-2022-40151 xstream: Xstream to serialise XML data was vulnerable to Denial of Service attacks [rhint-camel-spring-boot-3] CSB-714 CVE-2022-38752 snakeyaml: Uncaught exception in java.base/java.util.ArrayList.hashCode [rhint-camel-spring-boot-3] CSB-715 CVE-2022-38751 snakeyaml: Uncaught exception in java.base/java.util.regex.PatternUSDQues.match [rhint-camel-spring-boot-3] CSB-716 CVE-2022-38750 snakeyaml: Uncaught exception in org.yaml.snakeyaml.constructor.BaseConstructor.constructObject [rhint-camel-spring-boot-3] CSB-717 CVE-2022-38749 snakeyaml: Uncaught exception in org.yaml.snakeyaml.composer.Composer.composeSequenceNode [rhint-camel-spring-boot-3] CSB-719 CVE-2022-42003 jackson-databind: deep wrapper array nesting wrt UNWRAP_SINGLE_VALUE_ARRAYS [rhint-camel-spring-boot-3] CSB-720 CVE-2022-42004 jackson-databind: use of deeply nested arrays [rhint-camel-spring-boot-3] CSB-721 CVE-2022-41852 JXPath: untrusted XPath expressions may lead to RCE attack [rhint-camel-spring-boot-3] CSB-722 CVE-2022-41853 hsqldb: Untrusted input may lead to RCE attack [rhint-camel-spring-boot-3] CSB-751 CVE-2022-33681 org.apache.pulsar-pulsar-client: Apache Pulsar: Improper Hostname Verification in Java Client and Proxy can expose authentication data via MITM [rhint-camel-spring-boot-3] CSB-794 CVE-2022-40150 jettison: memory exhaustion via user-supplied XML or JSON data [rhint-camel-spring-boot-3] CSB-811 CVE-2022-39368 scandium: Failing DTLS handshakes may cause throttling to block processing of records [rhint-camel-spring-boot-3] CSB-813 CVE-2022-31777 apache-spark: XSS vulnerability in log viewer UI Javascript [rhint-camel-spring-boot-3] CSB-819 camel-kafka-starter: KafkaConsumerHealthCheckIT is not working CSB-820 l2x6 cq-maven-plugin setting wrong version for camel-avro-rpc-component CSB-851 camel-cxf-rest-starter: EchoService is not an interface error on JDK 17 CSB-852 camel-infinispan-starter : tests fail on FIPS enabled environment CSB-883 CVE-2022-37866 apache-ivy: : Apache Ivy: Ivy Path traversal [rhint-camel-spring-boot-3] CSB-904 CVE-2022-41881 codec-haproxy: HAProxyMessageDecoder Stack 
Exhaustion DoS [rhint-camel-spring-boot-3] CSB-905 CVE-2022-41854 dev-java-snakeyaml: dev-java/snakeyaml: DoS via stack overflow [rhint-camel-spring-boot-3] CSB-906 [archetype] OMP version in openshift profile CSB-929 CVE-2022-38648 batik: Server-Side Request Forgery [rhint-camel-spring-boot-3] CSB-930 CVE-2022-38398 batik: Server-Side Request Forgery [rhint-camel-spring-boot-3] CSB-931 CVE-2022-40146 batik: Server-Side Request Forgery (SSRF) vulnerability [rhint-camel-spring-boot-3] CSB-942 CVE-2022-4492 undertow: Server identity in https connection is not checked by the undertow client [rhint-camel-spring-boot-3] CSB-1203 CVE-2022-45047 sshd-common: mina-sshd: Java unsafe deserialization vulnerability CSB-1239 SAP quickstart spring-boot examples have circular references CSB-1242 The camel-salesforce-maven-plugin:3.20.1 fails when running with openJDK11 in FIPS mode CSB-1274 CVE-2021-37533 apache-commons-net: FTP client trusts the host from PASV response by default [rhint-camel-spring-boot-3] CSB-1334 CVE-2023-24998 tomcat: Apache Commons FileUpload: FileUpload DoS with excessive parts [rhint-camel-spring-boot-3] CSB-1335 CVE-2022-41966 xstream: Denial of Service by injecting recursive collections or maps based on element's hash values raising a stack overflow [rhint-camel-spring-boot-3] CSB-1373 FIPS-mode: Invalid algorythms & security issues on some camel components CSB-1404 The Spring Boot version is wrong in the BOM CSB-1436 CVE-2023-20860 springframework: Security Bypass With Un-Prefixed Double Wildcard Pattern [rhint-camel-spring-boot-3] CSB-1437 CVE-2023-20861 springframework: Spring Expression DoS Vulnerability [rhint-camel-spring-boot-3] CSB-1441 CVE-2022-42890 batik: Untrusted code execution in Apache XML Graphics Batik [rhint-camel-spring-boot-3] CSB-1442 CVE-2022-41704 batik: Apache XML Graphics Batik vulnerable to code execution via SVG [rhint-camel-spring-boot-3] CSB-1443 CVE-2022-37865 apache-ivy: Directory Traversal [rhint-camel-spring-boot-3] CSB-1444 CVE-2023-22602 shiro-core: shiro: Authentication bypass through a specially crafted HTTP request [rhint-camel-spring-boot-3] CSB-1482 CVE-2023-1436 jettison: Uncontrolled Recursion in JSONArray [rhint-camel-spring-boot-3] CSB-1499 Classes generated by camel-openapi-rest-dsl-generator are not added to jar CSB-1533 [cxfrs-component] camel-cxf-rest-starter needs cxf-spring-boot-autoconfigure CSB-1536 CVE-2023-20863 springframework: Spring Expression DoS Vulnerability [rhint-camel-spring-boot-3.14] CSB-1540 CVE-2023-1370 json-smart: Uncontrolled Resource Consumption vulnerability in json-smart (Resource Exhaustion) [rhint-camel-spring-boot-3.18] 1.5. Advisories related to this release The following advisories have been issued to document enhancements, bugfixes, and CVE fixes included in this release. RHSA-2024:6883 RHSA-2024:3708 RHSA-2024:0792 RHSA-2023:7845 RHSA-2023:6079 RHSA-2023:5148 RHSA-2023:3740 RHSA-2023:2100 1.6. Additional resources Supported Configurations Camel Spring Boot Reference Getting Started with Camel Spring Boot Migration Guide | null | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_for_spring_boot/3.20/html/release_notes_for_red_hat_build_of_apache_camel_for_spring_boot_3.20/camel-spring-boot-relnotes_integration |
15.5.2. Useful Websites | 15.5.2. Useful Websites http://www.rpm.org/ - The RPM website. http://www.redhat.com/mailman/listinfo/rpm-list/ - The RPM mailing list is archived here. To subscribe, send mail to [email protected] with the word subscribe in the subject line. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/package_management_with_rpm-additional_resources-useful_websites |
Chapter 22. General Parameters and Modules | Chapter 22. General Parameters and Modules This chapter is provided to illustrate some of the possible parameters available for common hardware device drivers [10] , which under Red Hat Enterprise Linux are called kernel modules . In most cases, the default parameters do work. However, there may be times when extra module parameters are necessary for a device to function properly or to override the module's default parameters for the device. During installation, Red Hat Enterprise Linux uses a limited subset of device drivers to create a stable installation environment. Although the installation program supports installation on many different types of hardware, some drivers (including those for SCSI adapters and network adapters) are not included in the installation kernel. Rather, they must be loaded as modules by the user at boot time. Once installation is completed, support exists for a large number of devices through kernel modules. Important Red Hat provides a large number of unsupported device drivers in groups of packages called kernel-smp-unsupported- <kernel-version> and kernel-hugemem-unsupported- <kernel-version> . Replace <kernel-version> with the version of the kernel installed on the system. These packages are not installed by the Red Hat Enterprise Linux installation program, and the modules provided are not supported by Red Hat, Inc. 22.1. Kernel Module Utilities A group of commands for managing kernel modules is available if the module-init-tools package is installed. Use these commands to determine if a module has been loaded successfully or when trying different modules for a piece of new hardware. The command /sbin/lsmod displays a list of currently loaded modules. For example: For each line, the first column is the name of the module, the second column is the size of the module, and the third column is the use count. The /sbin/lsmod output is less verbose and easier to read than the output from viewing /proc/modules . To load a kernel module, use the /sbin/modprobe command followed by the kernel module name. By default, modprobe attempts to load the module from the /lib/modules/ <kernel-version> /kernel/drivers/ subdirectories. There is a subdirectory for each type of module, such as the net/ subdirectory for network interface drivers. Some kernel modules have module dependencies, meaning that other modules must be loaded first for it to load. The /sbin/modprobe command checks for these dependencies and loads the module dependencies before loading the specified module. For example, the command loads any module dependencies and then the e100 module. To print to the screen all commands as /sbin/modprobe executes them, use the -v option. For example: Output similar to the following is displayed: The /sbin/insmod command also exists to load kernel modules; however, it does not resolve dependencies. Thus, it is recommended that the /sbin/modprobe command be used. To unload kernel modules, use the /sbin/rmmod command followed by the module name. The rmmod utility only unloads modules that are not in use and that are not a dependency of other modules in use. For example, the command unloads the e100 kernel module. Another useful kernel module utility is modinfo . Use the command /sbin/modinfo to display information about a kernel module. The general syntax is: Options include -d , which displays a brief description of the module, and -p , which lists the parameters the module supports. 
For a complete list of options, refer to the modinfo man page ( man modinfo ). [10] A driver is software which enables Linux to use a particular hardware device. Without a driver, the kernel cannot communicate with attached devices. | [
"Module Size Used by tun 11585 1 autofs4 21573 1 hidp 16193 2 rfcomm 37849 0 l2cap 23873 10 hidp,rfcomm bluetooth 50085 5 hidp,rfcomm,l2cap sunrpc 153725 1 dm_mirror 29073 0 dm_mod 57433 1 dm_mirror video 17221 0 sbs 16257 0 i2c_ec 5569 1 sbs container 4801 0 button 7249 0 battery 10565 0 asus_acpi 16857 0 ac 5701 0 ipv6 246113 12 lp 13065 0 parport_pc 27493 1 parport 37001 2 lp,parport_pc uhci_hcd 23885 0 floppy 57317 1 sg 34653 0 snd_ens1371 26721 1 gameport 16073 1 snd_ens1371 snd_rawmidi 24897 1 snd_ens1371 snd_ac97_codec 91360 1 snd_ens1371 snd_ac97_bus 2753 1 snd_ac97_codec snd_seq_dummy 4293 0 snd_seq_oss 32705 0 serio_raw 7493 0 snd_seq_midi_event 8001 1 snd_seq_oss snd_seq 51633 5 snd_seq_dummy,snd_seq_oss,snd_seq_midi_event snd_seq_device 8781 4 snd_rawmidi,snd_seq_dummy,snd_seq_oss,snd_seq snd_pcm_oss 42849 0 snd_mixer_oss 16833 1 snd_pcm_oss snd_pcm 76485 3 snd_ens1371,snd_ac97_codec,snd_pcm_oss snd_timer 23237 2 snd_seq,snd_pcm snd 52933 12 snd_ens1371,snd_rawmidi,snd_ac97_codec,snd_seq_oss,snd_seq,snd_seq_device,snd_pcm_oss,snd_mixer_oss,snd_pcm,snd_timer soundcore 10145 1 snd i2c_piix4 8909 0 ide_cd 38625 3 snd_page_alloc 10569 1 snd_pcm i2c_core 21697 2 i2c_ec,i2c_piix4 pcnet32 34117 0 cdrom 34913 1 ide_cd mii 5825 1 pcnet32 pcspkr 3521 0 ext3 129737 2 jbd 58473 1 ext3 mptspi 17353 3 scsi_transport_spi 25025 1 mptspi mptscsih 23361 1 mptspi sd_mod 20929 16 scsi_mod 134121 5 sg,mptspi,scsi_transport_spi,mptscsih,sd_mod mptbase 52193 2 mptspi,mptscsih",
"/sbin/modprobe e100",
"/sbin/modprobe -v e100",
"/sbin/insmod /lib/modules/2.6.9-5.EL/kernel/drivers/net/e100.ko Using /lib/modules/2.6.9-5.EL/kernel/drivers/net/e100.ko Symbol version prefix 'smp_'",
"/sbin/rmmod e100",
"/sbin/modinfo [options] <module>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/ch-modules |
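As a concrete illustration of the modinfo options described above, the following commands query the e100 driver used in the earlier examples; the parameter and value in the modprobe line are placeholders to be filled in from the -p output rather than recommended settings:

/sbin/modinfo -d e100                      # print a brief description of the module
/sbin/modinfo -p e100                      # list the parameters the module supports
/sbin/modprobe e100 <parameter>=<value>    # load the module with an extra parameter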
20.45. Display or Set Block I/O Parameters | 20.45. Display or Set Block I/O Parameters The blkiotune command sets or displays the I/O parameters for a specified guest virtual machine. The following format should be used: More information on this command can be found in the Virtualization Tuning and Optimization Guide | [
"virsh blkiotune domain [--weight weight ] [--device-weights device-weights ] [---device-read-iops-sec -device-read-iops-sec ] [--device-write-iops-sec device-write-iops-sec ] [--device-read-bytes-sec device-read-bytes-sec ] [--device-write-bytes-sec device-write-bytes-sec ] [[--config] [--live] | [--current]]"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-Managing_guest_virtual_machines_with_virsh-Display_or_set_block_IO_parameters |
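A usage sketch for the syntax above; the domain name guest1 is illustrative, and running the command without any tuning options simply displays the guest's current block I/O settings:

virsh blkiotune guest1                          # display the current block I/O parameters
virsh blkiotune guest1 --weight 500 --config    # set a relative weight and persist it in the guest configuration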
Chapter 3. Getting started | Chapter 3. Getting started This chapter guides you through the steps to set up your environment and run a simple messaging program. 3.1. Prerequisites You must complete the installation procedure for your environment. You must have an AMQP 1.0 message broker listening for connections on interface localhost and port 5672 . It must have anonymous access enabled. For more information, see Starting the broker . You must have a queue named examples . For more information, see Creating a queue . 3.2. Running Hello World on Red Hat Enterprise Linux The Hello World example creates a connection to the broker, sends a message containing a greeting to the examples queue, and receives it back. On success, it prints the received message to the console. Procedure Copy the examples to a location of your choosing. USD cp -r /usr/share/proton/examples/cpp cpp-examples Create a build directory and change to that directory: USD mkdir cpp-examples/bld USD cd cpp-examples/bld Use cmake to configure the build and use make to compile the examples. USD cmake .. USD make Run the helloworld program. USD ./helloworld Hello World! | [
"cp -r /usr/share/proton/examples/cpp cpp-examples",
"mkdir cpp-examples/bld cd cpp-examples/bld",
"cmake .. make",
"./helloworld Hello World!"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_cpp_client/getting_started |
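If the helloworld run fails to connect, a quick check that a broker is actually listening on the interface and port named in the prerequisites can save time; this sketch assumes the ss utility from iproute2 is available on the host:

ss -ltn 'sport = :5672'    # a listening AMQP socket on port 5672 should appear in the output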
Chapter 3. Configuring the internal OAuth server | Chapter 3. Configuring the internal OAuth server 3.1. OpenShift Container Platform OAuth server The OpenShift Container Platform master includes a built-in OAuth server. Users obtain OAuth access tokens to authenticate themselves to the API. When a person requests a new OAuth token, the OAuth server uses the configured identity provider to determine the identity of the person making the request. It then determines what user that identity maps to, creates an access token for that user, and returns the token for use. 3.2. OAuth token request flows and responses The OAuth server supports standard authorization code grant and the implicit grant OAuth authorization flows. When requesting an OAuth token using the implicit grant flow ( response_type=token ) with a client_id configured to request WWW-Authenticate challenges (like openshift-challenging-client ), these are the possible server responses from /oauth/authorize , and how they should be handled: Status Content Client response 302 Location header containing an access_token parameter in the URL fragment ( RFC 6749 section 4.2.2 ) Use the access_token value as the OAuth token. 302 Location header containing an error query parameter ( RFC 6749 section 4.1.2.1 ) Fail, optionally surfacing the error (and optional error_description ) query values to the user. 302 Other Location header Follow the redirect, and process the result using these rules. 401 WWW-Authenticate header present Respond to challenge if type is recognized (e.g. Basic , Negotiate , etc), resubmit request, and process the result using these rules. 401 WWW-Authenticate header missing No challenge authentication is possible. Fail and show response body (which might contain links or details on alternate methods to obtain an OAuth token). Other Other Fail, optionally surfacing response body to the user. 3.3. Options for the internal OAuth server Several configuration options are available for the internal OAuth server. 3.3.1. OAuth token duration options The internal OAuth server generates two kinds of tokens: Token Description Access tokens Longer-lived tokens that grant access to the API. Authorize codes Short-lived tokens whose only use is to be exchanged for an access token. You can configure the default duration for both types of token. If necessary, you can override the duration of the access token by using an OAuthClient object definition. 3.3.2. OAuth grant options When the OAuth server receives token requests for a client to which the user has not previously granted permission, the action that the OAuth server takes is dependent on the OAuth client's grant strategy. The OAuth client requesting token must provide its own grant strategy. You can apply the following default methods: Grant option Description auto Auto-approve the grant and retry the request. prompt Prompt the user to approve or deny the grant. 3.4. Configuring the internal OAuth server's token duration You can configure default options for the internal OAuth server's token duration. Important By default, tokens are only valid for 24 hours. Existing sessions expire after this time elapses. If the default time is insufficient, then this can be modified using the following procedure. Procedure Create a configuration file that contains the token duration options. The following file sets this to 48 hours, twice the default. 
apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: tokenConfig: accessTokenMaxAgeSeconds: 172800 1 1 Set accessTokenMaxAgeSeconds to control the lifetime of access tokens. The default lifetime is 24 hours, or 86400 seconds. This attribute cannot be negative. If set to zero, the default lifetime is used. Apply the new configuration file: Note Because you update the existing OAuth server, you must use the oc apply command to apply the change. USD oc apply -f </path/to/file.yaml> Confirm that the changes are in effect: USD oc describe oauth.config.openshift.io/cluster Example output ... Spec: Token Config: Access Token Max Age Seconds: 172800 ... 3.5. Configuring token inactivity timeout for the internal OAuth server You can configure OAuth tokens to expire after a set period of inactivity. By default, no token inactivity timeout is set. Note If the token inactivity timeout is also configured in your OAuth client, that value overrides the timeout that is set in the internal OAuth server configuration. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have configured an identity provider (IDP). Procedure Update the OAuth configuration to set a token inactivity timeout. Edit the OAuth object: USD oc edit oauth cluster Add the spec.tokenConfig.accessTokenInactivityTimeout field and set your timeout value: apiVersion: config.openshift.io/v1 kind: OAuth metadata: ... spec: tokenConfig: accessTokenInactivityTimeout: 400s 1 1 Set a value with the appropriate units, for example 400s for 400 seconds, or 30m for 30 minutes. The minimum allowed timeout value is 300s . Save the file to apply the changes. Check that the OAuth server pods have restarted: USD oc get clusteroperators authentication Do not continue to the step until PROGRESSING is listed as False , as shown in the following output: Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.17.0 True False False 145m Check that a new revision of the Kubernetes API server pods has rolled out. This will take several minutes. USD oc get clusteroperators kube-apiserver Do not continue to the step until PROGRESSING is listed as False , as shown in the following output: Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE kube-apiserver 4.17.0 True False False 145m If PROGRESSING is showing True , wait a few minutes and try again. Verification Log in to the cluster with an identity from your IDP. Execute a command and verify that it was successful. Wait longer than the configured timeout without using the identity. In this procedure's example, wait longer than 400 seconds. Try to execute a command from the same identity's session. This command should fail because the token should have expired due to inactivity longer than the configured timeout. Example output error: You must be logged in to the server (Unauthorized) 3.6. Customizing the internal OAuth server URL You can customize the internal OAuth server URL by setting the custom hostname and TLS certificate in the spec.componentRoutes field of the cluster Ingress configuration. Warning If you update the internal OAuth server URL, you might break trust from components in the cluster that need to communicate with the OpenShift OAuth server to retrieve OAuth access tokens. Components that need to trust the OAuth server will need to include the proper CA bundle when calling OAuth endpoints. 
For example: USD oc login -u <username> -p <password> --certificate-authority=<path_to_ca.crt> 1 1 For self-signed certificates, the ca.crt file must contain the custom CA certificate, otherwise the login will not succeed. The Cluster Authentication Operator publishes the OAuth server's serving certificate in the oauth-serving-cert config map in the openshift-config-managed namespace. You can find the certificate in the data.ca-bundle.crt key of the config map. Prerequisites You have logged in to the cluster as a user with administrative privileges. You have created a secret in the openshift-config namespace containing the TLS certificate and key. This is required if the domain for the custom hostname suffix does not match the cluster domain suffix. The secret is optional if the suffix matches. Tip You can create a TLS secret by using the oc create secret tls command. Procedure Edit the cluster Ingress configuration: USD oc edit ingress.config.openshift.io cluster Set the custom hostname and optionally the serving certificate and key: apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: componentRoutes: - name: oauth-openshift namespace: openshift-authentication hostname: <custom_hostname> 1 servingCertKeyPairSecret: name: <secret_name> 2 1 The custom hostname. 2 Reference to a secret in the openshift-config namespace that contains a TLS certificate ( tls.crt ) and key ( tls.key ). This is required if the domain for the custom hostname suffix does not match the cluster domain suffix. The secret is optional if the suffix matches. Save the file to apply the changes. 3.7. OAuth server metadata Applications running in OpenShift Container Platform might have to discover information about the built-in OAuth server. For example, they might have to discover what the address of the <namespace_route> is without manual configuration. To aid in this, OpenShift Container Platform implements the IETF OAuth 2.0 Authorization Server Metadata draft specification. Thus, any application running inside the cluster can issue a GET request to https://openshift.default.svc/.well-known/oauth-authorization-server to fetch the following information: 1 The authorization server's issuer identifier, which is a URL that uses the https scheme and has no query or fragment components. This is the location where .well-known RFC 5785 resources containing information about the authorization server are published. 2 URL of the authorization server's authorization endpoint. See RFC 6749 . 3 URL of the authorization server's token endpoint. See RFC 6749 . 4 JSON array containing a list of the OAuth 2.0 RFC 6749 scope values that this authorization server supports. Note that not all supported scope values are advertised. 5 JSON array containing a list of the OAuth 2.0 response_type values that this authorization server supports. The array values used are the same as those used with the response_types parameter defined by "OAuth 2.0 Dynamic Client Registration Protocol" in RFC 7591 . 6 JSON array containing a list of the OAuth 2.0 grant type values that this authorization server supports. The array values used are the same as those used with the grant_types parameter defined by OAuth 2.0 Dynamic Client Registration Protocol in RFC 7591 . 7 JSON array containing a list of PKCE RFC 7636 code challenge methods supported by this authorization server. Code challenge method values are used in the code_challenge_method parameter defined in Section 4.3 of RFC 7636 . 
The valid code challenge method values are those registered in the IANA PKCE Code Challenge Methods registry. See IANA OAuth Parameters . 3.8. Troubleshooting OAuth API events In some cases the API server returns an unexpected condition error message that is difficult to debug without direct access to the API master log. The underlying reason for the error is purposely obscured in order to avoid providing an unauthenticated user with information about the server's state. A subset of these errors is related to service account OAuth configuration issues. These issues are captured in events that can be viewed by non-administrator users. When encountering an unexpected condition server error during OAuth, run oc get events to view these events under ServiceAccount . The following example warns of a service account that is missing a proper OAuth redirect URI: USD oc get events | grep ServiceAccount Example output 1m 1m 1 proxy ServiceAccount Warning NoSAOAuthRedirectURIs service-account-oauth-client-getter system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference> Running oc describe sa/<service_account_name> reports any OAuth events associated with the given service account name. USD oc describe sa/proxy | grep -A5 Events Example output Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 3m 3m 1 service-account-oauth-client-getter Warning NoSAOAuthRedirectURIs system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference> The following is a list of the possible event errors: No redirect URI annotations or an invalid URI is specified Reason Message NoSAOAuthRedirectURIs system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference> Invalid route specified Reason Message NoSAOAuthRedirectURIs [routes.route.openshift.io "<name>" not found, system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>] Invalid reference type specified Reason Message NoSAOAuthRedirectURIs [no kind "<name>" is registered for version "v1", system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>] Missing SA tokens Reason Message NoSAOAuthTokens system:serviceaccount:myproject:proxy has no tokens | [
"apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: tokenConfig: accessTokenMaxAgeSeconds: 172800 1",
"oc apply -f </path/to/file.yaml>",
"oc describe oauth.config.openshift.io/cluster",
"Spec: Token Config: Access Token Max Age Seconds: 172800",
"oc edit oauth cluster",
"apiVersion: config.openshift.io/v1 kind: OAuth metadata: spec: tokenConfig: accessTokenInactivityTimeout: 400s 1",
"oc get clusteroperators authentication",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.17.0 True False False 145m",
"oc get clusteroperators kube-apiserver",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE kube-apiserver 4.17.0 True False False 145m",
"error: You must be logged in to the server (Unauthorized)",
"oc login -u <username> -p <password> --certificate-authority=<path_to_ca.crt> 1",
"oc edit ingress.config.openshift.io cluster",
"apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: componentRoutes: - name: oauth-openshift namespace: openshift-authentication hostname: <custom_hostname> 1 servingCertKeyPairSecret: name: <secret_name> 2",
"{ \"issuer\": \"https://<namespace_route>\", 1 \"authorization_endpoint\": \"https://<namespace_route>/oauth/authorize\", 2 \"token_endpoint\": \"https://<namespace_route>/oauth/token\", 3 \"scopes_supported\": [ 4 \"user:full\", \"user:info\", \"user:check-access\", \"user:list-scoped-projects\", \"user:list-projects\" ], \"response_types_supported\": [ 5 \"code\", \"token\" ], \"grant_types_supported\": [ 6 \"authorization_code\", \"implicit\" ], \"code_challenge_methods_supported\": [ 7 \"plain\", \"S256\" ] }",
"oc get events | grep ServiceAccount",
"1m 1m 1 proxy ServiceAccount Warning NoSAOAuthRedirectURIs service-account-oauth-client-getter system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>",
"oc describe sa/proxy | grep -A5 Events",
"Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 3m 3m 1 service-account-oauth-client-getter Warning NoSAOAuthRedirectURIs system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>",
"Reason Message NoSAOAuthRedirectURIs system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>",
"Reason Message NoSAOAuthRedirectURIs [routes.route.openshift.io \"<name>\" not found, system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>]",
"Reason Message NoSAOAuthRedirectURIs [no kind \"<name>\" is registered for version \"v1\", system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>]",
"Reason Message NoSAOAuthTokens system:serviceaccount:myproject:proxy has no tokens"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/authentication_and_authorization/configuring-internal-oauth |
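As a small illustration of the OAuth endpoints described in Chapter 3 above, the discovery document from section 3.7 and an implicit-grant token request from section 3.2 can both be exercised with curl. A minimal sketch, assuming it runs from inside the cluster; the route, username, and password are placeholders, -k skips TLS verification, and the jq and grep filtering is only illustrative:

# Fetch the OAuth 2.0 authorization server metadata from inside the cluster.
curl -sk https://openshift.default.svc/.well-known/oauth-authorization-server | jq .

# Request a token with the implicit grant (response_type=token) through a challenging client;
# per section 3.2, a successful response is a 302 whose Location header carries an access_token fragment.
curl -sk -o /dev/null -D - -u "<username>:<password>" \
  "https://<oauth_route>/oauth/authorize?client_id=openshift-challenging-client&response_type=token" \
  | grep -i '^location:'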
Chapter 16. Installer and image creation | Chapter 16. Installer and image creation The following chapters contain the most notable changes to installer and image creation between RHEL 8 and RHEL 9. 16.1. Installer Anaconda activates network automatically for interactive installations Anaconda now activates the network automatically when performing interactive installation, without requiring users to manually activate it in the network spoke. This update does not change the installation experience for Kickstart installations and installations using the ip= boot option. New options to Lock root account and Allow root SSH login with password RHEL 9 adds the following new options to the root password configuration screen: Lock root account : To lock the root access to the machine. Allow root SSH login with password : To enable password-based SSH root logins. During Kickstart installations, you can enable root access via SSH with password by using the --allow-ssh option of the rootpw Kickstart command. For more information, see rootpw (required) . Licensing, system, and user setting configuration screens have been disabled post standard installation Previously, RHEL users configured Licensing, System (Subscription manager), and User Settings prior to gnome-initial-setup and login screens. Starting with RHEL 9, the initial setup screens have been disabled by default to improve user experience. If you must run the initial setup for user creation or license display, install the following packages based on the requirements. To install initial setup packages: To enable initial setup after the reboot of the system. Reboot the system to view initial setup. For Kickstart installations, add initial-setup-gui to the packages section and enable the initial-setup service. The rhsm command for machine provisioning through Kickstart for Satellite is now available The rhsm command replaces the %post scripts for machine provisioning on RHEL 9. The rhsm command helps with all provisioning tasks such as registering the system, attaching RHEL subscriptions, and installing from a Satellite instance. New Kickstart command - timesource The new timesource Kickstart command is optional and it helps to set NTP, NTS servers, and NTP pools that provide time data. It also helps to control enabling or disabling the NTP services on the system. The --ntpservers option from the timezone command has been deprecated and has been replaced with this new command. Support for Anaconda boot arguments without inst. prefix is no longer available Anaconda boot arguments without the inst. prefix have been deprecated since RHEL 7. Support for these boot arguments has been removed in RHEL 9. To continue using these options, use the inst. prefix. For example, to force the installation program to run in the text mode instead of the graphical mode, use the following option: Removed Kickstart commands and options The following Kickstart commands and options have been removed from RHEL 9. Using them in Kickstart files causes an error. device deviceprobe dmraid install - use the subcommands or methods directly as commands multipath bootloader --upgrade ignoredisk --interactive partition --active harddrive --biospart autostep Where only specific options and values are listed, the base command and its other options are still available and not removed. Removed boot options The following boot options have been removed from Red Hat Enterprise Linux: inst.zram RHEL 9 does not support the zram service.
See the zram-generator(8) man page for more information. inst.singlelang The single language mode is not supported on RHEL 9. inst.loglevel The log level is always set to debug. 16.2. Image creation RHEL 9.5 introduces the following enhancements over the previous versions: Support for additional Edge image type creation RHEL image builder now supports building RHEL for Edge for AWS edge-ami and VMware vSphere edge-vsphere . Disk image partition table unification Disk images created by using the RHEL image builder tool, such as qcow2 , ami , vhd , vsphere , and gce , now have a separate boot partition with 1 GiB of space. Filesystem customization policy changes in image builder The following policy changes are in place when using the RHEL image builder filesystem customization in blueprints: You can set the mountpoint and minimum partition minsize entries in the blueprint. The following image types do not support filesystem customizations: image-installer edge-installer edge-simplified-installer The following image types do not create partitioned operating system images: edge-commit edge-container tar container Customizing the filesystem of such images has no effect. The blueprint now supports the mountpoint customization for the tpm directory and its sub-directories. RHEL image builder supports creating customized files and directories in the /etc directory With the new [[customizations.files]] and the [[customizations.directories]] blueprint customizations, you can create customized files and directories in the /etc image directory (see the blueprint sketch below). Currently, these customizations are only available in the /etc directory. You can use the customizations for all available image types, except image types that deploy OSTree commits, such as: edge-raw-image edge-installer edge-simplified-installer .vhd images built with RHEL image builder now have support for 64-bit ARM You can now build .vhd images using image builder and upload them to the Microsoft Azure cloud. RHEL image builder supports customized file system partitions on LVM With support for customized file system partitions on LVM, if you add any file system customization to your system, the file systems are converted to an LVM partition. RHEL image builder now supports file system configuration As of Red Hat Enterprise Linux 9.0, Image Builder provides support for users to specify a custom filesystem configuration in blueprints to create images with a specific disk layout, instead of using the default layout configuration. RHEL image builder can create bootable ISO Installer images You can use RHEL image builder GUI and CLI to create bootable ISO Installer images. These images consist of a tar file that contains a root file system which you can use to install directly to a bare-metal server. Support for additional Edge image type creation Starting with 9.4, RHEL image builder supports OpenSCAP customizations in the blueprint by adding a tailoring file for an SCAP security profile. You can add customized tailoring options for a profile to the osbuild-composer blueprint customizations by using the following options: selected for the list of rules that you want to add. unselected for the list of rules that you want to remove. When you build an image from the blueprint customized with a tailoring file for an SCAP security profile, it creates a tailoring file with a new tailoring profile ID and saves it to the image as /usr/share/xml/osbuild-oscap-tailoring/tailoring.xml .
The new profile ID will have _osbuild_tailoring appended as a suffix to the base profile, for example, xccdf_org.ssgproject.content_profile_cis_osbuild_tailoring , if you use the cis base profile. AWS EC2 images now support both BIOS and UEFI boot This update extends the AWS EC2 AMD or Intel 64-bit architecture .ami images created by RHEL image builder to support UEFI boot, in addition to the legacy BIOS boot. Support for building VMware vSphere (OVA) RHEL image builder can build VMware vSphere Open Virtual Appliance (OVA) files that you can deploy more easily to VMware vSphere by using the vSphere GUI client. A new and improved way to create blueprints and images in the RHEL image builder web console With the new unified version of the image builder tool, you can much more easily create blueprints and images. Notable enhancements include the following: You can now use all the customizations previously supported only on the command line, such as kernel, file system, firewall, locale, and other customizations, in the image builder web console. You can import, export, and save blueprints in the .JSON or .TOML format. Ability to create images with support for different partitioning modes With RHEL image builder, you can build VMware vSphere Open Virtual Appliance (OVA) files. You can deploy such files to VMware vSphere by using the vSphere GUI client . Filesystem customization policy changes in image builder The following policy changes are in place when using the RHEL image builder filesystem customization in blueprints: You can set the mountpoint and minimum partition minsize entries in the blueprint. The following image types do not support filesystem customizations: image-installer edge-installer edge-simplified-installer The following image types do not create partitioned operating system images: edge-commit edge-container tar container Customizing the filesystem of such images has no effect. The blueprint now supports the mountpoint customization for the tpm directory and its sub-directories. RHEL image builder supports creating customized files and directories in the /etc directory With the new [[customizations.files]] and the [[customizations.directories]] blueprint customizations, you can create customized files and directories in the /etc image directory. Currently, these customizations are only available in the /etc directory. You can use the customizations for all available image types, except image types that deploy OSTree commits, such as: edge-raw-image edge-installer edge-simplified-installer .vhd images built with RHEL image builder now have support for 64-bit ARM You can now build .vhd images using image builder and upload them to the Microsoft Azure cloud. RHEL image builder supports customized file system partitions on LVM With support for customized file system partitions on LVM, if you add any file system customization to your system, the file systems are converted to an LVM partition. RHEL image builder now supports file system configuration As of Red Hat Enterprise Linux 9.0, Image Builder provides support for users to specify a custom filesystem configuration in blueprints to create images with a specific disk layout, instead of using the default layout configuration. RHEL image builder can create bootable ISO Installer images You can use the image builder GUI and CLI to create bootable ISO Installer images. These images consist of a tar file that contains a root file system which you can use to install directly to a bare-metal server.
"dnf install initial-setup initial-setup-gui",
"systemctl enable initial-setup",
"firstboot --enable %packages @^graphical-server-environment initial-setup-gui %end",
"inst.text"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/considerations_in_adopting_rhel_9/assembly_installer-and-image-creation_considerations-in-adopting-rhel-9 |
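The [[customizations.files]] and [[customizations.directories]] blueprint entries described in section 16.2 above can be tried with a small blueprint. A minimal sketch, assuming osbuild-composer and composer-cli are installed; the blueprint name, paths, and file contents are illustrative:

# Write a blueprint that creates a directory and a file under /etc, then build a qcow2 image from it.
cat > etc-files.toml << 'EOF'
name = "etc-files"
description = "Sketch: custom files and directories under /etc"
version = "0.0.1"

[[customizations.directories]]
path = "/etc/example.d"

[[customizations.files]]
path = "/etc/example.d/example.conf"
data = "key = value\n"
EOF

composer-cli blueprints push etc-files.toml
composer-cli compose start etc-files qcow2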
Chapter 1. Multi-site deployments | Chapter 1. Multi-site deployments Red Hat build of Keycloak supports deployments that consist of multiple Red Hat build of Keycloak instances that connect to each other using their Infinispan caches; load balancers can distribute the load evenly across those instances. Those setups are intended for a transparent network on a single site. The Red Hat build of Keycloak high-availability guide goes one step further to describe setups across multiple sites. While this setup adds additional complexity, that extra amount of high availability may be needed for some environments. 1.1. When to use a multi-site setup The multi-site deployment capabilities of Red Hat build of Keycloak are targeted at use cases that: Are constrained to a single AWS Region. Permit planned outages for maintenance. Fit within a defined user and request count. Can accept the impact of periodic outages. 1.2. Supported Configuration Two OpenShift single-AZ clusters, in the same AWS Region Provisioned with Red Hat OpenShift Service on AWS (ROSA), either ROSA HCP or ROSA classic. Each OpenShift cluster has all its workers in a single Availability Zone. OpenShift version 4.16 (or later). Amazon Aurora PostgreSQL database High availability with a primary DB instance in one Availability Zone, and a synchronously replicated reader in the second Availability Zone Version 16.1 AWS Global Accelerator, sending traffic to both ROSA clusters AWS Lambda to automate failover Any deviation from the configuration above is not supported and any issue must be replicated in that environment for support. Read more on each item in the Building blocks multi-site deployments chapter. 1.3. Maximum load 100,000 users 300 requests per second See the Concepts for sizing CPU and memory resources chapter for more information. 1.4. Limitations During upgrades of Red Hat build of Keycloak or Data Grid, both sites need to be taken offline for the duration of the upgrade. During certain failure scenarios, there may be downtime of up to 5 minutes. After certain failure scenarios, manual intervention may be required to restore redundancy by bringing the failed site back online. During certain switchover scenarios, there may be downtime of up to 5 minutes. For more details on limitations, see the Concepts for multi-site deployments chapter. 1.5. Next steps The different chapters introduce the necessary concepts and building blocks. For each building block, a blueprint shows how to set up a fully functional example. Additional performance tuning and security hardening are still recommended when preparing a production setup. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html/high_availability_guide/introduction-
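The building blocks listed under the supported configuration above can be checked from the command line once they are provisioned. A minimal sketch, assuming the rosa and aws CLIs are installed and authenticated; the query filters are illustrative:

# List the two ROSA clusters that form the deployment.
rosa list clusters

# Show the Aurora PostgreSQL cluster and its writer/reader instances.
aws rds describe-db-clusters \
  --query 'DBClusters[].{id:DBClusterIdentifier,engine:Engine,members:DBClusterMembers[].DBInstanceIdentifier}'

# List the Global Accelerator that fronts both clusters (the Global Accelerator API is served from us-west-2).
aws globalaccelerator list-accelerators --region us-west-2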
9.17. Package Group Selection | 9.17. Package Group Selection Now that you have made most of the choices for your installation, you are ready to confirm the default package selection or customize packages for your system. The Package Installation Defaults screen appears and details the default package set for your Red Hat Enterprise Linux installation. This screen varies depending on the version of Red Hat Enterprise Linux you are installing. Important If you install Red Hat Enterprise Linux in text mode, you cannot make package selections. The installer automatically selects packages only from the base and core groups. These packages are sufficient to ensure that the system is operational at the end of the installation process, ready to install updates and new packages. To change the package selection, complete the installation, then use the Add/Remove Software application to make desired changes. Figure 9.48. Package Group Selection By default, the Red Hat Enterprise Linux installation process loads a selection of software that is suitable for a system deployed as a basic server. Note that this installation does not include a graphical environment. To include a selection of software suitable for other roles, click the radio button that corresponds to one of the following options: Basic Server This option provides a basic installation of Red Hat Enterprise Linux for use on a server. Database Server This option provides the MySQL and PostgreSQL databases. Web server This option provides the Apache web server. Enterprise Identity Server Base This option provides OpenLDAP and Enterprise Identity Management (IPA) to create an identity and authentication server. Virtual Host This option provides the KVM and Virtual Machine Manager tools to create a host for virtual machines. Desktop This option provides the OpenOffice.org productivity suite, graphical tools such as the GIMP , and multimedia applications. Software Development Workstation This option provides the necessary tools to compile software on your Red Hat Enterprise Linux system. Minimal This option provides only the packages essential to run Red Hat Enterprise Linux. A minimal installation provides the basis for a single-purpose server or desktop appliance and maximizes performance and security on such an installation. Warning Minimal installation currently does not configure the firewall ( iptables / ip6tables ) by default because the authconfig and system-config-firewall-base packages are missing from the selection. To work around this issue, you can use a Kickstart file to add these packages to your selection, as shown in the short Kickstart sketch below. See the Red Hat Customer Portal for details about the workaround, and Chapter 32, Kickstart Installations for information about Kickstart files. If you do not use the workaround, the installation will complete successfully, but no firewall will be configured, presenting a security risk. If you choose to accept the current package list, skip ahead to Section 9.19, "Installing Packages" . To select a component, click on the checkbox beside it (refer to Figure 9.48, "Package Group Selection" ). To customize your package set further, select the Customize now option on the screen. Clicking Next takes you to the Package Group Selection screen. 9.17.1. Installing from Additional Repositories You can define additional repositories to increase the software available to your system during installation. A repository is a network location that stores software packages along with metadata that describes them.
Many of the software packages used in Red Hat Enterprise Linux require other software to be installed. The installer uses the metadata to ensure that these requirements are met for every piece of software you select for installation. The basic options are: The High Availability repository includes packages for high-availability clustering (also known as failover clustering ) using the Red Hat High-availability Service Management component. The Load Balancer repository includes packages for load-balancing clustering using Linux Virtual Server (LVS). The Red Hat Enterprise Linux repository is automatically selected for you. It contains the complete collection of software that was released as Red Hat Enterprise Linux 6.9, with the various pieces of software in their versions that were current at the time of release. The Resilient Storage repository includes packages for storage clustering using the Red Hat global file system (GFS). For more information about clustering with Red Hat Enterprise Linux 6.9, refer to the Red Hat Enterprise Linux 6.9 High Availability Add-On Overview , available from https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/High_Availability_Add-On_Overview/index.html . Figure 9.49. Adding a software repository To include software from extra repositories , select Add additional software repositories and provide the location of the repository. To edit an existing software repository location, select the repository in the list and then select Modify repository . If you change the repository information during a non-network installation, such as from a Red Hat Enterprise Linux DVD, the installer prompts you for network configuration information. Figure 9.50. Select network interface Select an interface from the drop-down menu. Click OK . Anaconda then starts NetworkManager to allow you to configure the interface. Figure 9.51. Network Connections For details of how to use NetworkManager , refer to Section 9.7, "Setting the Hostname" If you select Add additional software repositories , the Edit repository dialog appears. Provide a Repository name and the Repository URL for its location. Once you have located a mirror, to determine the URL to use, find the directory on the mirror that contains a directory named repodata . Once you provide information for an additional repository, the installer reads the package metadata over the network. Software that is specially marked is then included in the package group selection system. Warning If you choose Back from the package selection screen, any extra repository data you may have entered is lost. This allows you to effectively cancel extra repositories. Currently there is no way to cancel only a single repository once entered. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s1-pkgselection-x86 |
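The Kickstart workaround mentioned in the Warning for the Minimal option can be expressed as a small %packages addition. A minimal sketch that writes the stanza into a Kickstart file; the file name and the @core group are illustrative, while the two package names come from the text above:

# Append the firewall-related packages to the %packages section of a minimal Kickstart file.
cat >> minimal.ks << 'EOF'
%packages
@core
authconfig
system-config-firewall-base
%end
EOF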
Chapter 8. High availability for hosted control planes | Chapter 8. High availability for hosted control planes 8.1. Recovering an unhealthy etcd cluster In a highly available control plane, three etcd pods run as a part of a stateful set in an etcd cluster. To recover an etcd cluster, identify unhealthy etcd pods by checking the etcd cluster health. 8.1.1. Checking the status of an etcd cluster You can check the status of the etcd cluster health by logging into any etcd pod. Procedure Log in to an etcd pod by entering the following command: USD oc rsh -n openshift-etcd -c etcd <etcd_pod_name> Print the health status of an etcd cluster by entering the following command: sh-4.4# etcdctl endpoint status -w table Example output +------------------------------+-----------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +------------------------------+-----------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://192.168.1xxx.20:2379 | 8fxxxxxxxxxx | 3.5.12 | 123 MB | false | false | 10 | 180156 | 180156 | | | https://192.168.1xxx.21:2379 | a5xxxxxxxxxx | 3.5.12 | 122 MB | false | false | 10 | 180156 | 180156 | | | https://192.168.1xxx.22:2379 | 7cxxxxxxxxxx | 3.5.12 | 124 MB | true | false | 10 | 180156 | 180156 | | +-----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ 8.1.2. Recovering a failing etcd pod Each etcd pod of a 3-node cluster has its own persistent volume claim (PVC) to store its data. An etcd pod might fail because of corrupted or missing data. You can recover a failing etcd pod and its PVC. Procedure To confirm that the etcd pod is failing, enter the following command: USD oc get pods -l app=etcd -n openshift-etcd Example output NAME READY STATUS RESTARTS AGE etcd-0 2/2 Running 0 64m etcd-1 2/2 Running 0 45m etcd-2 1/2 CrashLoopBackOff 1 (5s ago) 64m The failing etcd pod might have the CrashLoopBackOff or Error status. Delete the failing pod and its PVC by entering the following command: USD oc delete pods etcd-2 -n openshift-etcd Verification Verify that a new etcd pod is up and running by entering the following command: USD oc get pods -l app=etcd -n openshift-etcd Example output NAME READY STATUS RESTARTS AGE etcd-0 2/2 Running 0 67m etcd-1 2/2 Running 0 48m etcd-2 2/2 Running 0 2m2s 8.2. Backing up and restoring etcd in an on-premise environment You can back up and restore etcd on a hosted cluster in an on-premise environment to fix failures. 8.2.1. Backing up and restoring etcd on a hosted cluster in an on-premise environment By backing up and restoring etcd on a hosted cluster, you can fix failures, such as corrupted or missing data in an etcd member of a three node cluster. If multiple members of the etcd cluster encounter data loss or have a CrashLoopBackOff status, this approach helps prevent an etcd quorum loss. Important This procedure requires API downtime. Prerequisites The oc and jq binaries have been installed. 
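A short pre-flight check for the backup and restore procedure that follows; it only confirms that the prerequisites named above are in place:

# Confirm the required binaries from the Prerequisites section are available.
for bin in oc jq; do
  command -v "$bin" >/dev/null || { echo "$bin is required but was not found" >&2; exit 1; }
done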
Procedure First, set up your environment variables and scale down the API servers: Set up environment variables for your hosted cluster by entering the following commands, replacing values as necessary: USD CLUSTER_NAME=my-cluster USD HOSTED_CLUSTER_NAMESPACE=clusters USD CONTROL_PLANE_NAMESPACE="USD{HOSTED_CLUSTER_NAMESPACE}-USD{CLUSTER_NAME}" Pause reconciliation of the hosted cluster by entering the following command, replacing values as necessary: USD oc patch -n USD{HOSTED_CLUSTER_NAMESPACE} hostedclusters/USD{CLUSTER_NAME} -p '{"spec":{"pausedUntil":"true"}}' --type=merge Scale down the API servers by entering the following commands: Scale down the kube-apiserver : USD oc scale -n USD{CONTROL_PLANE_NAMESPACE} deployment/kube-apiserver --replicas=0 Scale down the openshift-apiserver : USD oc scale -n USD{CONTROL_PLANE_NAMESPACE} deployment/openshift-apiserver --replicas=0 Scale down the openshift-oauth-apiserver : USD oc scale -n USD{CONTROL_PLANE_NAMESPACE} deployment/openshift-oauth-apiserver --replicas=0 , take a snapshot of etcd by using one of the following methods: Use a previously backed-up snapshot of etcd. If you have an available etcd pod, take a snapshot from the active etcd pod by completing the following steps: List etcd pods by entering the following command: USD oc get -n USD{CONTROL_PLANE_NAMESPACE} pods -l app=etcd Take a snapshot of the pod database and save it locally to your machine by entering the following commands: USD ETCD_POD=etcd-0 USD oc exec -n USD{CONTROL_PLANE_NAMESPACE} -c etcd -t USD{ETCD_POD} -- env ETCDCTL_API=3 /usr/bin/etcdctl \ --cacert /etc/etcd/tls/etcd-ca/ca.crt \ --cert /etc/etcd/tls/client/etcd-client.crt \ --key /etc/etcd/tls/client/etcd-client.key \ --endpoints=https://localhost:2379 \ snapshot save /var/lib/snapshot.db Verify that the snapshot is successful by entering the following command: USD oc exec -n USD{CONTROL_PLANE_NAMESPACE} -c etcd -t USD{ETCD_POD} -- env ETCDCTL_API=3 /usr/bin/etcdctl -w table snapshot status /var/lib/snapshot.db Make a local copy of the snapshot by entering the following command: USD oc cp -c etcd USD{CONTROL_PLANE_NAMESPACE}/USD{ETCD_POD}:/var/lib/snapshot.db /tmp/etcd.snapshot.db Make a copy of the snapshot database from etcd persistent storage: List etcd pods by entering the following command: USD oc get -n USD{CONTROL_PLANE_NAMESPACE} pods -l app=etcd Find a pod that is running and set its name as the value of ETCD_POD: ETCD_POD=etcd-0 , and then copy its snapshot database by entering the following command: USD oc cp -c etcd USD{CONTROL_PLANE_NAMESPACE}/USD{ETCD_POD}:/var/lib/data/member/snap/db /tmp/etcd.snapshot.db , scale down the etcd statefulset by entering the following command: USD oc scale -n USD{CONTROL_PLANE_NAMESPACE} statefulset/etcd --replicas=0 Delete volumes for second and third members by entering the following command: USD oc delete -n USD{CONTROL_PLANE_NAMESPACE} pvc/data-etcd-1 pvc/data-etcd-2 Create a pod to access the first etcd member's data: Get the etcd image by entering the following command: USD ETCD_IMAGE=USD(oc get -n USD{CONTROL_PLANE_NAMESPACE} statefulset/etcd -o jsonpath='{ .spec.template.spec.containers[0].image }') Create a pod that allows access to etcd data: USD cat << EOF | oc apply -n USD{CONTROL_PLANE_NAMESPACE} -f - apiVersion: apps/v1 kind: Deployment metadata: name: etcd-data spec: replicas: 1 selector: matchLabels: app: etcd-data template: metadata: labels: app: etcd-data spec: containers: - name: access image: USDETCD_IMAGE volumeMounts: - name: data mountPath: 
/var/lib command: - /usr/bin/bash args: - -c - |- while true; do sleep 1000 done volumes: - name: data persistentVolumeClaim: claimName: data-etcd-0 EOF Check the status of the etcd-data pod and wait for it to be running by entering the following command: USD oc get -n USD{CONTROL_PLANE_NAMESPACE} pods -l app=etcd-data Get the name of the etcd-data pod by entering the following command: USD DATA_POD=USD(oc get -n USD{CONTROL_PLANE_NAMESPACE} pods --no-headers -l app=etcd-data -o name | cut -d/ -f2) Copy an etcd snapshot into the pod by entering the following command: USD oc cp /tmp/etcd.snapshot.db USD{CONTROL_PLANE_NAMESPACE}/USD{DATA_POD}:/var/lib/restored.snap.db Remove old data from the etcd-data pod by entering the following commands: USD oc exec -n USD{CONTROL_PLANE_NAMESPACE} USD{DATA_POD} -- rm -rf /var/lib/data USD oc exec -n USD{CONTROL_PLANE_NAMESPACE} USD{DATA_POD} -- mkdir -p /var/lib/data Restore the etcd snapshot by entering the following command: USD oc exec -n USD{CONTROL_PLANE_NAMESPACE} USD{DATA_POD} -- etcdutl snapshot restore /var/lib/restored.snap.db \ --data-dir=/var/lib/data --skip-hash-check \ --name etcd-0 \ --initial-cluster-token=etcd-cluster \ --initial-cluster etcd-0=https://etcd-0.etcd-discovery.USD{CONTROL_PLANE_NAMESPACE}.svc:2380,etcd-1=https://etcd-1.etcd-discovery.USD{CONTROL_PLANE_NAMESPACE}.svc:2380,etcd-2=https://etcd-2.etcd-discovery.USD{CONTROL_PLANE_NAMESPACE}.svc:2380 \ --initial-advertise-peer-urls https://etcd-0.etcd-discovery.USD{CONTROL_PLANE_NAMESPACE}.svc:2380 Remove the temporary etcd snapshot from the pod by entering the following command: USD oc exec -n USD{CONTROL_PLANE_NAMESPACE} USD{DATA_POD} -- rm /var/lib/restored.snap.db Delete data access deployment by entering the following command: USD oc delete -n USD{CONTROL_PLANE_NAMESPACE} deployment/etcd-data Scale up the etcd cluster by entering the following command: USD oc scale -n USD{CONTROL_PLANE_NAMESPACE} statefulset/etcd --replicas=3 Wait for the etcd member pods to return and report as available by entering the following command: USD oc get -n USD{CONTROL_PLANE_NAMESPACE} pods -l app=etcd -w Scale up all etcd-writer deployments by entering the following command: USD oc scale deployment -n USD{CONTROL_PLANE_NAMESPACE} --replicas=3 kube-apiserver openshift-apiserver openshift-oauth-apiserver Restore reconciliation of the hosted cluster by entering the following command: USD oc patch -n USD{HOSTED_CLUSTER_NAMESPACE} hostedclusters/USD{CLUSTER_NAME} -p '{"spec":{"pausedUntil":""}}' --type=merge 8.3. Backing up and restoring etcd on AWS You can back up and restore etcd on a hosted cluster on Amazon Web Services (AWS) to fix failures. Important Hosted control planes on the AWS platform is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 8.3.1. Taking a snapshot of etcd for a hosted cluster To back up etcd for a hosted cluster, you must take a snapshot of etcd. Later, you can restore etcd by using the snapshot. Important This procedure requires API downtime. 
Procedure Pause reconciliation of the hosted cluster by entering the following command: USD oc patch -n clusters hostedclusters/<hosted_cluster_name> -p '{"spec":{"pausedUntil":"true"}}' --type=merge Stop all etcd-writer deployments by entering the following command: USD oc scale deployment -n <hosted_cluster_namespace> --replicas=0 kube-apiserver openshift-apiserver openshift-oauth-apiserver To take an etcd snapshot, use the exec command in each etcd container by entering the following command: USD oc exec -it <etcd_pod_name> -n <hosted_cluster_namespace> -- env ETCDCTL_API=3 /usr/bin/etcdctl --cacert /etc/etcd/tls/etcd-ca/ca.crt --cert /etc/etcd/tls/client/etcd-client.crt --key /etc/etcd/tls/client/etcd-client.key --endpoints=localhost:2379 snapshot save /var/lib/data/snapshot.db To check the snapshot status, use the exec command in each etcd container by running the following command: USD oc exec -it <etcd_pod_name> -n <hosted_cluster_namespace> -- env ETCDCTL_API=3 /usr/bin/etcdctl -w table snapshot status /var/lib/data/snapshot.db Copy the snapshot data to a location where you can retrieve it later, such as an S3 bucket. See the following example. Note The following example uses signature version 2. If you are in a region that supports signature version 4, such as the us-east-2 region, use signature version 4. Otherwise, when copying the snapshot to an S3 bucket, the upload fails. Example BUCKET_NAME=somebucket FILEPATH="/USD{BUCKET_NAME}/USD{CLUSTER_NAME}-snapshot.db" CONTENT_TYPE="application/x-compressed-tar" DATE_VALUE=`date -R` SIGNATURE_STRING="PUT\n\nUSD{CONTENT_TYPE}\nUSD{DATE_VALUE}\nUSD{FILEPATH}" ACCESS_KEY=accesskey SECRET_KEY=secret SIGNATURE_HASH=`echo -en USD{SIGNATURE_STRING} | openssl sha1 -hmac USD{SECRET_KEY} -binary | base64` oc exec -it etcd-0 -n USD{HOSTED_CLUSTER_NAMESPACE} -- curl -X PUT -T "/var/lib/data/snapshot.db" \ -H "Host: USD{BUCKET_NAME}.s3.amazonaws.com" \ -H "Date: USD{DATE_VALUE}" \ -H "Content-Type: USD{CONTENT_TYPE}" \ -H "Authorization: AWS USD{ACCESS_KEY}:USD{SIGNATURE_HASH}" \ https://USD{BUCKET_NAME}.s3.amazonaws.com/USD{CLUSTER_NAME}-snapshot.db To restore the snapshot on a new cluster later, save the encryption secret that the hosted cluster references. Get the secret encryption key by entering the following command: USD oc get hostedcluster <hosted_cluster_name> -o=jsonpath='{.spec.secretEncryption.aescbc}' {"activeKey":{"name":"<hosted_cluster_name>-etcd-encryption-key"}} Save the secret encryption key by entering the following command: USD oc get secret <hosted_cluster_name>-etcd-encryption-key -o=jsonpath='{.data.key}' You can decrypt this key when restoring a snapshot on a new cluster. steps Restore the etcd snapshot. 8.3.2. Restoring an etcd snapshot on a hosted cluster If you have a snapshot of etcd from your hosted cluster, you can restore it. Currently, you can restore an etcd snapshot only during cluster creation. To restore an etcd snapshot, you modify the output from the create cluster --render command and define a restoreSnapshotURL value in the etcd section of the HostedCluster specification. Note The --render flag in the hcp create command does not render the secrets. To render the secrets, you must use both the --render and the --render-sensitive flags in the hcp create command. Prerequisites You took an etcd snapshot on a hosted cluster. 
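Before generating the pre-signed URL in the next procedure, it can help to confirm that the snapshot object from the backup step is actually present in S3. A brief sketch using the same variables as the surrounding procedure, assuming AWS CLI credentials are configured:

# List the uploaded snapshot object; an empty result means the backup upload did not succeed.
aws s3 ls "s3://${BUCKET_NAME}/${CLUSTER_NAME}-snapshot.db"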
Procedure On the aws command-line interface (CLI), create a pre-signed URL so that you can download your etcd snapshot from S3 without passing credentials to the etcd deployment: ETCD_SNAPSHOT=USD{ETCD_SNAPSHOT:-"s3://USD{BUCKET_NAME}/USD{CLUSTER_NAME}-snapshot.db"} ETCD_SNAPSHOT_URL=USD(aws s3 presign USD{ETCD_SNAPSHOT}) Modify the HostedCluster specification to refer to the URL: spec: etcd: managed: storage: persistentVolume: size: 4Gi type: PersistentVolume restoreSnapshotURL: - "USD{ETCD_SNAPSHOT_URL}" managementType: Managed Ensure that the secret that you referenced from the spec.secretEncryption.aescbc value contains the same AES key that you saved in the steps. 8.4. Disaster recovery for a hosted cluster in AWS You can recover a hosted cluster to the same region within Amazon Web Services (AWS). For example, you need disaster recovery when the upgrade of a management cluster fails and the hosted cluster is in a read-only state. Important Hosted control planes is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The disaster recovery process involves the following steps: Backing up the hosted cluster on the source management cluster Restoring the hosted cluster on a destination management cluster Deleting the hosted cluster from the source management cluster Your workloads remain running during the process. The Cluster API might be unavailable for a period, but that does not affect the services that are running on the worker nodes. Important Both the source management cluster and the destination management cluster must have the --external-dns flags to maintain the API server URL. That way, the server URL ends with https://api-sample-hosted.sample-hosted.aws.openshift.com . See the following example: Example: External DNS flags --external-dns-provider=aws \ --external-dns-credentials=<path_to_aws_credentials_file> \ --external-dns-domain-filter=<basedomain> If you do not include the --external-dns flags to maintain the API server URL, you cannot migrate the hosted cluster. 8.4.1. Overview of the backup and restore process The backup and restore process works as follows: On management cluster 1, which you can think of as the source management cluster, the control plane and workers interact by using the external DNS API. The external DNS API is accessible, and a load balancer sits between the management clusters. You take a snapshot of the hosted cluster, which includes etcd, the control plane, and the worker nodes. During this process, the worker nodes continue to try to access the external DNS API even if it is not accessible, the workloads are running, the control plane is saved in a local manifest file, and etcd is backed up to an S3 bucket. The data plane is active and the control plane is paused. On management cluster 2, which you can think of as the destination management cluster, you restore etcd from the S3 bucket and restore the control plane from the local manifest file. 
During this process, the external DNS API is stopped, the hosted cluster API becomes inaccessible, and any workers that use the API are unable to update their manifest files, but the workloads are still running. The external DNS API is accessible again, and the worker nodes use it to move to management cluster 2. The external DNS API can access the load balancer that points to the control plane. On management cluster 2, the control plane and worker nodes interact by using the external DNS API. The resources are deleted from management cluster 1, except for the S3 backup of etcd. If you try to set up the hosted cluster again on management cluster 1, it will not work. 8.4.2. Backing up a hosted cluster To recover your hosted cluster in your target management cluster, you first need to back up all of the relevant data. Procedure Create a configmap file to declare the source management cluster by entering this command: USD oc create configmap mgmt-parent-cluster -n default --from-literal=from=USD{MGMT_CLUSTER_NAME} Shut down the reconciliation in the hosted cluster and in the node pools by entering these commands: USD PAUSED_UNTIL="true" USD oc patch -n USD{HC_CLUSTER_NS} hostedclusters/USD{HC_CLUSTER_NAME} -p '{"spec":{"pausedUntil":"'USD{PAUSED_UNTIL}'"}}' --type=merge USD oc scale deployment -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --replicas=0 kube-apiserver openshift-apiserver openshift-oauth-apiserver control-plane-operator USD PAUSED_UNTIL="true" USD oc patch -n USD{HC_CLUSTER_NS} hostedclusters/USD{HC_CLUSTER_NAME} -p '{"spec":{"pausedUntil":"'USD{PAUSED_UNTIL}'"}}' --type=merge USD oc patch -n USD{HC_CLUSTER_NS} nodepools/USD{NODEPOOLS} -p '{"spec":{"pausedUntil":"'USD{PAUSED_UNTIL}'"}}' --type=merge USD oc scale deployment -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --replicas=0 kube-apiserver openshift-apiserver openshift-oauth-apiserver control-plane-operator Back up etcd and upload the data to an S3 bucket by running this bash script: Tip Wrap this script in a function and call it from the main function.
# ETCD Backup ETCD_PODS="etcd-0" if [ "USD{CONTROL_PLANE_AVAILABILITY_POLICY}" = "HighlyAvailable" ]; then ETCD_PODS="etcd-0 etcd-1 etcd-2" fi for POD in USD{ETCD_PODS}; do # Create an etcd snapshot oc exec -it USD{POD} -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -- env ETCDCTL_API=3 /usr/bin/etcdctl --cacert /etc/etcd/tls/client/etcd-client-ca.crt --cert /etc/etcd/tls/client/etcd-client.crt --key /etc/etcd/tls/client/etcd-client.key --endpoints=localhost:2379 snapshot save /var/lib/data/snapshot.db oc exec -it USD{POD} -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -- env ETCDCTL_API=3 /usr/bin/etcdctl -w table snapshot status /var/lib/data/snapshot.db FILEPATH="/USD{BUCKET_NAME}/USD{HC_CLUSTER_NAME}-USD{POD}-snapshot.db" CONTENT_TYPE="application/x-compressed-tar" DATE_VALUE=`date -R` SIGNATURE_STRING="PUT\n\nUSD{CONTENT_TYPE}\nUSD{DATE_VALUE}\nUSD{FILEPATH}" set +x ACCESS_KEY=USD(grep aws_access_key_id USD{AWS_CREDS} | head -n1 | cut -d= -f2 | sed "s/ //g") SECRET_KEY=USD(grep aws_secret_access_key USD{AWS_CREDS} | head -n1 | cut -d= -f2 | sed "s/ //g") SIGNATURE_HASH=USD(echo -en USD{SIGNATURE_STRING} | openssl sha1 -hmac "USD{SECRET_KEY}" -binary | base64) set -x # FIXME: this is pushing to the OIDC bucket oc exec -it etcd-0 -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -- curl -X PUT -T "/var/lib/data/snapshot.db" \ -H "Host: USD{BUCKET_NAME}.s3.amazonaws.com" \ -H "Date: USD{DATE_VALUE}" \ -H "Content-Type: USD{CONTENT_TYPE}" \ -H "Authorization: AWS USD{ACCESS_KEY}:USD{SIGNATURE_HASH}" \ https://USD{BUCKET_NAME}.s3.amazonaws.com/USD{HC_CLUSTER_NAME}-USD{POD}-snapshot.db done For more information about backing up etcd, see "Backing up and restoring etcd on a hosted cluster". Back up Kubernetes and OpenShift Container Platform objects by entering the following commands. 
You need to back up the following objects: HostedCluster and NodePool objects from the HostedCluster namespace HostedCluster secrets from the HostedCluster namespace HostedControlPlane from the Hosted Control Plane namespace Cluster from the Hosted Control Plane namespace AWSCluster , AWSMachineTemplate , and AWSMachine from the Hosted Control Plane namespace MachineDeployments , MachineSets , and Machines from the Hosted Control Plane namespace ControlPlane secrets from the Hosted Control Plane namespace USD mkdir -p USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS} USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USD chmod 700 USD{BACKUP_DIR}/namespaces/ # HostedCluster USD echo "Backing Up HostedCluster Objects:" USD oc get hc USD{HC_CLUSTER_NAME} -n USD{HC_CLUSTER_NS} -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/hc-USD{HC_CLUSTER_NAME}.yaml USD echo "--> HostedCluster" USD sed -i '' -e '/^status:USD/,USDd' USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/hc-USD{HC_CLUSTER_NAME}.yaml # NodePool USD oc get np USD{NODEPOOLS} -n USD{HC_CLUSTER_NS} -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/np-USD{NODEPOOLS}.yaml USD echo "--> NodePool" USD sed -i '' -e '/^status:USD/,USD d' USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/np-USD{NODEPOOLS}.yaml # Secrets in the HC Namespace USD echo "--> HostedCluster Secrets:" for s in USD(oc get secret -n USD{HC_CLUSTER_NS} | grep "^USD{HC_CLUSTER_NAME}" | awk '{print USD1}'); do oc get secret -n USD{HC_CLUSTER_NS} USDs -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/secret-USD{s}.yaml done # Secrets in the HC Control Plane Namespace USD echo "--> HostedCluster ControlPlane Secrets:" for s in USD(oc get secret -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} | egrep -v "docker|service-account-token|oauth-openshift|NAME|token-USD{HC_CLUSTER_NAME}" | awk '{print USD1}'); do oc get secret -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USDs -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/secret-USD{s}.yaml done # Hosted Control Plane USD echo "--> HostedControlPlane:" USD oc get hcp USD{HC_CLUSTER_NAME} -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/hcp-USD{HC_CLUSTER_NAME}.yaml # Cluster USD echo "--> Cluster:" USD CL_NAME=USD(oc get hcp USD{HC_CLUSTER_NAME} -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o jsonpath={.metadata.labels.\*} | grep USD{HC_CLUSTER_NAME}) USD oc get cluster USD{CL_NAME} -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/cl-USD{HC_CLUSTER_NAME}.yaml # AWS Cluster USD echo "--> AWS Cluster:" USD oc get awscluster USD{HC_CLUSTER_NAME} -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/awscl-USD{HC_CLUSTER_NAME}.yaml # AWS MachineTemplate USD echo "--> AWS Machine Template:" USD oc get awsmachinetemplate USD{NODEPOOLS} -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/awsmt-USD{HC_CLUSTER_NAME}.yaml # AWS Machines USD echo "--> AWS Machine:" USD CL_NAME=USD(oc get hcp USD{HC_CLUSTER_NAME} -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o jsonpath={.metadata.labels.\*} | grep USD{HC_CLUSTER_NAME}) for s in USD(oc get awsmachines -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --no-headers | grep USD{CL_NAME} | cut -f1 -d\ ); do oc get -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} awsmachines USDs -o yaml > 
USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/awsm-USD{s}.yaml done # MachineDeployments USD echo "--> HostedCluster MachineDeployments:" for s in USD(oc get machinedeployment -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o name); do mdp_name=USD(echo USD{s} | cut -f 2 -d /) oc get -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USDs -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/machinedeployment-USD{mdp_name}.yaml done # MachineSets USD echo "--> HostedCluster MachineSets:" for s in USD(oc get machineset -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o name); do ms_name=USD(echo USD{s} | cut -f 2 -d /) oc get -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USDs -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/machineset-USD{ms_name}.yaml done # Machines USD echo "--> HostedCluster Machine:" for s in USD(oc get machine -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o name); do m_name=USD(echo USD{s} | cut -f 2 -d /) oc get -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USDs -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/machine-USD{m_name}.yaml done Clean up the ControlPlane routes by entering this command: USD oc delete routes -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --all By entering that command, you enable the ExternalDNS Operator to delete the Route53 entries. Verify that the Route53 entries are clean by running this script: function clean_routes() { if [[ -z "USD{1}" ]];then echo "Give me the NS where to clean the routes" exit 1 fi # Constants if [[ -z "USD{2}" ]];then echo "Give me the Route53 zone ID" exit 1 fi ZONE_ID=USD{2} ROUTES=10 timeout=40 count=0 # This allows us to remove the ownership in the AWS for the API route oc delete route -n USD{1} --all while [ USD{ROUTES} -gt 2 ] do echo "Waiting for ExternalDNS Operator to clean the DNS Records in AWS Route53 where the zone id is: USD{ZONE_ID}..." echo "Try: (USD{count}/USD{timeout})" sleep 10 if [[ USDcount -eq timeout ]];then echo "Timeout waiting for cleaning the Route53 DNS records" exit 1 fi count=USD((count+1)) ROUTES=USD(aws route53 list-resource-record-sets --hosted-zone-id USD{ZONE_ID} --max-items 10000 --output json | grep -c USD{EXTERNAL_DNS_DOMAIN}) done } # SAMPLE: clean_routes "<HC ControlPlane Namespace>" "<AWS_ZONE_ID>" clean_routes "USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}" "USD{AWS_ZONE_ID}" Verification Check all of the OpenShift Container Platform objects and the S3 bucket to verify that everything looks as expected. steps Restore your hosted cluster. 8.4.3. Restoring a hosted cluster Gather all of the objects that you backed up and restore them in your destination management cluster. Prerequisites You backed up the data from your source management cluster. Tip Ensure that the kubeconfig file of the destination management cluster is placed as it is set in the KUBECONFIG variable or, if you use the script, in the MGMT2_KUBECONFIG variable. Use export KUBECONFIG=<Kubeconfig FilePath> or, if you use the script, use export KUBECONFIG=USD{MGMT2_KUBECONFIG} . 
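Before applying any backed-up objects, it is worth confirming that oc is actually pointed at the destination management cluster. The following is an informal sanity check, not part of the documented procedure; it assumes the MGMT2_KUBECONFIG variable used throughout these scripts is set, and it is written in plain shell syntax:
# Point oc at the destination management cluster and sanity-check connectivity
export KUBECONFIG=${MGMT2_KUBECONFIG}
oc config current-context
oc get nodes -o wide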
Procedure Verify that the new management cluster does not contain any namespaces from the cluster that you are restoring by entering these commands: # Just in case USD export KUBECONFIG=USD{MGMT2_KUBECONFIG} USD BACKUP_DIR=USD{HC_CLUSTER_DIR}/backup # Namespace deletion in the destination Management cluster USD oc delete ns USD{HC_CLUSTER_NS} || true USD oc delete ns USD{HC_CLUSTER_NS}-{HC_CLUSTER_NAME} || true Re-create the deleted namespaces by entering these commands: # Namespace creation USD oc new-project USD{HC_CLUSTER_NS} USD oc new-project USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} Restore the secrets in the HC namespace by entering this command: USD oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/secret-* Restore the objects in the HostedCluster control plane namespace by entering these commands: # Secrets USD oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/secret-* # Cluster USD oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/hcp-* USD oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/cl-* If you are recovering the nodes and the node pool to reuse AWS instances, restore the objects in the HC control plane namespace by entering these commands: # AWS USD oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/awscl-* USD oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/awsmt-* USD oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/awsm-* # Machines USD oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/machinedeployment-* USD oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/machineset-* USD oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/machine-* Restore the etcd data and the hosted cluster by running this bash script: ETCD_PODS="etcd-0" if [ "USD{CONTROL_PLANE_AVAILABILITY_POLICY}" = "HighlyAvailable" ]; then ETCD_PODS="etcd-0 etcd-1 etcd-2" fi HC_RESTORE_FILE=USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/hc-USD{HC_CLUSTER_NAME}-restore.yaml HC_BACKUP_FILE=USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/hc-USD{HC_CLUSTER_NAME}.yaml HC_NEW_FILE=USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/hc-USD{HC_CLUSTER_NAME}-new.yaml cat USD{HC_BACKUP_FILE} > USD{HC_NEW_FILE} cat > USD{HC_RESTORE_FILE} <<EOF restoreSnapshotURL: EOF for POD in USD{ETCD_PODS}; do # Create a pre-signed URL for the etcd snapshot ETCD_SNAPSHOT="s3://USD{BUCKET_NAME}/USD{HC_CLUSTER_NAME}-USD{POD}-snapshot.db" ETCD_SNAPSHOT_URL=USD(AWS_DEFAULT_REGION=USD{MGMT2_REGION} aws s3 presign USD{ETCD_SNAPSHOT}) # FIXME no CLI support for restoreSnapshotURL yet cat >> USD{HC_RESTORE_FILE} <<EOF - "USD{ETCD_SNAPSHOT_URL}" EOF done cat USD{HC_RESTORE_FILE} if ! 
grep USD{HC_CLUSTER_NAME}-snapshot.db USD{HC_NEW_FILE}; then sed -i '' -e "/type: PersistentVolume/r USD{HC_RESTORE_FILE}" USD{HC_NEW_FILE} sed -i '' -e '/pausedUntil:/d' USD{HC_NEW_FILE} fi HC=USD(oc get hc -n USD{HC_CLUSTER_NS} USD{HC_CLUSTER_NAME} -o name || true) if [[ USD{HC} == "" ]];then echo "Deploying HC Cluster: USD{HC_CLUSTER_NAME} in USD{HC_CLUSTER_NS} namespace" oc apply -f USD{HC_NEW_FILE} else echo "HC Cluster USD{HC_CLUSTER_NAME} already exists, avoiding step" fi If you are recovering the nodes and the node pool to reuse AWS instances, restore the node pool by entering this command: USD oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/np-* Verification To verify that the nodes are fully restored, use this function: timeout=40 count=0 NODE_STATUS=USD(oc get nodes --kubeconfig=USD{HC_KUBECONFIG} | grep -v NotReady | grep -c "worker") || NODE_STATUS=0 while [ USD{NODE_POOL_REPLICAS} != USD{NODE_STATUS} ] do echo "Waiting for Nodes to be Ready in the destination MGMT Cluster: USD{MGMT2_CLUSTER_NAME}" echo "Try: (USD{count}/USD{timeout})" sleep 30 if [[ USDcount -eq timeout ]];then echo "Timeout waiting for Nodes in the destination MGMT Cluster" exit 1 fi count=USD((count+1)) NODE_STATUS=USD(oc get nodes --kubeconfig=USD{HC_KUBECONFIG} | grep -v NotReady | grep -c "worker") || NODE_STATUS=0 done steps Shut down and delete your cluster. 8.4.4. Deleting a hosted cluster from your source management cluster After you back up your hosted cluster and restore it to your destination management cluster, you shut down and delete the hosted cluster on your source management cluster. Prerequisites You backed up your data and restored it to your source management cluster. Tip Ensure that the kubeconfig file of the destination management cluster is placed as it is set in the KUBECONFIG variable or, if you use the script, in the MGMT_KUBECONFIG variable. Use export KUBECONFIG=<Kubeconfig FilePath> or, if you use the script, use export KUBECONFIG=USD{MGMT_KUBECONFIG} . Procedure Scale the deployment and statefulset objects by entering these commands: Important Do not scale the stateful set if the value of its spec.persistentVolumeClaimRetentionPolicy.whenScaled field is set to Delete , because this could lead to a loss of data. As a workaround, update the value of the spec.persistentVolumeClaimRetentionPolicy.whenScaled field to Retain . Ensure that no controllers exist that reconcile the stateful set and would return the value back to Delete , which could lead to a loss of data. # Just in case USD export KUBECONFIG=USD{MGMT_KUBECONFIG} # Scale down deployments USD oc scale deployment -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --replicas=0 --all USD oc scale statefulset.apps -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --replicas=0 --all USD sleep 15 Delete the NodePool objects by entering these commands: NODEPOOLS=USD(oc get nodepools -n USD{HC_CLUSTER_NS} -o=jsonpath='{.items[?(@.spec.clusterName=="'USD{HC_CLUSTER_NAME}'")].metadata.name}') if [[ ! 
-z "USD{NODEPOOLS}" ]];then oc patch -n "USD{HC_CLUSTER_NS}" nodepool USD{NODEPOOLS} --type=json --patch='[ { "op":"remove", "path": "/metadata/finalizers" }]' oc delete np -n USD{HC_CLUSTER_NS} USD{NODEPOOLS} fi Delete the machine and machineset objects by entering these commands: # Machines for m in USD(oc get machines -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o name); do oc patch -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USD{m} --type=json --patch='[ { "op":"remove", "path": "/metadata/finalizers" }]' || true oc delete -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USD{m} || true done USD oc delete machineset -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --all || true Delete the cluster object by entering these commands: # Cluster USD C_NAME=USD(oc get cluster -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o name) USD oc patch -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USD{C_NAME} --type=json --patch='[ { "op":"remove", "path": "/metadata/finalizers" }]' USD oc delete cluster.cluster.x-k8s.io -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --all Delete the AWS machines (Kubernetes objects) by entering these commands. Do not worry about deleting the real AWS machines. The cloud instances will not be affected. # AWS Machines for m in USD(oc get awsmachine.infrastructure.cluster.x-k8s.io -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o name) do oc patch -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USD{m} --type=json --patch='[ { "op":"remove", "path": "/metadata/finalizers" }]' || true oc delete -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USD{m} || true done Delete the HostedControlPlane and ControlPlane HC namespace objects by entering these commands: # Delete HCP and ControlPlane HC NS USD oc patch -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} hostedcontrolplane.hypershift.openshift.io USD{HC_CLUSTER_NAME} --type=json --patch='[ { "op":"remove", "path": "/metadata/finalizers" }]' USD oc delete hostedcontrolplane.hypershift.openshift.io -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --all USD oc delete ns USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} || true Delete the HostedCluster and HC namespace objects by entering these commands: # Delete HC and HC Namespace USD oc -n USD{HC_CLUSTER_NS} patch hostedclusters USD{HC_CLUSTER_NAME} -p '{"metadata":{"finalizers":null}}' --type merge || true USD oc delete hc -n USD{HC_CLUSTER_NS} USD{HC_CLUSTER_NAME} || true USD oc delete ns USD{HC_CLUSTER_NS} || true Verification To verify that everything works, enter these commands: # Validations USD export KUBECONFIG=USD{MGMT2_KUBECONFIG} USD oc get hc -n USD{HC_CLUSTER_NS} USD oc get np -n USD{HC_CLUSTER_NS} USD oc get pod -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USD oc get machines -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} # Inside the HostedCluster USD export KUBECONFIG=USD{HC_KUBECONFIG} USD oc get clusterversion USD oc get nodes steps Delete the OVN pods in the hosted cluster so that you can connect to the new OVN control plane that runs in the new management cluster: Load the KUBECONFIG environment variable with the hosted cluster's kubeconfig path. Enter this command: USD oc delete pod -n openshift-ovn-kubernetes --all | [
"oc rsh -n openshift-etcd -c etcd <etcd_pod_name>",
"sh-4.4# etcdctl endpoint status -w table",
"+------------------------------+-----------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +------------------------------+-----------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://192.168.1xxx.20:2379 | 8fxxxxxxxxxx | 3.5.12 | 123 MB | false | false | 10 | 180156 | 180156 | | | https://192.168.1xxx.21:2379 | a5xxxxxxxxxx | 3.5.12 | 122 MB | false | false | 10 | 180156 | 180156 | | | https://192.168.1xxx.22:2379 | 7cxxxxxxxxxx | 3.5.12 | 124 MB | true | false | 10 | 180156 | 180156 | | +-----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+",
"oc get pods -l app=etcd -n openshift-etcd",
"NAME READY STATUS RESTARTS AGE etcd-0 2/2 Running 0 64m etcd-1 2/2 Running 0 45m etcd-2 1/2 CrashLoopBackOff 1 (5s ago) 64m",
"oc delete pods etcd-2 -n openshift-etcd",
"oc get pods -l app=etcd -n openshift-etcd",
"NAME READY STATUS RESTARTS AGE etcd-0 2/2 Running 0 67m etcd-1 2/2 Running 0 48m etcd-2 2/2 Running 0 2m2s",
"CLUSTER_NAME=my-cluster",
"HOSTED_CLUSTER_NAMESPACE=clusters",
"CONTROL_PLANE_NAMESPACE=\"USD{HOSTED_CLUSTER_NAMESPACE}-USD{CLUSTER_NAME}\"",
"oc patch -n USD{HOSTED_CLUSTER_NAMESPACE} hostedclusters/USD{CLUSTER_NAME} -p '{\"spec\":{\"pausedUntil\":\"true\"}}' --type=merge",
"oc scale -n USD{CONTROL_PLANE_NAMESPACE} deployment/kube-apiserver --replicas=0",
"oc scale -n USD{CONTROL_PLANE_NAMESPACE} deployment/openshift-apiserver --replicas=0",
"oc scale -n USD{CONTROL_PLANE_NAMESPACE} deployment/openshift-oauth-apiserver --replicas=0",
"oc get -n USD{CONTROL_PLANE_NAMESPACE} pods -l app=etcd",
"ETCD_POD=etcd-0",
"oc exec -n USD{CONTROL_PLANE_NAMESPACE} -c etcd -t USD{ETCD_POD} -- env ETCDCTL_API=3 /usr/bin/etcdctl --cacert /etc/etcd/tls/etcd-ca/ca.crt --cert /etc/etcd/tls/client/etcd-client.crt --key /etc/etcd/tls/client/etcd-client.key --endpoints=https://localhost:2379 snapshot save /var/lib/snapshot.db",
"oc exec -n USD{CONTROL_PLANE_NAMESPACE} -c etcd -t USD{ETCD_POD} -- env ETCDCTL_API=3 /usr/bin/etcdctl -w table snapshot status /var/lib/snapshot.db",
"oc cp -c etcd USD{CONTROL_PLANE_NAMESPACE}/USD{ETCD_POD}:/var/lib/snapshot.db /tmp/etcd.snapshot.db",
"oc get -n USD{CONTROL_PLANE_NAMESPACE} pods -l app=etcd",
"oc cp -c etcd USD{CONTROL_PLANE_NAMESPACE}/USD{ETCD_POD}:/var/lib/data/member/snap/db /tmp/etcd.snapshot.db",
"oc scale -n USD{CONTROL_PLANE_NAMESPACE} statefulset/etcd --replicas=0",
"oc delete -n USD{CONTROL_PLANE_NAMESPACE} pvc/data-etcd-1 pvc/data-etcd-2",
"ETCD_IMAGE=USD(oc get -n USD{CONTROL_PLANE_NAMESPACE} statefulset/etcd -o jsonpath='{ .spec.template.spec.containers[0].image }')",
"cat << EOF | oc apply -n USD{CONTROL_PLANE_NAMESPACE} -f - apiVersion: apps/v1 kind: Deployment metadata: name: etcd-data spec: replicas: 1 selector: matchLabels: app: etcd-data template: metadata: labels: app: etcd-data spec: containers: - name: access image: USDETCD_IMAGE volumeMounts: - name: data mountPath: /var/lib command: - /usr/bin/bash args: - -c - |- while true; do sleep 1000 done volumes: - name: data persistentVolumeClaim: claimName: data-etcd-0 EOF",
"oc get -n USD{CONTROL_PLANE_NAMESPACE} pods -l app=etcd-data",
"DATA_POD=USD(oc get -n USD{CONTROL_PLANE_NAMESPACE} pods --no-headers -l app=etcd-data -o name | cut -d/ -f2)",
"oc cp /tmp/etcd.snapshot.db USD{CONTROL_PLANE_NAMESPACE}/USD{DATA_POD}:/var/lib/restored.snap.db",
"oc exec -n USD{CONTROL_PLANE_NAMESPACE} USD{DATA_POD} -- rm -rf /var/lib/data",
"oc exec -n USD{CONTROL_PLANE_NAMESPACE} USD{DATA_POD} -- mkdir -p /var/lib/data",
"oc exec -n USD{CONTROL_PLANE_NAMESPACE} USD{DATA_POD} -- etcdutl snapshot restore /var/lib/restored.snap.db --data-dir=/var/lib/data --skip-hash-check --name etcd-0 --initial-cluster-token=etcd-cluster --initial-cluster etcd-0=https://etcd-0.etcd-discovery.USD{CONTROL_PLANE_NAMESPACE}.svc:2380,etcd-1=https://etcd-1.etcd-discovery.USD{CONTROL_PLANE_NAMESPACE}.svc:2380,etcd-2=https://etcd-2.etcd-discovery.USD{CONTROL_PLANE_NAMESPACE}.svc:2380 --initial-advertise-peer-urls https://etcd-0.etcd-discovery.USD{CONTROL_PLANE_NAMESPACE}.svc:2380",
"oc exec -n USD{CONTROL_PLANE_NAMESPACE} USD{DATA_POD} -- rm /var/lib/restored.snap.db",
"oc delete -n USD{CONTROL_PLANE_NAMESPACE} deployment/etcd-data",
"oc scale -n USD{CONTROL_PLANE_NAMESPACE} statefulset/etcd --replicas=3",
"oc get -n USD{CONTROL_PLANE_NAMESPACE} pods -l app=etcd -w",
"oc scale deployment -n USD{CONTROL_PLANE_NAMESPACE} --replicas=3 kube-apiserver openshift-apiserver openshift-oauth-apiserver",
"oc patch -n USD{HOSTED_CLUSTER_NAMESPACE} hostedclusters/USD{CLUSTER_NAME} -p '{\"spec\":{\"pausedUntil\":\"\"}}' --type=merge",
"oc patch -n clusters hostedclusters/<hosted_cluster_name> -p '{\"spec\":{\"pausedUntil\":\"true\"}}' --type=merge",
"oc scale deployment -n <hosted_cluster_namespace> --replicas=0 kube-apiserver openshift-apiserver openshift-oauth-apiserver",
"oc exec -it <etcd_pod_name> -n <hosted_cluster_namespace> -- env ETCDCTL_API=3 /usr/bin/etcdctl --cacert /etc/etcd/tls/etcd-ca/ca.crt --cert /etc/etcd/tls/client/etcd-client.crt --key /etc/etcd/tls/client/etcd-client.key --endpoints=localhost:2379 snapshot save /var/lib/data/snapshot.db",
"oc exec -it <etcd_pod_name> -n <hosted_cluster_namespace> -- env ETCDCTL_API=3 /usr/bin/etcdctl -w table snapshot status /var/lib/data/snapshot.db",
"BUCKET_NAME=somebucket FILEPATH=\"/USD{BUCKET_NAME}/USD{CLUSTER_NAME}-snapshot.db\" CONTENT_TYPE=\"application/x-compressed-tar\" DATE_VALUE=`date -R` SIGNATURE_STRING=\"PUT\\n\\nUSD{CONTENT_TYPE}\\nUSD{DATE_VALUE}\\nUSD{FILEPATH}\" ACCESS_KEY=accesskey SECRET_KEY=secret SIGNATURE_HASH=`echo -en USD{SIGNATURE_STRING} | openssl sha1 -hmac USD{SECRET_KEY} -binary | base64` exec -it etcd-0 -n USD{HOSTED_CLUSTER_NAMESPACE} -- curl -X PUT -T \"/var/lib/data/snapshot.db\" -H \"Host: USD{BUCKET_NAME}.s3.amazonaws.com\" -H \"Date: USD{DATE_VALUE}\" -H \"Content-Type: USD{CONTENT_TYPE}\" -H \"Authorization: AWS USD{ACCESS_KEY}:USD{SIGNATURE_HASH}\" https://USD{BUCKET_NAME}.s3.amazonaws.com/USD{CLUSTER_NAME}-snapshot.db",
"oc get hostedcluster <hosted_cluster_name> -o=jsonpath='{.spec.secretEncryption.aescbc}' {\"activeKey\":{\"name\":\"<hosted_cluster_name>-etcd-encryption-key\"}}",
"oc get secret <hosted_cluster_name>-etcd-encryption-key -o=jsonpath='{.data.key}'",
"ETCD_SNAPSHOT=USD{ETCD_SNAPSHOT:-\"s3://USD{BUCKET_NAME}/USD{CLUSTER_NAME}-snapshot.db\"} ETCD_SNAPSHOT_URL=USD(aws s3 presign USD{ETCD_SNAPSHOT})",
"spec: etcd: managed: storage: persistentVolume: size: 4Gi type: PersistentVolume restoreSnapshotURL: - \"USD{ETCD_SNAPSHOT_URL}\" managementType: Managed",
"--external-dns-provider=aws --external-dns-credentials=<path_to_aws_credentials_file> --external-dns-domain-filter=<basedomain>",
"oc create configmap mgmt-parent-cluster -n default --from-literal=from=USD{MGMT_CLUSTER_NAME}",
"PAUSED_UNTIL=\"true\" oc patch -n USD{HC_CLUSTER_NS} hostedclusters/USD{HC_CLUSTER_NAME} -p '{\"spec\":{\"pausedUntil\":\"'USD{PAUSED_UNTIL}'\"}}' --type=merge oc scale deployment -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --replicas=0 kube-apiserver openshift-apiserver openshift-oauth-apiserver control-plane-operator",
"PAUSED_UNTIL=\"true\" oc patch -n USD{HC_CLUSTER_NS} hostedclusters/USD{HC_CLUSTER_NAME} -p '{\"spec\":{\"pausedUntil\":\"'USD{PAUSED_UNTIL}'\"}}' --type=merge oc patch -n USD{HC_CLUSTER_NS} nodepools/USD{NODEPOOLS} -p '{\"spec\":{\"pausedUntil\":\"'USD{PAUSED_UNTIL}'\"}}' --type=merge oc scale deployment -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --replicas=0 kube-apiserver openshift-apiserver openshift-oauth-apiserver control-plane-operator",
"ETCD Backup ETCD_PODS=\"etcd-0\" if [ \"USD{CONTROL_PLANE_AVAILABILITY_POLICY}\" = \"HighlyAvailable\" ]; then ETCD_PODS=\"etcd-0 etcd-1 etcd-2\" fi for POD in USD{ETCD_PODS}; do # Create an etcd snapshot oc exec -it USD{POD} -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -- env ETCDCTL_API=3 /usr/bin/etcdctl --cacert /etc/etcd/tls/client/etcd-client-ca.crt --cert /etc/etcd/tls/client/etcd-client.crt --key /etc/etcd/tls/client/etcd-client.key --endpoints=localhost:2379 snapshot save /var/lib/data/snapshot.db oc exec -it USD{POD} -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -- env ETCDCTL_API=3 /usr/bin/etcdctl -w table snapshot status /var/lib/data/snapshot.db FILEPATH=\"/USD{BUCKET_NAME}/USD{HC_CLUSTER_NAME}-USD{POD}-snapshot.db\" CONTENT_TYPE=\"application/x-compressed-tar\" DATE_VALUE=`date -R` SIGNATURE_STRING=\"PUT\\n\\nUSD{CONTENT_TYPE}\\nUSD{DATE_VALUE}\\nUSD{FILEPATH}\" set +x ACCESS_KEY=USD(grep aws_access_key_id USD{AWS_CREDS} | head -n1 | cut -d= -f2 | sed \"s/ //g\") SECRET_KEY=USD(grep aws_secret_access_key USD{AWS_CREDS} | head -n1 | cut -d= -f2 | sed \"s/ //g\") SIGNATURE_HASH=USD(echo -en USD{SIGNATURE_STRING} | openssl sha1 -hmac \"USD{SECRET_KEY}\" -binary | base64) set -x # FIXME: this is pushing to the OIDC bucket oc exec -it etcd-0 -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -- curl -X PUT -T \"/var/lib/data/snapshot.db\" -H \"Host: USD{BUCKET_NAME}.s3.amazonaws.com\" -H \"Date: USD{DATE_VALUE}\" -H \"Content-Type: USD{CONTENT_TYPE}\" -H \"Authorization: AWS USD{ACCESS_KEY}:USD{SIGNATURE_HASH}\" https://USD{BUCKET_NAME}.s3.amazonaws.com/USD{HC_CLUSTER_NAME}-USD{POD}-snapshot.db done",
"mkdir -p USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS} USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} chmod 700 USD{BACKUP_DIR}/namespaces/ HostedCluster echo \"Backing Up HostedCluster Objects:\" oc get hc USD{HC_CLUSTER_NAME} -n USD{HC_CLUSTER_NS} -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/hc-USD{HC_CLUSTER_NAME}.yaml echo \"--> HostedCluster\" sed -i '' -e '/^status:USD/,USDd' USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/hc-USD{HC_CLUSTER_NAME}.yaml NodePool oc get np USD{NODEPOOLS} -n USD{HC_CLUSTER_NS} -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/np-USD{NODEPOOLS}.yaml echo \"--> NodePool\" sed -i '' -e '/^status:USD/,USD d' USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/np-USD{NODEPOOLS}.yaml Secrets in the HC Namespace echo \"--> HostedCluster Secrets:\" for s in USD(oc get secret -n USD{HC_CLUSTER_NS} | grep \"^USD{HC_CLUSTER_NAME}\" | awk '{print USD1}'); do oc get secret -n USD{HC_CLUSTER_NS} USDs -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/secret-USD{s}.yaml done Secrets in the HC Control Plane Namespace echo \"--> HostedCluster ControlPlane Secrets:\" for s in USD(oc get secret -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} | egrep -v \"docker|service-account-token|oauth-openshift|NAME|token-USD{HC_CLUSTER_NAME}\" | awk '{print USD1}'); do oc get secret -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USDs -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/secret-USD{s}.yaml done Hosted Control Plane echo \"--> HostedControlPlane:\" oc get hcp USD{HC_CLUSTER_NAME} -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/hcp-USD{HC_CLUSTER_NAME}.yaml Cluster echo \"--> Cluster:\" CL_NAME=USD(oc get hcp USD{HC_CLUSTER_NAME} -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o jsonpath={.metadata.labels.\\*} | grep USD{HC_CLUSTER_NAME}) oc get cluster USD{CL_NAME} -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/cl-USD{HC_CLUSTER_NAME}.yaml AWS Cluster echo \"--> AWS Cluster:\" oc get awscluster USD{HC_CLUSTER_NAME} -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/awscl-USD{HC_CLUSTER_NAME}.yaml AWS MachineTemplate echo \"--> AWS Machine Template:\" oc get awsmachinetemplate USD{NODEPOOLS} -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/awsmt-USD{HC_CLUSTER_NAME}.yaml AWS Machines echo \"--> AWS Machine:\" CL_NAME=USD(oc get hcp USD{HC_CLUSTER_NAME} -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o jsonpath={.metadata.labels.\\*} | grep USD{HC_CLUSTER_NAME}) for s in USD(oc get awsmachines -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --no-headers | grep USD{CL_NAME} | cut -f1 -d\\ ); do oc get -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} awsmachines USDs -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/awsm-USD{s}.yaml done MachineDeployments echo \"--> HostedCluster MachineDeployments:\" for s in USD(oc get machinedeployment -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o name); do mdp_name=USD(echo USD{s} | cut -f 2 -d /) oc get -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USDs -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/machinedeployment-USD{mdp_name}.yaml done MachineSets echo \"--> HostedCluster MachineSets:\" for s in USD(oc get machineset -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o 
name); do ms_name=USD(echo USD{s} | cut -f 2 -d /) oc get -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USDs -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/machineset-USD{ms_name}.yaml done Machines echo \"--> HostedCluster Machine:\" for s in USD(oc get machine -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o name); do m_name=USD(echo USD{s} | cut -f 2 -d /) oc get -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USDs -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/machine-USD{m_name}.yaml done",
"oc delete routes -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --all",
"function clean_routes() { if [[ -z \"USD{1}\" ]];then echo \"Give me the NS where to clean the routes\" exit 1 fi # Constants if [[ -z \"USD{2}\" ]];then echo \"Give me the Route53 zone ID\" exit 1 fi ZONE_ID=USD{2} ROUTES=10 timeout=40 count=0 # This allows us to remove the ownership in the AWS for the API route oc delete route -n USD{1} --all while [ USD{ROUTES} -gt 2 ] do echo \"Waiting for ExternalDNS Operator to clean the DNS Records in AWS Route53 where the zone id is: USD{ZONE_ID}...\" echo \"Try: (USD{count}/USD{timeout})\" sleep 10 if [[ USDcount -eq timeout ]];then echo \"Timeout waiting for cleaning the Route53 DNS records\" exit 1 fi count=USD((count+1)) ROUTES=USD(aws route53 list-resource-record-sets --hosted-zone-id USD{ZONE_ID} --max-items 10000 --output json | grep -c USD{EXTERNAL_DNS_DOMAIN}) done } SAMPLE: clean_routes \"<HC ControlPlane Namespace>\" \"<AWS_ZONE_ID>\" clean_routes \"USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}\" \"USD{AWS_ZONE_ID}\"",
"Just in case export KUBECONFIG=USD{MGMT2_KUBECONFIG} BACKUP_DIR=USD{HC_CLUSTER_DIR}/backup Namespace deletion in the destination Management cluster oc delete ns USD{HC_CLUSTER_NS} || true oc delete ns USD{HC_CLUSTER_NS}-{HC_CLUSTER_NAME} || true",
"Namespace creation oc new-project USD{HC_CLUSTER_NS} oc new-project USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}",
"oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/secret-*",
"Secrets oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/secret-* Cluster oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/hcp-* oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/cl-*",
"AWS oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/awscl-* oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/awsmt-* oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/awsm-* Machines oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/machinedeployment-* oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/machineset-* oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/machine-*",
"ETCD_PODS=\"etcd-0\" if [ \"USD{CONTROL_PLANE_AVAILABILITY_POLICY}\" = \"HighlyAvailable\" ]; then ETCD_PODS=\"etcd-0 etcd-1 etcd-2\" fi HC_RESTORE_FILE=USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/hc-USD{HC_CLUSTER_NAME}-restore.yaml HC_BACKUP_FILE=USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/hc-USD{HC_CLUSTER_NAME}.yaml HC_NEW_FILE=USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/hc-USD{HC_CLUSTER_NAME}-new.yaml cat USD{HC_BACKUP_FILE} > USD{HC_NEW_FILE} cat > USD{HC_RESTORE_FILE} <<EOF restoreSnapshotURL: EOF for POD in USD{ETCD_PODS}; do # Create a pre-signed URL for the etcd snapshot ETCD_SNAPSHOT=\"s3://USD{BUCKET_NAME}/USD{HC_CLUSTER_NAME}-USD{POD}-snapshot.db\" ETCD_SNAPSHOT_URL=USD(AWS_DEFAULT_REGION=USD{MGMT2_REGION} aws s3 presign USD{ETCD_SNAPSHOT}) # FIXME no CLI support for restoreSnapshotURL yet cat >> USD{HC_RESTORE_FILE} <<EOF - \"USD{ETCD_SNAPSHOT_URL}\" EOF done cat USD{HC_RESTORE_FILE} if ! grep USD{HC_CLUSTER_NAME}-snapshot.db USD{HC_NEW_FILE}; then sed -i '' -e \"/type: PersistentVolume/r USD{HC_RESTORE_FILE}\" USD{HC_NEW_FILE} sed -i '' -e '/pausedUntil:/d' USD{HC_NEW_FILE} fi HC=USD(oc get hc -n USD{HC_CLUSTER_NS} USD{HC_CLUSTER_NAME} -o name || true) if [[ USD{HC} == \"\" ]];then echo \"Deploying HC Cluster: USD{HC_CLUSTER_NAME} in USD{HC_CLUSTER_NS} namespace\" oc apply -f USD{HC_NEW_FILE} else echo \"HC Cluster USD{HC_CLUSTER_NAME} already exists, avoiding step\" fi",
"oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/np-*",
"timeout=40 count=0 NODE_STATUS=USD(oc get nodes --kubeconfig=USD{HC_KUBECONFIG} | grep -v NotReady | grep -c \"worker\") || NODE_STATUS=0 while [ USD{NODE_POOL_REPLICAS} != USD{NODE_STATUS} ] do echo \"Waiting for Nodes to be Ready in the destination MGMT Cluster: USD{MGMT2_CLUSTER_NAME}\" echo \"Try: (USD{count}/USD{timeout})\" sleep 30 if [[ USDcount -eq timeout ]];then echo \"Timeout waiting for Nodes in the destination MGMT Cluster\" exit 1 fi count=USD((count+1)) NODE_STATUS=USD(oc get nodes --kubeconfig=USD{HC_KUBECONFIG} | grep -v NotReady | grep -c \"worker\") || NODE_STATUS=0 done",
"Just in case export KUBECONFIG=USD{MGMT_KUBECONFIG} Scale down deployments oc scale deployment -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --replicas=0 --all oc scale statefulset.apps -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --replicas=0 --all sleep 15",
"NODEPOOLS=USD(oc get nodepools -n USD{HC_CLUSTER_NS} -o=jsonpath='{.items[?(@.spec.clusterName==\"'USD{HC_CLUSTER_NAME}'\")].metadata.name}') if [[ ! -z \"USD{NODEPOOLS}\" ]];then oc patch -n \"USD{HC_CLUSTER_NS}\" nodepool USD{NODEPOOLS} --type=json --patch='[ { \"op\":\"remove\", \"path\": \"/metadata/finalizers\" }]' oc delete np -n USD{HC_CLUSTER_NS} USD{NODEPOOLS} fi",
"Machines for m in USD(oc get machines -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o name); do oc patch -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USD{m} --type=json --patch='[ { \"op\":\"remove\", \"path\": \"/metadata/finalizers\" }]' || true oc delete -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USD{m} || true done oc delete machineset -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --all || true",
"Cluster C_NAME=USD(oc get cluster -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o name) oc patch -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USD{C_NAME} --type=json --patch='[ { \"op\":\"remove\", \"path\": \"/metadata/finalizers\" }]' oc delete cluster.cluster.x-k8s.io -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --all",
"AWS Machines for m in USD(oc get awsmachine.infrastructure.cluster.x-k8s.io -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o name) do oc patch -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USD{m} --type=json --patch='[ { \"op\":\"remove\", \"path\": \"/metadata/finalizers\" }]' || true oc delete -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USD{m} || true done",
"Delete HCP and ControlPlane HC NS oc patch -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} hostedcontrolplane.hypershift.openshift.io USD{HC_CLUSTER_NAME} --type=json --patch='[ { \"op\":\"remove\", \"path\": \"/metadata/finalizers\" }]' oc delete hostedcontrolplane.hypershift.openshift.io -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --all oc delete ns USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} || true",
"Delete HC and HC Namespace oc -n USD{HC_CLUSTER_NS} patch hostedclusters USD{HC_CLUSTER_NAME} -p '{\"metadata\":{\"finalizers\":null}}' --type merge || true oc delete hc -n USD{HC_CLUSTER_NS} USD{HC_CLUSTER_NAME} || true oc delete ns USD{HC_CLUSTER_NS} || true",
"Validations export KUBECONFIG=USD{MGMT2_KUBECONFIG} oc get hc -n USD{HC_CLUSTER_NS} oc get np -n USD{HC_CLUSTER_NS} oc get pod -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} oc get machines -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} Inside the HostedCluster export KUBECONFIG=USD{HC_KUBECONFIG} oc get clusterversion oc get nodes",
"oc delete pod -n openshift-ovn-kubernetes --all"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/hosted_control_planes/high-availability-for-hosted-control-planes |
Part VIII. Apache CXF Features | Part VIII. Apache CXF Features This guide describes how to enable various advanced features of Apache CXF. | null | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/cxffeatures |
4.2. Log Files | 4.2. Log Files 4.2.1. Manager Installation Log Files Table 4.2. Installation Log File Description /var/log/ovirt-engine/engine-cleanup _yyyy_mm_dd_hh_mm_ss .log Log from the engine-cleanup command. This is the command used to reset a Red Hat Virtualization Manager installation. A log is generated each time the command is run. The date and time of the run is used in the filename to allow multiple logs to exist. /var/log/ovirt-engine/engine-db-install- yyyy_mm_dd_hh_mm_ss .log Log from the engine-setup command detailing the creation and configuration of the engine database. /var/log/ovirt-engine/ovirt-engine-dwh-setup- yyyy_mm_dd_hh_mm_ss .log Log from the ovirt-engine-dwh-setup command. This is the command used to create the ovirt_engine_history database for reporting. A log is generated each time the command is run. The date and time of the run is used in the filename to allow multiple logs to exist concurrently. /var/log/ovirt-engine/setup/ovirt-engine-setup- yyyymmddhhmmss .log Log from the engine-setup command. A log is generated each time the command is run. The date and time of the run is used in the filename to allow multiple logs to exist concurrently. 4.2.2. Red Hat Virtualization Manager Log Files Table 4.3. Service Activity Log File Description /var/log/ovirt-engine/engine.log Reflects all Red Hat Virtualization Manager GUI crashes, Active Directory lookups, Database issues, and other events. /var/log/ovirt-engine/host-deploy Log files from hosts deployed from the Red Hat Virtualization Manager. /var/lib/ovirt-engine/setup-history.txt Tracks the installation and upgrade of packages associated with the Red Hat Virtualization Manager. /var/log/httpd/ovirt-requests-log Logs files from requests made to the Red Hat Virtualization Manager via HTTPS, including how long each request took. A Correlation-Id header is included to allow you to compare requests when comparing a log file with /var/log/ovirt-engine/engine.log . /var/log/ovn-provider/ovirt-provider-ovn.log Logs the activities of the OVN provider. For information about Open vSwitch logs, see the Open vSwitch documentation . 4.2.3. SPICE Log Files SPICE log files are useful when troubleshooting SPICE connection issues. To start SPICE debugging, change the log level to debugging . Then, identify the log location. Both the clients used to access the guest machines and the guest machines themselves have SPICE log files. For client-side logs, if a SPICE client was launched using the native client, for which a console.vv file is downloaded, use the remote-viewer command to enable debugging and generate log output. 4.2.3.1. SPICE Logs for Hypervisor SPICE Servers Table 4.4. SPICE Logs for Hypervisor SPICE Servers Log Type Log Location To Change Log Level: Host/Hypervisor SPICE Server /var/log/libvirt/qemu/(guest_name).log Run export SPICE_DEBUG_LEVEL=5 on the host/hypervisor prior to launching the guest. This variable is parsed by QEMU, and if run system-wide will print the debugging information of all virtual machines on the system. This command must be run on each host in the cluster. This command works only on a per-host/hypervisor basis, not a per-cluster basis. 4.2.3.2. SPICE Logs for Guest Machines Table 4.5. spice-vdagent Logs for Guest Machines Log Type Log Location To Change Log Level: Windows Guest C:\Windows\Temp\vdagent.log C:\Windows\Temp\vdservice.log Not applicable Red Hat Enterprise Linux Guest Use journalctl as the root user. 
To run the spice-vdagentd service in debug mode, as the root user create a /etc/sysconfig/spice-vdagentd file with this entry: SPICE_VDAGENTD_EXTRA_ARGS="-d -d" To run spice-vdagent in debug mode, from the command line: 4.2.3.3. SPICE Logs for SPICE Clients Launched Using console.vv Files For Linux client machines: Enable SPICE debugging by running the remote-viewer command with the --spice-debug option. When prompted, enter the connection URL, for example, spice:// virtual_machine_IP : port . # remote-viewer --spice-debug To run SPICE client with the debug parameter and to pass a .vv file to it, download the console.vv file and run the remote-viewer command with the --spice-debug option and specify the full path to the console.vv file. # remote-viewer --spice-debug /path/to/ console.vv For Windows client machines: In versions of virt-viewer 2.0-11.el7ev and later, virt-viewer.msi installs virt-viewer and debug-viewer.exe . Run the remote-viewer command with the spice-debug argument and direct the command at the path to the console: remote-viewer --spice-debug path\to\ console.vv To view logs, connect to the virtual machine, and you will see a command prompt running GDB that prints standard output and standard error of remote-viewer . 4.2.4. Host Log Files Log File Description /var/log/messages The log file used by libvirt . Use journalctl to view the log. You must be a member of the adm , systemd-journal , or wheel groups to view the log. /var/log/vdsm/spm-lock.log Log file detailing the host's ability to obtain a lease on the Storage Pool Manager role. The log details when the host has acquired, released, renewed, or failed to renew the lease. /var/log/vdsm/vdsm.log Log file for VDSM, the Manager's agent on the host(s). /tmp/ovirt-host-deploy- Date .log A host deployment log that is copied to the Manager as /var/log/ovirt-engine/host-deploy/ovirt- Date-Host-Correlation_ID .log after the host has been successfully deployed. /var/log/vdsm/import/import- UUID-Date .log Log file detailing virtual machine imports from a KVM host, a VMWare provider, or a RHEL 5 Xen host, including import failure information. UUID is the UUID of the virtual machine that was imported and Date is the date and time that the import began. /var/log/vdsm/supervdsm.log Logs VDSM tasks that were executed with superuser permissions. /var/log/vdsm/upgrade.log VDSM uses this log file during host upgrades to log configuration changes. /var/log/vdsm/mom.log Logs the activities of the VDSM's memory overcommitment manager. 4.2.5. Setting debug-level logging for Red Hat Virtualization services Note Setting logging to debug-level may expose sensitive information such as passwords or internal VM data. Make sure that non-trusted or unauthorized users do not have access to debug logs. You can set the logs of the following Red Hat Virtualization (RHV) services to debug-level by modifying the sysconfig file of each service. Table 4.6. RHV services and sysconfig file paths Service File path ovirt-engine.service /etc/sysconfig/ovirt-engine ovirt-engine-dwhd.service /etc/sysconfig/ovirt-engine-dwhd ovirt-fence-kdump-listener.service /etc/sysconfig/ovirt-fence-kdump-listener ovirt-websocket-proxy.service /etc/sysconfig/ovirt-websocket-proxy This modification affects logging done by the Python wrapper, not the main service process. Setting logging to debug-level is useful for debugging issues related to start up - for example, if the main process fails to start due to a missing or incorrect Java runtime or library. 
Prerequisites Verify that the sysconfig file you want to modify exists. If necessary, create it. Procedure Add the following to the sysconfig file of the service: OVIRT_SERVICE_DEBUG=1 Restart the service: # systemctl restart <service> The sysconfig log file of the service is now set to debug-level. Logging caused by this setting goes to the system log, so the logs it generates can be found in /var/log/messages , not in the service-specific log file, or by using the journalctl command. 4.2.6. Main configuration files for Red Hat Virtualization services In addition to a sysconfig file, each of these Red Hat Virtualization (RHV) services has another configuration file that is used more often. Table 4.7. RHV services and configuration files Service sysconfig file path Main configuration file ovirt-engine.service /etc/sysconfig/ovirt-engine /etc/ovirt-engine/engine.conf.d/*.conf ovirt-engine-dwhd.service /etc/sysconfig/ovirt-engine-dwhd /etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/*.conf ovirt-fence-kdump-listener.service /etc/sysconfig/ovirt-fence-kdump-listener /etc/ovirt-engine/ovirt-fence-kdump-listener.conf.d/*.conf ovirt-websocket-proxy.service /etc/sysconfig/ovirt-websocket-proxy /etc/ovirt-engine/ovirt-websocket-proxy.conf.d/*.conf 4.2.7. Setting Up a Host Logging Server Hosts generate and update log files, recording their actions and problems. Collecting these log files centrally simplifies debugging. This procedure should be used on your centralized log server. You could use a separate logging server, or use this procedure to enable host logging on the Red Hat Virtualization Manager. Procedure Check to see if the firewall allows traffic on the UDP 514 port, and is open to syslog service traffic: # firewall-cmd --query-service=syslog If the output is no , allow traffic on the UDP 514 port with: Create a new .conf file on the syslog server, for example, /etc/rsyslog.d/from_remote.conf , and add the following lines: template(name="DynFile" type="string" string="/var/log/%HOSTNAME%/%PROGRAMNAME%.log") RuleSet(name="RemoteMachine"){ action(type="omfile" dynaFile="DynFile") } Module(load="imudp") Input(type="imudp" port="514" ruleset="RemoteMachine") Restart the rsyslog service: # systemctl restart rsyslog.service Log in to the hypervisor, and in the /etc/rsyslog.conf add the following line: *.info;mail.none;authpriv.none;cron.none @<syslog-FQDN>:514 Restart the rsyslog service on the hypervisor. # systemctl restart rsyslog.service Your centralized log server is now configured to receive and store the messages and secure logs from your virtualization hosts. 4.2.8. Enabling SyslogHandler to pass RHV Manager logs to a remote syslog server This implementation uses the JBoss EAP SyslogHandler log manager and enables passing log records from the engine.log and server.log to a syslog server. Note RHV versions earlier than RHV 4.4.10 featured similar functionality provided by ovirt-engine-extension-logger-log4j . That package was removed in RHV 4.4.10 and replaced by a new implementation using the JBoss EAP SyslogHandler log manager. If you have been using ovirt-engine-extension-logger-log4j in earlier RHV versions, following an upgrade to RHV 4.4.10, perform following steps: Manually configure sending log records to a remote syslog server using the guidelines provided in this chapter. Manually remove the ovirt-engine-extension-logger-log4j configuration files (remove the /etc/ovirt-engine/extensions.d/Log4jLogger.properties configuration file). Use this procedure on the central syslog server. 
You can use a separate logging server, or use this procedure to pass the engine.log and server.log files from the Manager to the syslog server. See also the configuration procedure Setting up a Host Logging Server . Configuring the SyslogHandler implementation Create the configuration file 90-syslog.conf in the /etc/ovirt-engine/engine.conf.d directory and add the following content: Install and configure rsyslog . Configure SELinux to allow rsyslog traffic. Create the configuration file /etc/rsyslog.d/rhvm.conf and add the following content: Restart the rsyslog service. If the firewall is enabled and active, run the following command to add the necessary rules for opening the rsyslog ports in Firewalld : Restart Red Hat Virtualization Manager. The syslog server can now receive and store the engine.log files. | [
"killall - u USDUSER spice-vdagent spice-vdagent -x -d [-d] [ |& tee spice-vdagent.log ]",
"remote-viewer --spice-debug",
"remote-viewer --spice-debug /path/to/ console.vv",
"remote-viewer --spice-debug path\\to\\ console.vv",
"OVIRT_SERVICE_DEBUG=1",
"systemctl restart <service>",
"firewall-cmd --query-service=syslog",
"firewall-cmd --add-service=syslog --permanent firewall-cmd --reload",
"template(name=\"DynFile\" type=\"string\" string=\"/var/log/%HOSTNAME%/%PROGRAMNAME%.log\") RuleSet(name=\"RemoteMachine\"){ action(type=\"omfile\" dynaFile=\"DynFile\") } Module(load=\"imudp\") Input(type=\"imudp\" port=\"514\" ruleset=\"RemoteMachine\")",
"systemctl restart rsyslog.service",
"*.info;mail.none;authpriv.none;cron.none @<syslog-FQDN>:514",
"systemctl restart rsyslog.service",
"SYSLOG_HANDLER_ENABLED=true SYSLOG_HANDLER_SERVER_HOSTNAME=localhost SYSLOG_HANDLER_FACILITY=USER_LEVEL",
"dnf install rsyslog",
"semanage port -a -t syslogd_port_t -p udp 514",
"user.* /var/log/jboss.log module(load=\"imudp\") # needs to be done just once input(type=\"imudp\" port=\"514\")",
"systemctl restart rsyslog.service",
"firewall-cmd --permanent --add-port=514/udp firewall-cmd --reload",
"systemctl restart ovirt-engine"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/administration_guide/chap-log_files |
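As an informal end-to-end check of the centralized logging setup described above, you can send a test message from a virtualization host and confirm that it lands on the syslog server. This sketch is not part of the documented procedure; it assumes the rsyslog template shown earlier, which writes incoming records to /var/log/<hostname>/<programname>.log on the server, and uses plain shell syntax:
# On a virtualization host: send a test record over UDP 514 directly to the central server
logger -d -n <syslog-FQDN> -P 514 "rsyslog forwarding test"
# On the syslog server: the record should appear under the sending host's directory
grep -r "rsyslog forwarding test" /var/log/<sending_hostname>/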
Chapter 14. Log Record Fields | Chapter 14. Log Record Fields The following fields can be present in log records exported by the logging. Although log records are typically formatted as JSON objects, the same data model can be applied to other encodings. To search these fields from Elasticsearch and Kibana, use the full dotted field name when searching. For example, with an Elasticsearch /_search URL , to look for a Kubernetes pod name, use /_search/q=kubernetes.pod_name:name-of-my-pod . The top level fields may be present in every record. message The original log entry text, UTF-8 encoded. This field may be absent or empty if a non-empty structured field is present. See the description of structured for more. Data type text Example value HAPPY structured Original log entry as a structured object. This field may be present if the forwarder was configured to parse structured JSON logs. If the original log entry was a valid structured log, this field will contain an equivalent JSON structure. Otherwise this field will be empty or absent, and the message field will contain the original log message. The structured field can have any subfields that are included in the log message, there are no restrictions defined here. Data type group Example value map[message:starting fluentd worker pid=21631 ppid=21618 worker=0 pid:21631 ppid:21618 worker:0] @timestamp A UTC value that marks when the log payload was created or, if the creation time is not known, when the log payload was first collected. The "@" prefix denotes a field that is reserved for a particular use. By default, most tools look for "@timestamp" with ElasticSearch. Data type date Example value 2015-01-24 14:06:05.071000000 Z hostname The name of the host where this log message originated. In a Kubernetes cluster, this is the same as kubernetes.host . Data type keyword ipaddr4 The IPv4 address of the source server. Can be an array. Data type ip ipaddr6 The IPv6 address of the source server, if available. Can be an array. Data type ip level The logging level from various sources, including rsyslog(severitytext property) , a Python logging module, and others. The following values come from syslog.h , and are preceded by their numeric equivalents : 0 = emerg , system is unusable. 1 = alert , action must be taken immediately. 2 = crit , critical conditions. 3 = err , error conditions. 4 = warn , warning conditions. 5 = notice , normal but significant condition. 6 = info , informational. 7 = debug , debug-level messages. The two following values are not part of syslog.h but are widely used: 8 = trace , trace-level messages, which are more verbose than debug messages. 9 = unknown , when the logging system gets a value it doesn't recognize. Map the log levels or priorities of other logging systems to their nearest match in the preceding list. For example, from python logging , you can match CRITICAL with crit , ERROR with err , and so on. Data type keyword Example value info pid The process ID of the logging entity, if available. Data type keyword service The name of the service associated with the logging entity, if available. For example, syslog's APP-NAME and rsyslog's programname properties are mapped to the service field. Data type keyword | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/logging/cluster-logging-exported-fields |
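To make the data model concrete, the following is a hypothetical exported record assembled from the example values above. It is for illustration only; the hostname and service values are invented, and real records can contain additional fields:
{
  "@timestamp": "2015-01-24T14:06:05.071000000Z",
  "message": "starting fluentd worker pid=21631 ppid=21618 worker=0",
  "hostname": "worker-0.example.internal",
  "level": "info",
  "pid": "21631",
  "service": "fluentd"
}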
Chapter 7. Managing zones | Chapter 7. Managing zones The Red Hat OpenStack Platform (RHOSP) DNS service (designate) uses zones to break up the namespace into easily managed pieces. A user can create, modify, delete, export, and import zones provided that their RHOSP project owns the zone. The topics included in this section are: Section 7.1, "Zones in the DNS service" Section 7.2, "Creating a zone" Section 7.3, "Updating a zone" Section 7.4, "Deleting a zone" Section 7.5, "Exporting zones" Section 7.6, "Importing zones" Section 7.7, "Transferring zone ownership" Section 7.8, "Modifying zone transfer requests" 7.1. Zones in the DNS service The Red Hat OpenStack Platform (RHOSP) DNS service (designate) uses a similar zone ownership model as DNS, with two major differences. For example, in DNS, within the root zone ( . ) there are zones for each of the top level domains (TLDs) such as .org. and .com. . Within the TLD zones, there can be delegations to other zones, such as example.org. or example.com. that can be owned and managed by other organizations (or other sets of name servers). This example demonstrates a hierarchy of responsibility, with the higher-level zones composed mostly of delegations to the lower-level zones. Similar to DNS, with the RHOSP DNS service, a zone can be owned by only one tenant. However, unlike DNS, the DNS service does not support zone delegation between tenants. That is, a tenant cannot create a child zone whose parent zone is owned by a different tenant. The second difference between DNS and the RHOSP DNS service is that the DNS service manages TLDs differently than zones. The DNS service restricts tenants from creating zones that are not within a managed TLD. If the DNS service manages no TLDs, then tenants can create any TLD and any zone, other than the root zone. 7.2. Creating a zone Zones enable you to more easily manage namespaces. By default, any user can create Red Hat OpenStack Platform (RHOSP) DNS service (designate) zones. Prerequisites Your RHOSP project must own the zone in which you are creating a sub-zone, or the zone must be an allowed TLD. Procedure Source your credentials file. Example Create a zone by specifying a name for the zone and an email address of the person responsible for the zone. Example When you create a zone, the DNS service automatically creates two record sets: an SOA record and an NS record. Verification Confirm that your zone exists by running the openstack zone list command. Sample output Additional resources zone create in the Command line interface reference zone list in the Command line interface reference 7.3. Updating a zone There can be situations when you must update a zone managed by the Red Hat OpenStack Platform (RHOSP) DNS service (designate). For example, when you want to change the email address associated with the zone, or when you want to change the zone TTL (time to live) value. By default, any user can modify a zone. Prerequisites Your RHOSP project must own the zone that you are modifying. Procedure Source your credentials file. Example Modify the zone by specifying the name of the zone and the zone attributes that you want to change: --email <email_address> a valid email address for the person responsible (owner) for the zone. --ttl <seconds> (Time To Live) the duration, in seconds, that a DNS client- for example, a resolver, a web browser, an operating system- can cache a record before checking to see if it has updated. 
--description <string> | --no-description a string that describes the purpose of the zone. --masters <dns-server> [<dns-server> ...] the fully qualified domain name for the DNS server that is the primary instance- the instance that other DNS servers can sync from to become secondary servers. Example Verification Confirm that your modification to the zone succeeded. Example Additional resources zone set in the Command line interface reference zone show in the Command line interface reference 7.4. Deleting a zone You can remove zones managed by the Red Hat OpenStack Platform (RHOSP) DNS service (designate). For example, you would delete a zone when the zone name has changed. Prerequisites Your RHOSP project must own the zone that you are deleting. Procedure Source your credentials file. Example Delete the zone. Example Verification Confirm that your zone no longer exists by running the openstack zone list command. Additional resources zone delete in the Command line interface reference zone list in the Command line interface reference 7.5. Exporting zones Exporting zone data from the Red Hat OpenStack Platform (RHOSP) DNS service consists of creating a zone export resource that the DNS service stores internally by default. An example is, designate://v2/zones/tasks/exports/e75aef2c-b562-4cd9-a426-4a73f6cb82be/export . After you create the zone export data resource, you can then access its contents. Exporting zone data is a part of an overall backup strategy for protecting DNS information for your RHOSP deployment. Prerequisites Your RHOSP project must own the zone from which you are exporting data. Procedure Source your credentials file. Example Export the zone. Example Sample output Important After you create a zone export resource, the DNS service continues to update the resource with any later changes that are made to the zone. Record the zone export ID ( e75aef2c-b562-4cd9-a426-4a73f6cb82be ), because you must use it to verify your zone export, and to access the zone export data. Verification Confirm that the DNS service successfully created a zone export resource. Example Sample output The zone export create command creates a resource that the DNS service stores internally by default. Access the contents of the zone export file, by using the zone export ID that you obtained earlier. Tip Using the -f value option prints the contents of the zone file without any tabulation. You can also redirect the contents to a local text file, which can be useful if you want to modify the exported zone file locally and then import it back into the DNS service to update the zone. Example Sample output Additional resources Zone file format: RFC1034, section 3.6 RFC1035, section 5.1 zone export create in the Command line interface reference zone export show in the Command line interface reference zone export showfile in the Command line interface reference 7.6. Importing zones Importing zone data into the Red Hat OpenStack Platform (RHOSP) DNS service consists of running the openstack zone import command on a file that conforms to the DNS zone data file format, such as a file created from data produced by the openstack zone export showfile command. One reason to import data is when a user accidentally deletes a zone. Prerequisites Your RHOSP project must own the zone in which you are creating a sub-zone, or the zone must be an allowed TLD. The zone you are importing must not exist already. The zone data that you are importing must contain a zone TTL (time to live) value. 
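Before running the import, a quick pre-check against these prerequisites can save a failed attempt. The following sketch is not part of the documented procedure; it assumes your overcloud credentials are sourced and that the zone data is in /home/stack/zone_file, as in the procedure below, and it uses plain shell syntax:
# Hypothetical pre-import checks: the zone must not exist yet and the file must set a TTL
openstack zone list -c name -f value | grep -qxF 'example.com.' && echo "zone already exists - delete it before importing"
grep -q '^\$TTL' /home/stack/zone_file || echo "zone file is missing a \$TTL value"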
Procedure Source your credentials file. Example List the zones on your system: If a the zone that you want to import already exists, you must delete it first by running the openstack zone delete command. Example Confirm that your zone no longer exists by listing the zones on your system: Confirm that the zone data you are planning to import contains a zone TTL value. Example Sample output Import a valid zone data file. Example Verification Confirm that the DNS service successfully imported the zone. Example Sample output Additional resources Zone file format: RFC1034, section 3.6 RFC1035, section 5.1 zone import create in the Command line interface reference zone list in the Command line interface reference 7.7. Transferring zone ownership You can transfer the ownership of zones from one project to another project. For example, the finance team might want to transfer the ownership of the wow.example.com. zone from their project to a project in the sales team. You can transfer ownership of zones without a cloud administrator's involvement. However, both the current project zone owner and a member of the receiving project must agree on the transfer. Prerequisites Your project must own the zone that you want to transfer. After you create the transfer request, a member of the receiving project must accept the zone that you are transferring. Procedure Source your credentials file. Example Obtain the ID for the project to which you want to transfer ownership of the zone. Example Sample output Using the project ID obtained in the step, create a zone transfer request for the zone that you want to transfer. Note When using a target project ID, no other project can accept the zone transfer. If you do not provide a target project ID, then any project that has the transfer request ID and its key can receive the zone transfer. Example To transfer the zone wow.example.com. to project 1d12e87fad0d437286c2873b36a12316 , you run: Sample output Obtain the zone transfer request ID and its key. Example Sample output Provide the zone transfer request ID and its key to a member of the receiving project. A member of the receiving project logs in to the receiving project, and sources his or her credentials file. Example Using the zone transfer request ID and its key, accept the zone transfer. Example Sample output Verification Using the zone transfer accept ID from the step, check the status of your zone transfer. Example In this example, the zone status accept ID is a4c4f872-c98c-411b-a787-58ed0e2dce11 . Sample output Additional resources zone transfer request create command in the Command line interface reference zone transfer accept request command in the Command line interface reference 7.8. Modifying zone transfer requests The first step of transferring the ownership of zones from one project to another project is to create a zone transfer request. If you need to change or delete the zone transfer request, you can do so. Prerequisites Your project must own the zone whose transfer request you are modifying. Procedure Source your credentials file. Example Obtain the ID for the zone transfer request you are modifying. Example Sample output Using the zone transfer request ID obtained in the step, you can update a limited set of fields on zone transfer requests, such as the description and target project ID. Example Sample output Using the zone transfer request ID obtained in step 2, you can cancel a pending zone transfer, by deleting its zone transfer request. 
Example There is no output from the zone transfer request delete command. Confirm that the zone transfer request is removed by running the zone transfer request list command. Additional resources Section 7.7, "Transferring zone ownership" zone transfer request set command in the Command line interface reference zone transfer request delete command in the Command line interface reference | [
"source ~/overcloudrc",
"openstack zone create --email [email protected] example.com.",
"+--------------------------------------+--------------+---------+------------+--------+--------+ | id | name | type | serial | status | action | +--------------------------------------+--------------+---------+------------+--------+--------+ | 14093115-0f0f-497a-ac69-42235e46c26f | example.com. | PRIMARY | 1468421656 | ACTIVE | NONE | +--------------------------------------+--------------+---------+------------+--------+--------+",
"source ~/overcloudrc",
"openstack zone set example.com. --ttl 3000",
"openstack zone show example.com.",
"source ~/overcloudrc",
"openstack zone delete example.com.",
"source ~/overcloudrc",
"openstack zone export create example.com.",
"+------------+--------------------------------------+ | Field | Value | +------------+--------------------------------------+ | created_at | 2022-02-11T02:01:30.000000 | | id | e75aef2c-b562-4cd9-a426-4a73f6cb82be | | location | None | | message | None | | project_id | cf5a8f5cc5834d2dacd1d54cd0a354b7 | | status | PENDING | | updated_at | None | | version | 1 | | zone_id | d8f81db6-937b-4388-bfb3-ba620e6c09fb | +------------+--------------------------------------+",
"openstack zone export show e75aef2c-b562-4cd9-a426-4a73f6cb82be",
"+------------+--------------------------------------------------------------------------------+ | Field | Value | +------------+--------------------------------------------------------------------------------+ | created_at | 2022-02-11T02:01:30.000000 | | id | e75aef2c-b562-4cd9-a426-4a73f6cb82be | | location | designate://v2/zones/tasks/exports/e75aef2c-b562-4cd9-a426-4a73f6cb82be/export | | message | None | | project_id | cf5a8f5cc5834d2dacd1d54cd0a354b7 | | status | COMPLETE | | updated_at | 2022-02-11T02:01:30.000000 | | version | 2 | | zone_id | d8f81db6-937b-4388-bfb3-ba620e6c09fb | +------------+--------------------------------------------------------------------------------+",
"openstack zone export showfile e75aef2c-b562-4cd9-a426-4a73f6cb82be -f value",
"USDORIGIN example.com. USDTTL 3600 example.com. IN NS ns1.example.com. example.com. IN SOA ns1.example.com. admin.example.com. 1624414033 3583 600 86400 3600 www.example.com. IN A 192.0.2.2 www.example.com. IN A 192.0.2.1",
"source ~/overcloudrc",
"openstack zone list",
"openstack zone delete example.com.",
"openstack zone list",
"cat /home/stack/zone_file",
"USDORIGIN example.com. USDTTL 3000 example.com. IN NS test.example.com. example.com. IN SOA test.example.com. admin.example.com. 1624415706 9000 500 86000 5000 www.example.com. IN A 192.0.2.2 test.example.com. IN NS test.example.com.",
"openstack zone import create /home/stack/zone_file",
"openstack recordset list -c name -c type -c records -c status example.com.",
"+-------------------+------+---------------------------------------------------------------------+--------+ | name | type | records | status | +-------------------+------+---------------------------------------------------------------------+--------+ | example.com. | SOA | ns1.example.com. admin.example.com. 1624415706 3582 500 86000 3600 | ACTIVE | | test.example.com. | NS | test.example.com. | ACTIVE | | example.com. | NS | ns1.example.com. | ACTIVE | | www.example.com. | A | 192.0.2.2 | ACTIVE | +-------------------+------+---------------------------------------------------------------------+--------+",
"source ~/overcloudrc",
"openstack project list",
"+----------------------------------+--------------------+ | ID | Name | +----------------------------------+--------------------+ | 7af0acba0486472da2447ff55df4a26d | Finance | | 1d12e87fad0d437286c2873b36a12316 | Sales | +----------------------------------+--------------------+",
"openstack zone transfer request create --target-project-id 1d12e87fad0d437286c2873b36a12316 wow.example.com.",
"+-------------------+-----------------------------------------------------+ | Field | Value | +-------------------+-----------------------------------------------------+ | created_at | 2022-05-26T22:06:39.000000 | | description | None | | id | 63cab5e5-65fa-4480-b26c-c16c267c44b2 | | key | BIFJIQWH | | links | {'self': 'http://127.0.0.1:60053/v2/zones/tasks/tra | | | nsfer_requests/63cab5e5-65fa-4480-b26c-c16c267c44b2 | | | '} | | project_id | 6265985fc493465db6a978b318a01996 | | status | ACTIVE | | target_project_id | 1d12e87fad0d437286c2873b36a12316 | | updated_at | None | | zone_id | 962f08b4-b671-4096-bf24-8908c9d4af0c | | zone_name | wow.example.com. | +-------------------+-----------------------------------------------------+",
"openstack zone transfer request list -c id -c zone_name -c key",
"+--------------------------------------+------------------+----------+ | id | zone_name | key | +--------------------------------------+------------------+----------+ | 63cab5e5-65fa-4480-b26c-c16c267c44b2 | wow.example.com. | BIFJIQWH | +--------------------------------------+------------------+----------+",
"source ~/overcloudrc",
"openstack zone transfer accept request --transfer-id 63cab5e5-65fa-4480-b26c-c16c267c44b2 --key BIFJIQWH",
"+--------------------------+----------------------------------------------+ | Field | Value | +--------------------------+----------------------------------------------+ | created_at | 2022-05-27T21:37:43.000000 | | id | a4c4f872-c98c-411b-a787-58ed0e2dce11 | | key | BIFJIQWH | | links | {'self': 'http://127.0.0.1:60053/v2/zones/ta | | | sks/transfer_accepts/a4c4f872-c98c-411b-a787 | | | -58ed0e2dce11', 'zone': 'http://127.0.0.1:60 | | | 053/v2/zones/962f08b4-b671-4096-bf24-8908c9d | | | 4af0c'} | | project_id | 1d12e87fad0d437286c2873b36a12316 | | status | COMPLETE | | updated_at | 2022-05-27T21:37:43.000000 | | zone_id | 962f08b4-b671-4096-bf24-8908c9d4af0c | | zone_transfer_request_id | 63cab5e5-65fa-4480-b26c-c16c267c44b2 | +--------------------------+----------------------------------------------+",
"openstack zone transfer accept show a4c4f872-c98c-411b-a787-58ed0e2dce11",
"+--------------------------+----------------------------------------------+ | Field | Value | +--------------------------+----------------------------------------------+ | created_at | 2022-05-27T21:37:43.000000 | | id | a4c4f872-c98c-411b-a787-58ed0e2dce11 | | key | None | | links | {'self': 'http://127.0.0.1:60053/v2/zones/ta | | | sks/transfer_accepts/a4c4f872-c98c-411b-a787 | | | -58ed0e2dce11', 'zone': 'http://127.0.0.1:60 | | | 053/v2/zones/962f08b4-b671-4096-bf24-8908c9d | | | 4af0c'} | | project_id | 1d12e87fad0d437286c2873b36a12316 | | status | COMPLETE | | updated_at | 2022-05-27T21:37:43.000000 | | zone_id | 962f08b4-b671-4096-bf24-8908c9d4af0c | | zone_transfer_request_id | 63cab5e5-65fa-4480-b26c-c16c267c44b2 | +--------------------------+----------------------------------------------+",
"source ~/overcloudrc",
"openstack zone transfer request list -c id -c zone_name",
"+--------------------------------------+------------------+ | id | zone_name | +--------------------------------------+------------------+ | 63cab5e5-65fa-4480-b26c-c16c267c44b2 | wow.example.com. | +--------------------------------------+------------------+",
"openstack zone transfer request set --description \"wow zone transfer\" 63cab5e5-65fa-4480-b26c-c16c267c44b2",
"+-------------------+-----------------------------------------------------+ | Field | Value | +-------------------+-----------------------------------------------------+ | created_at | 2022-05-26T22:06:39.000000 | | description | wow zone transfer | | id | 63cab5e5-65fa-4480-b26c-c16c267c44b2 | | key | BIFJIQWH | | links | {'self': 'http://127.0.0.1:60053/v2/zones/tasks/tra | | | nsfer_requests/63cab5e5-65fa-4480-b26c-c16c267c44b2 | | | '} | | project_id | 6265985fc493465db6a978b318a01996 | | status | ACTIVE | | target_project_id | 1d12e87fad0d437286c2873b36a12316 | | updated_at | 2022-05-27T20:52:08.000000 | | zone_id | 962f08b4-b671-4096-bf24-8908c9d4af0c | | zone_name | wow.example.com. | +-------------------+-----------------------------------------------------+",
"openstack zone transfer request delete 63cab5e5-65fa-4480-b26c-c16c267c44b2"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_dns_as_a_service/manage-zones_rhosp-dnsaas |
8.3. Overview of Packet Reception | 8.3. Overview of Packet Reception To better analyze network bottlenecks and performance issues, you need to understand how packet reception works. Packet reception is important in network performance tuning because the receive path is where frames are often lost. Lost frames in the receive path can cause a significant penalty to network performance. Figure 8.1. Network receive path diagram The Linux kernel receives each frame and subjects it to a four-step process: Hardware Reception : the network interface card (NIC) receives the frame on the wire. Depending on its driver configuration, the NIC transfers the frame either to an internal hardware buffer memory or to a specified ring buffer. Hard IRQ : the NIC asserts the presence of a new frame by interrupting the CPU. This causes the NIC driver to acknowledge the interrupt and schedule the soft IRQ operation . Soft IRQ : this stage implements the actual frame-receiving process, and is run in softirq context. This means that the stage pre-empts all applications running on the specified CPU, but still allows hard IRQs to be asserted. In this context (running on the same CPU as hard IRQ, thereby minimizing locking overhead), the kernel actually removes the frame from the NIC hardware buffers and processes it through the network stack. From there, the frame is either forwarded, discarded, or passed to a target listening socket. When passed to a socket, the frame is appended to the application that owns the socket. This process is done iteratively until the NIC hardware buffer runs out of frames, or until the device weight ( dev_weight ) is reached. For more information about device weight, refer to Section 8.4.1, "NIC Hardware Buffer" . Application receive : the application receives the frame and dequeues it from any owned sockets via the standard POSIX calls ( read , recv , recvfrom ). At this point, data received over the network no longer exists on the network stack. The Red Hat Enterprise Linux Network Performance Tuning Guide available on the Red Hat Customer Portal contains information on packet reception in the Linux kernel, and covers the following areas of NIC tuning: SoftIRQ misses (netdev budget), tuned tuning daemon, numad NUMA daemon, CPU power states, interrupt balancing, pause frames, interrupt coalescence, adapter queue ( netdev backlog), adapter RX and TX buffers, adapter TX queue, module parameters, adapter offloading, Jumbo Frames, TCP and UDP protocol tuning, and NUMA locality. CPU/cache affinity To maintain high throughput on the receive path, it is recommended that you keep the L2 cache hot . As described earlier, network buffers are received on the same CPU as the IRQ that signaled their presence. This means that buffer data will be on the L2 cache of that receiving CPU. To take advantage of this, place process affinity on applications expected to receive the most data on the NIC that shares the same core as the L2 cache. This will maximize the chances of a cache hit, and thereby improve performance. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/performance_tuning_guide/s-network-packet-reception |
Configuring and managing logical volumes | Configuring and managing logical volumes Red Hat Enterprise Linux 9 Configuring and managing LVM Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_logical_volumes/index |
Chapter 4. Hot Rod Client API | Chapter 4. Hot Rod Client API Data Grid Hot Rod client API provides interfaces for creating caches remotely, manipulating data, monitoring the topology of clustered caches, and more. 4.1. RemoteCache API The collection methods keySet , entrySet and values are backed by the remote cache. That is, every method calls back into the RemoteCache . This is useful because it allows the various keys, entries or values to be retrieved lazily, without requiring them all to be stored in client memory at once. These collections adhere to the Map specification: add and addAll are not supported, but all other methods are supported. Note that the Iterator.remove and Set.remove or Collection.remove methods require more than one round trip to the server to operate. You can check out the RemoteCache Javadoc to see more details about these and the other methods. Iterator Usage The iterator method of these collections uses retrieveEntries internally, which is described below. As you may notice, retrieveEntries takes an argument for the batch size, but there is no way to provide this to the iterator. Instead, the batch size can be configured via the system property infinispan.client.hotrod.batch_size or through the ConfigurationBuilder when configuring the RemoteCacheManager . The iterator that retrieveEntries returns is Closeable , so the iterators from keySet , entrySet and values return an AutoCloseable variant. Therefore you should always close these `Iterator`s when you are done with them. try (CloseableIterator<Map.Entry<K, V>> iterator = remoteCache.entrySet().iterator()) { } What if I want a deep copy and not a backing collection? A previous version of RemoteCache allowed for the retrieval of a deep copy of the keySet . This is still possible with the new backing map; you just have to copy the contents yourself. You can also do this with entrySet and values , which were not supported before. Set<K> keysCopy = remoteCache.keySet().stream().collect(Collectors.toSet()); 4.1.1. Unsupported Methods The Data Grid RemoteCache API does not support all methods available in the Cache API and throws UnsupportedOperationException when unsupported methods are invoked. Most of these methods do not make sense on the remote cache (e.g. listener management operations), or correspond to methods that are not supported by the local cache as well (e.g. containsValue). Certain atomic operations inherited from ConcurrentMap are also not supported with the RemoteCache API, for example: boolean remove(Object key, Object value); boolean replace(Object key, Object value); boolean replace(Object key, Object oldValue, Object value); However, RemoteCache offers alternative versioned methods for these atomic operations that send version identifiers over the network instead of whole value objects. Reference Cache RemoteCache UnsupportedOperationException ConcurrentMap 4.2. Remote Iterator API Data Grid provides a remote iterator API to retrieve entries where memory resources are constrained or if you plan to do server-side filtering or conversion. // Retrieve all entries in batches of 1000 int batchSize = 1000; try (CloseableIterator<Entry<Object, Object>> iterator = remoteCache.retrieveEntries(null, batchSize)) { while(iterator.hasNext()) { // Do something } } // Filter by segment Set<Integer> segments = ...
try (CloseableIterator<Entry<Object, Object>> iterator = remoteCache.retrieveEntries(null, segments, batchSize)) { while(iterator.hasNext()) { // Do something } } // Filter by custom filter try (CloseableIterator<Entry<Object, Object>> iterator = remoteCache.retrieveEntries("myFilterConverterFactory", segments, batchSize)) { while(iterator.hasNext()) { // Do something } } 4.2.1. Deploying Custom Filters to Data Grid Server Deploy custom filters to Data Grid server instances. Procedure Create a factory that extends KeyValueFilterConverterFactory . import java.io.Serializable; import org.infinispan.filter.AbstractKeyValueFilterConverter; import org.infinispan.filter.KeyValueFilterConverter; import org.infinispan.filter.KeyValueFilterConverterFactory; import org.infinispan.filter.NamedFactory; import org.infinispan.metadata.Metadata; //@NamedFactory annotation defines the factory name @NamedFactory(name = "myFilterConverterFactory") public class MyKeyValueFilterConverterFactory implements KeyValueFilterConverterFactory { @Override public KeyValueFilterConverter<String, SampleEntity1, SampleEntity2> getFilterConverter() { return new MyKeyValueFilterConverter(); } // Filter implementation. Should be serializable or externalizable for DIST caches static class MyKeyValueFilterConverter extends AbstractKeyValueFilterConverter<String, SampleEntity1, SampleEntity2> implements Serializable { @Override public SampleEntity2 filterAndConvert(String key, SampleEntity1 entity, Metadata metadata) { // returning null will case the entry to be filtered out // return SampleEntity2 will convert from the cache type SampleEntity1 } @Override public MediaType format() { // returns the MediaType that data should be presented to this converter. // When omitted, the server will use "application/x-java-object". // Returning null will cause the filter/converter to be done in the storage format. } } } Create a JAR that contains a META-INF/services/org.infinispan.filter.KeyValueFilterConverterFactory file. This file should include the fully qualified class name of the filter factory class implementation. If the filter uses custom key/value classes, you must include them in your JAR file so that the filter can correctly unmarshall key and/or value instances. Add the JAR file to the server/lib directory of your Data Grid server installation directory. Reference KeyValueFilterConverterFactory 4.3. MetadataValue API Use the MetadataValue interface for versioned operations. The following example shows a remove operation that occurs only if the version of the value for the entry is unchanged: RemoteCacheManager remoteCacheManager = new RemoteCacheManager(); RemoteCache<String, String> remoteCache = remoteCacheManager.getCache(); remoteCache.put("car", "ferrari"); VersionedValue valueBinary = remoteCache.getWithMetadata("car"); assert remoteCache.remove("car", valueBinary.getVersion()); assert !remoteCache.containsKey("car"); Reference org.infinispan.client.hotrod.MetadataValue 4.4. Streaming API Data Grid provides a Streaming API that implements methods that return instances of InputStream and OutputStream so you can stream large objects between Hot Rod clients and Data Grid servers. 
Consider the following example of a large object: StreamingRemoteCache<String> streamingCache = remoteCache.streaming(); OutputStream os = streamingCache.put("a_large_object"); os.write(...); os.close(); You could read the object through streaming as follows: StreamingRemoteCache<String> streamingCache = remoteCache.streaming(); InputStream is = streamingCache.get("a_large_object"); for(int b = is.read(); b >= 0; b = is.read()) { // iterate } is.close(); Note The Streaming API does not marshall values, which means you cannot access the same entries using both the Streaming and Non-Streaming API at the same time. You can, however, implement a custom marshaller to handle this case. The InputStream returned by the StreamingRemoteCache.get(K key) method implements the VersionedMetadata interface, so you can retrieve version and expiration information as follows: StreamingRemoteCache<String> streamingCache = remoteCache.streaming(); InputStream is = streamingCache.get("a_large_object"); long version = ((VersionedMetadata) is).getVersion(); for(int b = is.read(); b >= 0; b = is.read()) { // iterate } is.close(); Note Conditional write methods ( putIfAbsent() , replace() ) perform the actual condition check after the value is completely sent to the server. In other words, when the close() method is invoked on the OutputStream . Reference org.infinispan.client.hotrod.StreamingRemoteCache 4.5. Counter API The CounterManager interface is the entry point to define, retrieve and remove counters. Hot Rod clients can retrieve the CounterManager interface as in the following example: // create or obtain your RemoteCacheManager RemoteCacheManager manager = ...; // retrieve the CounterManager CounterManager counterManager = RemoteCounterManagerFactory.asCounterManager(manager); 4.6. Creating Event Listeners Java Hot Rod clients can register listeners to receive cache-entry level events. Cache entry created, modified and removed events are supported. Creating a client listener is very similar to embedded listeners, except that different annotations and event classes are used. Here's an example of a client listener that prints out each event received: import org.infinispan.client.hotrod.annotation.*; import org.infinispan.client.hotrod.event.*; @ClientListener(converterFactoryName = "static-converter") public class EventPrintListener { @ClientCacheEntryCreated public void handleCreatedEvent(ClientCacheEntryCreatedEvent e) { System.out.println(e); } @ClientCacheEntryModified public void handleModifiedEvent(ClientCacheEntryModifiedEvent e) { System.out.println(e); } @ClientCacheEntryRemoved public void handleRemovedEvent(ClientCacheEntryRemovedEvent e) { System.out.println(e); } } ClientCacheEntryCreatedEvent and ClientCacheEntryModifiedEvent instances provide information on the affected key, and the version of the entry. This version can be used to invoke conditional operations on the server, such as replaceWithVersion or removeWithVersion . ClientCacheEntryRemovedEvent events are only sent when the remove operation succeeds. In other words, if a remove operation is invoked but no entry is found or no entry should be removed, no event is generated. Users interested in removed events, even when no entry was removed, can develop event customization logic to generate such events. More information can be found in the customizing client events section .
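To make the link between event versions and the versioned operations more concrete, the following minimal sketch, which is not part of the original guide, registers a listener that removes an entry only if the entry still has the version reported by the event. The String key and value types, the constructor-injected cache, and the idea of reacting to modifications this way are illustrative assumptions rather than a recommended pattern.

import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.annotation.ClientCacheEntryModified;
import org.infinispan.client.hotrod.annotation.ClientListener;
import org.infinispan.client.hotrod.event.ClientCacheEntryModifiedEvent;

@ClientListener
public class VersionedRemoveListener {

    private final RemoteCache<String, String> cache;

    public VersionedRemoveListener(RemoteCache<String, String> cache) {
        this.cache = cache;
    }

    @ClientCacheEntryModified
    public void handleModified(ClientCacheEntryModifiedEvent<String> e) {
        // removeWithVersion only removes the entry if it still has the version
        // reported by this event, so a newer concurrent update on the server wins.
        // In a real application you would typically hand this work off to another
        // thread instead of doing a remote call on the event callback thread.
        boolean removed = cache.removeWithVersion(e.getKey(), e.getVersion());
        System.out.println("Removed " + e.getKey() + "? " + removed);
    }
}

As with the other listeners shown in this chapter, an instance of this class would be registered with cache.addClientListener(...) and removed with cache.removeClientListener(...) when it is no longer needed.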
All ClientCacheEntryCreatedEvent , ClientCacheEntryModifiedEvent and ClientCacheEntryRemovedEvent event instances also provide a boolean isCommandRetried() method that will return true if the write command that caused this had to be retried due to a topology change. This could be a sign that this event has been duplicated or another event was dropped and replaced (eg: ClientCacheEntryModifiedEvent replaced ClientCacheEntryCreatedEvent). Once the client listener implementation has been created, it needs to be registered with the server. To do so, execute: RemoteCache<?, ?> cache = ... cache.addClientListener(new EventPrintListener()); 4.6.1. Removing Event Listeners When a client event listener is no longer needed, it can be removed: EventPrintListener listener = ... cache.removeClientListener(listener); 4.6.2. Filtering Events In order to avoid inundating clients with events, users can provide filtering functionality to limit the number of events fired by the server for a particular client listener. To enable filtering, a cache event filter factory needs to be created that produces filter instances: import org.infinispan.notifications.cachelistener.filter.CacheEventFilterFactory; import org.infinispan.filter.NamedFactory; @NamedFactory(name = "static-filter") public static class StaticCacheEventFilterFactory implements CacheEventFilterFactory { @Override public StaticCacheEventFilter getFilter(Object[] params) { return new StaticCacheEventFilter(); } } // Serializable, Externalizable or marshallable with Infinispan Externalizers // needed when running in a cluster class StaticCacheEventFilter implements CacheEventFilter<Integer, String>, Serializable { @Override public boolean accept(Integer key, String oldValue, Metadata oldMetadata, String newValue, Metadata newMetadata, EventType eventType) { if (key.equals(1)) // static key return true; return false; } } The cache event filter factory instance defined above creates filter instances which statically filter out all entries except the one whose key is 1 . To be able to register a listener with this cache event filter factory, the factory has to be given a unique name, and the Hot Rod server needs to be plugged with the name and the cache event filter factory instance. Create a JAR file that contains the filter implementation. If the cache uses custom key/value classes, these must be included in the JAR so that the callbacks can be executed with the correctly unmarshalled key and/or value instances. If the client listener has useRawData enabled, this is not necessary since the callback key/value instances will be provided in binary format. Create a META-INF/services/org.infinispan.notifications.cachelistener.filter.CacheEventFilterFactory file within the JAR file and within it, write the fully qualified class name of the filter class implementation. Add the JAR file to the server/lib directory of your Data Grid server installation directory. Link the client listener with this cache event filter factory by adding the factory name to the @ClientListener annotation: @ClientListener(filterFactoryName = "static-filter") public class EventPrintListener { ... } Register the listener with the server: RemoteCache<?, ?> cache = ... cache.addClientListener(new EventPrintListener()); You can also register dynamic filter instances that filter based on parameters provided when the listener is registered.
Filters use the parameters received by the filter factories to enable this option, for example: import org.infinispan.notifications.cachelistener.filter.CacheEventFilterFactory; import org.infinispan.notifications.cachelistener.filter.CacheEventFilter; class DynamicCacheEventFilterFactory implements CacheEventFilterFactory { @Override public CacheEventFilter<Integer, String> getFilter(Object[] params) { return new DynamicCacheEventFilter(params); } } // Serializable, Externalizable or marshallable with Infinispan Externalizers // needed when running in a cluster class DynamicCacheEventFilter implements CacheEventFilter<Integer, String>, Serializable { final Object[] params; DynamicCacheEventFilter(Object[] params) { this.params = params; } @Override public boolean accept(Integer key, String oldValue, Metadata oldMetadata, String newValue, Metadata newMetadata, EventType eventType) { if (key.equals(params[0])) // dynamic key return true; return false; } } The dynamic parameters required to do the filtering are provided when the listener is registered: RemoteCache<?, ?> cache = ... cache.addClientListener(new EventPrintListener(), new Object[]{1}, null); Warning Filter instances have to be marshallable when they are deployed in a cluster so that the filtering can happen right where the event is generated, even if the event is generated in a different node to where the listener is registered. To make them marshallable, either make them extend Serializable , Externalizable , or provide a custom Externalizer for them. 4.6.3. Skipping Notifications Include the SKIP_LISTENER_NOTIFICATION flag when calling remote API methods to perform operations without getting event notifications from the server. For example, to prevent listener notifications when creating or modifying values, set the flag as follows: remoteCache.withFlags(Flag.SKIP_LISTENER_NOTIFICATION).put(1, "one"); 4.6.4. Customizing Events The events generated by default contain just enough information to make the event relevant but they avoid cramming too much information in order to reduce the cost of sending them. Optionally, the information shipped in the events can be customised in order to contain more information, such as values, or to contain even less information.
This customization is done with CacheEventConverter instances generated by a CacheEventConverterFactory : import org.infinispan.notifications.cachelistener.filter.CacheEventConverterFactory; import org.infinispan.notifications.cachelistener.filter.CacheEventConverter; import org.infinispan.filter.NamedFactory; @NamedFactory(name = "static-converter") class StaticConverterFactory implements CacheEventConverterFactory { final CacheEventConverter<Integer, String, CustomEvent> staticConverter = new StaticCacheEventConverter(); public CacheEventConverter<Integer, String, CustomEvent> getConverter(final Object[] params) { return staticConverter; } } // Serializable, Externalizable or marshallable with Infinispan Externalizers // needed when running in a cluster class StaticCacheEventConverter implements CacheEventConverter<Integer, String, CustomEvent>, Serializable { public CustomEvent convert(Integer key, String oldValue, Metadata oldMetadata, String newValue, Metadata newMetadata, EventType eventType) { return new CustomEvent(key, newValue); } } // Needs to be Serializable, Externalizable or marshallable with Infinispan Externalizers // regardless of cluster or local caches static class CustomEvent implements Serializable { final Integer key; final String value; CustomEvent(Integer key, String value) { this.key = key; this.value = value; } } In the example above, the converter generates a new custom event which includes the value as well as the key in the event. This will result in bigger event payloads compared with default events, but if combined with filtering, it can reduce its network bandwidth cost. Warning The target type of the converter must be either Serializable or Externalizable . In this particular case of converters, providing an Externalizer will not work by default since the default Hot Rod client marshaller does not support them. Handling custom events requires a slightly different client listener implementation to the one demonstrated previously. To be more precise, it needs to handle ClientCacheEntryCustomEvent instances: import org.infinispan.client.hotrod.annotation.*; import org.infinispan.client.hotrod.event.*; @ClientListener public class CustomEventPrintListener { @ClientCacheEntryCreated @ClientCacheEntryModified @ClientCacheEntryRemoved public void handleCustomEvent(ClientCacheEntryCustomEvent<CustomEvent> e) { System.out.println(e); } } The ClientCacheEntryCustomEvent received in the callback exposes the custom event via getEventData method, and the getType method provides information on whether the event generated was as a result of cache entry creation, modification or removal. Similar to filtering, to be able to register a listener with this converter factory, the factory has to be given a unique name, and the Hot Rod server needs to be plugged with the name and the cache event converter factory instance. Create a JAR file with the converter implementation within it. If the cache uses custom key/value classes, these must be included in the JAR so that the callbacks can be executed with the correctly unmarshalled key and/or value instances. If the client listener has useRawData enabled, this is not necessary since the callback key/value instances will be provided in binary format. Create a META-INF/services/org.infinispan.notifications.cachelistener.filter.CacheEventConverterFactory file within the JAR file and within it, write the fully qualified class name of the converter class implementation. 
Add the JAR file to the server/lib directory of your Data Grid server installation directory. Link the client listener with this converter factory by adding the factory name to the @ClientListener annotation: @ClientListener(converterFactoryName = "static-converter") public class CustomEventPrintListener { ... } Register the listener with the server: RemoteCache<?, ?> cache = ... cache.addClientListener(new CustomEventPrintListener()); Dynamic converter instances that convert based on parameters provided when the listener is registered are also possible. Converters use the parameters received by the converter factories to enable this option. For example: import org.infinispan.notifications.cachelistener.filter.CacheEventConverterFactory; import org.infinispan.notifications.cachelistener.filter.CacheEventConverter; @NamedFactory(name = "dynamic-converter") class DynamicCacheEventConverterFactory implements CacheEventConverterFactory { public CacheEventConverter<Integer, String, CustomEvent> getConverter(final Object[] params) { return new DynamicCacheEventConverter(params); } } // Serializable, Externalizable or marshallable with Infinispan Externalizers needed when running in a cluster class DynamicCacheEventConverter implements CacheEventConverter<Integer, String, CustomEvent>, Serializable { final Object[] params; DynamicCacheEventConverter(Object[] params) { this.params = params; } public CustomEvent convert(Integer key, String oldValue, Metadata oldMetadata, String newValue, Metadata newMetadata, EventType eventType) { // If the key matches a key given via parameter, only send the key information if (params[0].equals(key)) return new CustomEvent(key, null); return new CustomEvent(key, newValue); } } The dynamic parameters required to do the conversion are provided when the listener is registered: RemoteCache<?, ?> cache = ... cache.addClientListener(new EventPrintListener(), null, new Object[]{1}); Warning Converter instances have to marshallable when they are deployed in a cluster, so that the conversion can happen right where the event is generated, even if the event is generated in a different node to where the listener is registered. To make them marshallable, either make them extend Serializable , Externalizable , or provide a custom Externalizer for them. 4.6.5. Filter and Custom Events If you want to do both event filtering and customization, it's easier to implement org.infinispan.notifications.cachelistener.filter.CacheEventFilterConverter which allows both filter and customization to happen in a single step. For convenience, it's recommended to extend org.infinispan.notifications.cachelistener.filter.AbstractCacheEventFilterConverter instead of implementing org.infinispan.notifications.cachelistener.filter.CacheEventFilterConverter directly. 
For example: import org.infinispan.notifications.cachelistener.filter.CacheEventConverterFactory; import org.infinispan.notifications.cachelistener.filter.CacheEventConverter; @NamedFactory(name = "dynamic-filter-converter") class DynamicCacheEventFilterConverterFactory implements CacheEventFilterConverterFactory { public CacheEventFilterConverter<Integer, String, CustomEvent> getFilterConverter(final Object[] params) { return new DynamicCacheEventFilterConverter(params); } } // Serializable, Externalizable or marshallable with Infinispan Externalizers needed when running in a cluster // class DynamicCacheEventFilterConverter extends AbstractCacheEventFilterConverter<Integer, String, CustomEvent>, Serializable { final Object[] params; DynamicCacheEventFilterConverter(Object[] params) { this.params = params; } public CustomEvent filterAndConvert(Integer key, String oldValue, Metadata oldMetadata, String newValue, Metadata newMetadata, EventType eventType) { // If the key matches a key given via parameter, only send the key information if (params[0].equals(key)) return new CustomEvent(key, null); return new CustomEvent(key, newValue); } } Similar to filters and converters, to be able to register a listener with this combined filter/converter factory, the factory has to be given a unique name via the @NamedFactory annotation, and the Hot Rod server needs to be plugged with the name and the cache event converter factory instance. Create a JAR file with the converter implementation within it. If the cache uses custom key/value classes, these must be included in the JAR so that the callbacks can be executed with the correctly unmarshalled key and/or value instances. If the client listener has useRawData enabled, this is not necessary since the callback key/value instances will be provided in binary format. Create a META-INF/services/org.infinispan.notifications.cachelistener.filter.CacheEventFilterConverterFactory file within the JAR file and within it, write the fully qualified class name of the converter class implementation. Add the JAR file to the server/lib directory of your Data Grid server installation directory. From a client perspective, to be able to use the combined filter and converter class, the client listener must define the same filter factory and converter factory names, e.g.: @ClientListener(filterFactoryName = "dynamic-filter-converter", converterFactoryName = "dynamic-filter-converter") public class CustomEventPrintListener { ... } The dynamic parameters required in the example above are provided when the listener is registered via either filter or converter parameters. If filter parameters are non-empty, those are used, otherwise, the converter parameters: RemoteCache<?, ?> cache = ... cache.addClientListener(new CustomEventPrintListener(), new Object[]{1}, null); 4.6.6. Event Marshalling Hot Rod servers can store data in different formats, but in spite of that, Java Hot Rod client users can still develop CacheEventConverter or CacheEventFilter instances that work on typed objects. By default, filters and converter will use data as POJO (application/x-java-object) but it is possible to override the desired format by overriding the method format() from the filter/converter. If the format returns null , the filter/converter will receive data as it's stored. Hot Rod Java clients can be configured to use different org.infinispan.commons.marshall.Marshaller instances. 
If doing this and deploying CacheEventConverter or CacheEventFilter instances, to be able to present filters/converter with Java Objects rather than marshalled content, the server needs to be able to convert between objects and the binary format produced by the marshaller. To deploy a Marshaller instance server-side, follow a similar method to the one used to deploy CacheEventConverter or CacheEventFilter instances: Create a JAR file with the converter implementation within it. Create a META-INF/services/org.infinispan.commons.marshall.Marshaller file within the JAR file and within it, write the fully qualified class name of the marshaller class implementation. Add the JAR file to the server/lib directory of your Data Grid server installation directory. Note that the Marshaller could be deployed in either a separate jar, or in the same jar as the CacheEventConverter and/or CacheEventFilter instances. 4.6.6.1. Deploying Protostream Marshallers If a cache stores Protobuf content, as it happens when using ProtoStream marshaller in the Hot Rod client, it's not necessary to deploy a custom marshaller since the format is already supported by the server: there are transcoders from Protobuf format to most common formats like JSON and POJO. When using filters/converters with those caches, if it's desirable to use filter/converters with Java Objects rather than binary Protobuf data, it's necessary to configure the extra ProtoStream marshallers so that the server can unmarshall the data before filtering/converting. To do so, you must configure the required SerializationContextInitializer(s) as part of the Data Grid server configuration. See Cache Encoding and Marshalling for more information. 4.6.7. Listener State Handling The client listener annotation has an optional includeCurrentState attribute that specifies whether state will be sent to the client when the listener is added or when there's a failover of the listener. By default, includeCurrentState is false, but if set to true and a client listener is added in a cache already containing data, the server iterates over the cache contents and sends an event for each entry to the client as a ClientCacheEntryCreated (or custom event if configured). This allows clients to build some local data structures based on the existing content. Once the content has been iterated over, events are received as normal, as cache updates are received. If the cache is clustered, the entire cluster wide contents are iterated over. 4.6.8. Listener Failure Handling When a Hot Rod client registers a client listener, it does so in a single node in a cluster. If that node fails, the Java Hot Rod client detects that transparently and fails over all listeners registered in the node that failed to another node. During this fail over the client might miss some events. To avoid missing these events, the client listener annotation contains an optional parameter called includeCurrentState which if set to true, when the failover happens, the cache contents can be iterated over and ClientCacheEntryCreated events (or custom events if configured) are generated. By default, includeCurrentState is set to false. Use callbacks to handle failover events: @ClientCacheFailover public void handleFailover(ClientCacheFailoverEvent e) { ...
} This is very useful in use cases where the client has cached some data. Taking into account that some events could be missed as a result of the fail over, the client could decide to clear any locally cached data when the fail over event is received, with the knowledge that after the fail over event, it will receive events for the contents of the entire cache. 4.7. Hot Rod Java Client Transactions You can configure and use Hot Rod clients in JTA Transaction s. To participate in a transaction, the Hot Rod client requires the TransactionManager with which it interacts and whether it participates in the transaction through the Synchronization or XAResource interface. Important Transactions are optimistic in that clients acquire write locks on entries during the prepare phase. To avoid data inconsistency, be sure to read about Detecting Conflicts with Transactions . 4.7.1. Configuring the Server Caches in the server must also be transactional for clients to participate in JTA Transaction s. The following server configuration is required; otherwise, transactions can only roll back: Isolation level must be REPEATABLE_READ . PESSIMISTIC locking mode is recommended but OPTIMISTIC can be used. Transaction mode should be NON_XA or NON_DURABLE_XA . Hot Rod transactions should not use FULL_XA because it degrades performance. For example: <replicated-cache name="hotrodReplTx"> <locking isolation="REPEATABLE_READ"/> <transaction mode="NON_XA" locking="PESSIMISTIC"/> </replicated-cache> Hot Rod transactions have their own recovery mechanism. 4.7.2. Configuring Hot Rod Clients Transactional RemoteCache s are configured on a per-cache basis. The exception is the transaction's timeout which is global, because a single transaction can interact with multiple RemoteCache s. Note Embedded Data Grid supports pessimistic locks but Hot Rod clients do not. Therefore, the transaction result obtained from using pessimistic locks in the Data Grid server might differ from the result obtained from a Hot Rod client. The following example shows how to configure a transactional RemoteCache for cache my-cache : org.infinispan.client.hotrod.configuration.ConfigurationBuilder cb = new org.infinispan.client.hotrod.configuration.ConfigurationBuilder(); //other client configuration parameters cb.transactionTimeout(1, TimeUnit.MINUTES); cb.remoteCache("my-cache") .transactionManagerLookup(GenericTransactionManagerLookup.getInstance()) .transactionMode(TransactionMode.NON_XA); See ConfigurationBuilder and RemoteCacheConfigurationBuilder Javadoc for documentation on configuration parameters. You can also configure the Java Hot Rod client with a properties file, as in the following example: 4.7.2.1. TransactionManagerLookup Interface TransactionManagerLookup provides an entry point to fetch a TransactionManager . Available implementations of TransactionManagerLookup : GenericTransactionManagerLookup A lookup class that locates TransactionManager s running in Java EE application servers. Defaults to the RemoteTransactionManager if it cannot find a TransactionManager . This is the default for Hot Rod Java clients. Tip In most cases, GenericTransactionManagerLookup is suitable. However, you can implement the TransactionManagerLookup interface if you need to integrate a custom TransactionManager . RemoteTransactionManagerLookup A basic, and volatile, TransactionManager if no other implementation is available. Note that this implementation has significant limitations when handling concurrent transactions and recovery. 4.7.3.
Transaction Modes TransactionMode controls how a RemoteCache interacts with the TransactionManager . Important Configure transaction modes on both the Data Grid server and your client application. If clients attempt to perform transactional operations on non-transactional caches, runtime exceptions can occur. Transaction modes are the same in both the Data Grid configuration and client settings. Use the following modes with your client, see the Data Grid configuration schema for the server: NONE The RemoteCache does not interact with the TransactionManager . This is the default mode and is non-transactional. NON_XA The RemoteCache interacts with the TransactionManager via Synchronization . NON_DURABLE_XA The RemoteCache interacts with the TransactionManager via XAResource . Recovery capabilities are disabled. FULL_XA The RemoteCache interacts with the TransactionManager via XAResource . Recovery capabilities are enabled. Invoke the XaResource.recover() method to retrieve transactions to recover. 4.7.4. Detecting Conflicts with Transactions Transactions use the initial values of keys to detect conflicts. For example, "k" has a value of "v" when a transaction begins. During the prepare phase, the transaction fetches "k" from the server to read the value. If the value has changed, the transaction rolls back to avoid a conflict. Note Transactions use versions to detect changes instead of checking value equality. The forceReturnValue parameter controls write operations to the RemoteCache and helps avoid conflicts. It has the following values: If true , the TransactionManager fetches the most recent value from the server before performing write operations. However, the forceReturnValue parameter applies only to write operations that access the key for the first time. If false , the TransactionManager does not fetch the most recent value from the server before performing write operations. Note This parameter does not affect conditional write operations such as replace or putIfAbsent because they require the most recent value. The following transactions provide an example where the forceReturnValue parameter can prevent conflicting write operations: Transaction 1 (TX1) RemoteCache<String, String> cache = ... TransactionManager tm = ... tm.begin(); cache.put("k", "v1"); tm.commit(); Transaction 2 (TX2) RemoteCache<String, String> cache = ... TransactionManager tm = ... tm.begin(); cache.put("k", "v2"); tm.commit(); In this example, TX1 and TX2 are executed in parallel. The initial value of "k" is "v". If forceReturnValue = true , the cache.put() operation fetches the value for "k" from the server in both TX1 and TX2. The transaction that acquires the lock for "k" first then commits. The other transaction rolls back during the commit phase because the transaction can detect that "k" has a value other than "v". If forceReturnValue = false , the cache.put() operation does not fetch the value for "k" from the server and returns null. Both TX1 and TX2 can successfully commit, which results in a conflict. This occurs because neither transaction can detect that the initial value of "k" changed. The following transactions include cache.get() operations to read the value for "k" before doing the cache.put() operations: Transaction 1 (TX1) RemoteCache<String, String> cache = ... TransactionManager tm = ... tm.begin(); cache.get("k"); cache.put("k", "v1"); tm.commit(); Transaction 2 (TX2) RemoteCache<String, String> cache = ... TransactionManager tm = ... 
tm.begin(); cache.get("k"); cache.put("k", "v2"); tm.commit(); In the preceding examples, TX1 and TX2 both read the key so the forceReturnValue parameter does not take effect. One transaction commits, the other rolls back. However, the cache.get() operation requires an additional server request. If you do not need the return value for the cache.put() operation, that server request is inefficient. 4.7.5. Using the Configured Transaction Manager and Transaction Mode The following example shows how to use the TransactionManager and TransactionMode that you configure in the RemoteCacheManager : //Configure the transaction manager and transaction mode. org.infinispan.client.hotrod.configuration.ConfigurationBuilder cb = new org.infinispan.client.hotrod.configuration.ConfigurationBuilder(); cb.remoteCache("my-cache") .transactionManagerLookup(RemoteTransactionManagerLookup.getInstance()) .transactionMode(TransactionMode.NON_XA); RemoteCacheManager rcm = new RemoteCacheManager(cb.build()); //The my-cache instance uses the RemoteCacheManager configuration. RemoteCache<String, String> cache = rcm.getCache("my-cache"); //Return the transaction manager that the cache uses. TransactionManager tm = cache.getTransactionManager(); //Perform a simple transaction. tm.begin(); cache.put("k1", "v1"); System.out.println("K1 value is " + cache.get("k1")); tm.commit(); 4.8. Multimap API The MultimapCacheManager interface is the entry point to get a RemoteMultimapCache . Hot Rod clients can retrieve the MultimapCacheManager interface as in the following example: // create or obtain your RemoteCacheManager RemoteCacheManager manager = ...; // retrieve the MultimapCacheManager MultimapCacheManager multimapCacheManager = RemoteMultimapCacheManagerFactory.from(manager); // retrieve the RemoteMultimapCache RemoteMultimapCache<String, String> people = multimapCacheManager.get("people"); // add key - values people.put("coders", "Will"); people.put("coders", "Auri"); people.put("coders", "Pedro"); // retrieve single key with multiple values Collection<String> coders = people.get("coders").join(); | [
"try (CloseableIterator<Map.Entry<K, V>> iterator = remoteCache.entrySet().iterator()) { }",
"Set<K> keysCopy = remoteCache.keySet().stream().collect(Collectors.toSet());",
"boolean remove(Object key, Object value); boolean replace(Object key, Object value); boolean replace(Object key, Object oldValue, Object value);",
"// Retrieve all entries in batches of 1000 int batchSize = 1000; try (CloseableIterator<Entry<Object, Object>> iterator = remoteCache.retrieveEntries(null, batchSize)) { while(iterator.hasNext()) { // Do something } } // Filter by segment Set<Integer> segments = try (CloseableIterator<Entry<Object, Object>> iterator = remoteCache.retrieveEntries(null, segments, batchSize)) { while(iterator.hasNext()) { // Do something } } // Filter by custom filter try (CloseableIterator<Entry<Object, Object>> iterator = remoteCache.retrieveEntries(\"myFilterConverterFactory\", segments, batchSize)) { while(iterator.hasNext()) { // Do something } }",
"import java.io.Serializable; import org.infinispan.filter.AbstractKeyValueFilterConverter; import org.infinispan.filter.KeyValueFilterConverter; import org.infinispan.filter.KeyValueFilterConverterFactory; import org.infinispan.filter.NamedFactory; import org.infinispan.metadata.Metadata; //@NamedFactory annotation defines the factory name @NamedFactory(name = \"myFilterConverterFactory\") public class MyKeyValueFilterConverterFactory implements KeyValueFilterConverterFactory { @Override public KeyValueFilterConverter<String, SampleEntity1, SampleEntity2> getFilterConverter() { return new MyKeyValueFilterConverter(); } // Filter implementation. Should be serializable or externalizable for DIST caches static class MyKeyValueFilterConverter extends AbstractKeyValueFilterConverter<String, SampleEntity1, SampleEntity2> implements Serializable { @Override public SampleEntity2 filterAndConvert(String key, SampleEntity1 entity, Metadata metadata) { // returning null will case the entry to be filtered out // return SampleEntity2 will convert from the cache type SampleEntity1 } @Override public MediaType format() { // returns the MediaType that data should be presented to this converter. // When omitted, the server will use \"application/x-java-object\". // Returning null will cause the filter/converter to be done in the storage format. } } }",
"RemoteCacheManager remoteCacheManager = new RemoteCacheManager(); RemoteCache<String, String> remoteCache = remoteCacheManager.getCache(); remoteCache.put(\"car\", \"ferrari\"); VersionedValue valueBinary = remoteCache.getWithMetadata(\"car\"); assert remoteCache.remove(\"car\", valueBinary.getVersion()); assert !remoteCache.containsKey(\"car\");",
"StreamingRemoteCache<String> streamingCache = remoteCache.streaming(); OutputStream os = streamingCache.put(\"a_large_object\"); os.write(...); os.close();",
"StreamingRemoteCache<String> streamingCache = remoteCache.streaming(); InputStream is = streamingCache.get(\"a_large_object\"); for(int b = is.read(); b >= 0; b = is.read()) { // iterate } is.close();",
"StreamingRemoteCache<String> streamingCache = remoteCache.streaming(); InputStream is = streamingCache.get(\"a_large_object\"); long version = ((VersionedMetadata) is).getVersion(); for(int b = is.read(); b >= 0; b = is.read()) { // iterate } is.close();",
"// create or obtain your RemoteCacheManager RemoteCacheManager manager = ...; // retrieve the CounterManager CounterManager counterManager = RemoteCounterManagerFactory.asCounterManager(manager);",
"import org.infinispan.client.hotrod.annotation.*; import org.infinispan.client.hotrod.event.*; @ClientListener(converterFactoryName = \"static-converter\") public class EventPrintListener { @ClientCacheEntryCreated public void handleCreatedEvent(ClientCacheEntryCreatedEvent e) { System.out.println(e); } @ClientCacheEntryModified public void handleModifiedEvent(ClientCacheEntryModifiedEvent e) { System.out.println(e); } @ClientCacheEntryRemoved public void handleRemovedEvent(ClientCacheEntryRemovedEvent e) { System.out.println(e); } }",
"RemoteCache<?, ?> cache = cache.addClientListener(new EventPrintListener());",
"EventPrintListener listener = cache.removeClientListener(listener);",
"import org.infinispan.notifications.cachelistener.filter.CacheEventFilterFactory; import org.infinispan.filter.NamedFactory; @NamedFactory(name = \"static-filter\") public static class StaticCacheEventFilterFactory implements CacheEventFilterFactory { @Override public StaticCacheEventFilter getFilter(Object[] params) { return new StaticCacheEventFilter(); } } // Serializable, Externalizable or marshallable with Infinispan Externalizers // needed when running in a cluster class StaticCacheEventFilter implements CacheEventFilter<Integer, String>, Serializable { @Override public boolean accept(Integer key, String oldValue, Metadata oldMetadata, String newValue, Metadata newMetadata, EventType eventType) { if (key.equals(1)) // static key return true; return false; } }",
"@ClientListener(filterFactoryName = \"static-filter\") public class EventPrintListener { ... }",
"RemoteCache<?, ?> cache = cache.addClientListener(new EventPrintListener());",
"import org.infinispan.notifications.cachelistener.filter.CacheEventFilterFactory; import org.infinispan.notifications.cachelistener.filter.CacheEventFilter; class DynamicCacheEventFilterFactory implements CacheEventFilterFactory { @Override public CacheEventFilter<Integer, String> getFilter(Object[] params) { return new DynamicCacheEventFilter(params); } } // Serializable, Externalizable or marshallable with Infinispan Externalizers // needed when running in a cluster class DynamicCacheEventFilter implements CacheEventFilter<Integer, String>, Serializable { final Object[] params; DynamicCacheEventFilter(Object[] params) { this.params = params; } @Override public boolean accept(Integer key, String oldValue, Metadata oldMetadata, String newValue, Metadata newMetadata, EventType eventType) { if (key.equals(params[0])) // dynamic key return true; return false; } }",
"RemoteCache<?, ?> cache = cache.addClientListener(new EventPrintListener(), new Object[]{1}, null);",
"remoteCache.withFlags(Flag.SKIP_LISTENER_NOTIFICATION).put(1, \"one\");",
"import org.infinispan.notifications.cachelistener.filter.CacheEventConverterFactory; import org.infinispan.notifications.cachelistener.filter.CacheEventConverter; import org.infinispan.filter.NamedFactory; @NamedFactory(name = \"static-converter\") class StaticConverterFactory implements CacheEventConverterFactory { final CacheEventConverter<Integer, String, CustomEvent> staticConverter = new StaticCacheEventConverter(); public CacheEventConverter<Integer, String, CustomEvent> getConverter(final Object[] params) { return staticConverter; } } // Serializable, Externalizable or marshallable with Infinispan Externalizers // needed when running in a cluster class StaticCacheEventConverter implements CacheEventConverter<Integer, String, CustomEvent>, Serializable { public CustomEvent convert(Integer key, String oldValue, Metadata oldMetadata, String newValue, Metadata newMetadata, EventType eventType) { return new CustomEvent(key, newValue); } } // Needs to be Serializable, Externalizable or marshallable with Infinispan Externalizers // regardless of cluster or local caches static class CustomEvent implements Serializable { final Integer key; final String value; CustomEvent(Integer key, String value) { this.key = key; this.value = value; } }",
"import org.infinispan.client.hotrod.annotation.*; import org.infinispan.client.hotrod.event.*; @ClientListener public class CustomEventPrintListener { @ClientCacheEntryCreated @ClientCacheEntryModified @ClientCacheEntryRemoved public void handleCustomEvent(ClientCacheEntryCustomEvent<CustomEvent> e) { System.out.println(e); } }",
"@ClientListener(converterFactoryName = \"static-converter\") public class CustomEventPrintListener { ... }",
"RemoteCache<?, ?> cache = cache.addClientListener(new CustomEventPrintListener());",
"import org.infinispan.notifications.cachelistener.filter.CacheEventConverterFactory; import org.infinispan.notifications.cachelistener.filter.CacheEventConverter; @NamedFactory(name = \"dynamic-converter\") class DynamicCacheEventConverterFactory implements CacheEventConverterFactory { public CacheEventConverter<Integer, String, CustomEvent> getConverter(final Object[] params) { return new DynamicCacheEventConverter(params); } } // Serializable, Externalizable or marshallable with Infinispan Externalizers needed when running in a cluster class DynamicCacheEventConverter implements CacheEventConverter<Integer, String, CustomEvent>, Serializable { final Object[] params; DynamicCacheEventConverter(Object[] params) { this.params = params; } public CustomEvent convert(Integer key, String oldValue, Metadata oldMetadata, String newValue, Metadata newMetadata, EventType eventType) { // If the key matches a key given via parameter, only send the key information if (params[0].equals(key)) return new CustomEvent(key, null); return new CustomEvent(key, newValue); } }",
"RemoteCache<?, ?> cache = cache.addClientListener(new EventPrintListener(), null, new Object[]{1});",
"import org.infinispan.notifications.cachelistener.filter.CacheEventConverterFactory; import org.infinispan.notifications.cachelistener.filter.CacheEventConverter; @NamedFactory(name = \"dynamic-filter-converter\") class DynamicCacheEventFilterConverterFactory implements CacheEventFilterConverterFactory { public CacheEventFilterConverter<Integer, String, CustomEvent> getFilterConverter(final Object[] params) { return new DynamicCacheEventFilterConverter(params); } } // Serializable, Externalizable or marshallable with Infinispan Externalizers needed when running in a cluster // class DynamicCacheEventFilterConverter extends AbstractCacheEventFilterConverter<Integer, String, CustomEvent>, Serializable { final Object[] params; DynamicCacheEventFilterConverter(Object[] params) { this.params = params; } public CustomEvent filterAndConvert(Integer key, String oldValue, Metadata oldMetadata, String newValue, Metadata newMetadata, EventType eventType) { // If the key matches a key given via parameter, only send the key information if (params[0].equals(key)) return new CustomEvent(key, null); return new CustomEvent(key, newValue); } }",
"@ClientListener(filterFactoryName = \"dynamic-filter-converter\", converterFactoryName = \"dynamic-filter-converter\") public class CustomEventPrintListener { ... }",
"RemoteCache<?, ?> cache = cache.addClientListener(new CustomEventPrintListener(), new Object[]{1}, null);",
"@ClientCacheFailover public void handleFailover(ClientCacheFailoverEvent e) { }",
"<replicated-cache name=\"hotrodReplTx\"> <locking isolation=\"REPEATABLE_READ\"/> <transaction mode=\"NON_XA\" locking=\"PESSIMISTIC\"/> </replicated-cache>",
"org.infinispan.client.hotrod.configuration.ConfigurationBuilder cb = new org.infinispan.client.hotrod.configuration.ConfigurationBuilder(); //other client configuration parameters cb.transactionTimeout(1, TimeUnit.MINUTES); cb.remoteCache(\"my-cache\") .transactionManagerLookup(GenericTransactionManagerLookup.getInstance()) .transactionMode(TransactionMode.NON_XA);",
"infinispan.client.hotrod.cache.my-cache.transaction.transaction_manager_lookup = org.infinispan.client.hotrod.transaction.lookup.GenericTransactionManagerLookup infinispan.client.hotrod.cache.my-cache.transaction.transaction_mode = NON_XA infinispan.client.hotrod.transaction.timeout = 60000",
"RemoteCache<String, String> cache = TransactionManager tm = tm.begin(); cache.put(\"k\", \"v1\"); tm.commit();",
"RemoteCache<String, String> cache = TransactionManager tm = tm.begin(); cache.put(\"k\", \"v2\"); tm.commit();",
"RemoteCache<String, String> cache = TransactionManager tm = tm.begin(); cache.get(\"k\"); cache.put(\"k\", \"v1\"); tm.commit();",
"RemoteCache<String, String> cache = TransactionManager tm = tm.begin(); cache.get(\"k\"); cache.put(\"k\", \"v2\"); tm.commit();",
"//Configure the transaction manager and transaction mode. org.infinispan.client.hotrod.configuration.ConfigurationBuilder cb = new org.infinispan.client.hotrod.configuration.ConfigurationBuilder(); cb.remoteCache(\"my-cache\") .transactionManagerLookup(RemoteTransactionManagerLookup.getInstance()) .transactionMode(TransactionMode.NON_XA); RemoteCacheManager rcm = new RemoteCacheManager(cb.build()); //The my-cache instance uses the RemoteCacheManager configuration. RemoteCache<String, String> cache = rcm.getCache(\"my-cache\"); //Return the transaction manager that the cache uses. TransactionManager tm = cache.getTransactionManager(); //Perform a simple transaction. tm.begin(); cache.put(\"k1\", \"v1\"); System.out.println(\"K1 value is \" + cache.get(\"k1\")); tm.commit();",
"// create or obtain your RemoteCacheManager RemoteCacheManager manager = ...; // retrieve the MultimapCacheManager MultimapCacheManager multimapCacheManager = RemoteMultimapCacheManagerFactory.from(manager); // retrieve the RemoteMultimapCache RemoteMultimapCache<Integer, String> people = multimapCacheManager.get(\"people\"); // add key - values people.put(\"coders\", \"Will\"); people.put(\"coders\", \"Auri\"); people.put(\"coders\", \"Pedro\"); // retrieve single key with multiple values Collection<String> coders = people.get(\"coders\").join();"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/hot_rod_java_client_guide/hotrod-client-api_hot_rod |
Chapter 17. Inviting users to your RHACS instance | Chapter 17. Inviting users to your RHACS instance By inviting users to Red Hat Advanced Cluster Security for Kubernetes (RHACS), you can ensure that the right users have the appropriate access rights within your cluster. You can invite one or more users by assigning roles and defining the authentication provider. 17.1. Configuring access control and sending invitations By configuring access control in the RHACS portal, you can invite users to your RHACS instance. Procedure In the RHACS portal, go to the Platform Configuration Access Control Auth providers tab, and then click Invite users . In the Invite users dialog box, provide the following information: Emails to invite : Enter one or more email addresses of the users you want to invite. Ensure that they are valid email addresses associated with the intended recipients. Provider : From the drop-down list, select a provider you want to use for each invited user. Important If you have only one authentication provider available, it is selected by default. If multiple authentication providers are available and at least one of them is Red Hat SSO or Default Internal SSO , that provider is selected by default. If multiple authentication providers are available, but none of them is Red Hat SSO or Default Internal SSO , you are prompted to select one manually. If you have not yet set up an authentication provider, a warning message appears and the form is disabled. Click the link, which takes you to the Access Control section to configure an authentication provider. Role : From the drop-down list, select the role to assign to each invited user. Click Invite users . On the confirmation dialog box, you receive a confirmation that the users have been created with the selected role. Copy the one or more email addresses and the message into an email that you create in your own email client, and send it to the users. Click Done . Verification In the RHACS portal, go to the Platform Configuration Access Control Auth providers tab. Select the authentication provider you used to invite users. Scroll down to the Rules section. Verify that the user emails and authentication provider roles have been added to the list. | null | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/configuring/inviting-users-to-your-rhacs-instance |
Applications | Applications Red Hat Advanced Cluster Management for Kubernetes 2.11 Application management | [
"apiVersion: argoproj.io/v1alpha1 kind: ApplicationSet metadata: name: sample-application-set namespace: sample-gitops-namespace spec: generators: - clusterDecisionResource: configMapRef: acm-placement labelSelector: matchLabels: cluster.open-cluster-management.io/placement: sample-application-placement requeueAfterSeconds: 180 template: metadata: name: sample-application-{{name}} spec: project: default sources: [ { repoURL: https://github.com/sampleapp/apprepo.git targetRevision: main path: sample-application } ] destination: namespace: sample-application server: \"{{server}}\" syncPolicy: syncOptions: - CreateNamespace=true - PruneLast=true - Replace=true - ApplyOutOfSyncOnly=true - Validate=false automated: prune: true allowEmpty: true selfHeal: true",
"apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: name: sample-application-placement namespace: sample-gitops-namespace spec: clusterSets: - sampleclusterset",
"apiVersion: apps.open-cluster-management.io/v1alpha1 kind: SubscriptionStatus metadata: labels: apps.open-cluster-management.io/cluster: <your-managed-cluster> apps.open-cluster-management.io/hosting-subscription: <your-appsub-namespace>.<your-appsub-name> name: <your-appsub-name> namespace: <your-appsub-namespace> statuses: packages: - apiVersion: v1 kind: Service lastUpdateTime: \"2021-09-13T20:12:34Z\" Message: <detailed error. visible only if the package fails> name: frontend namespace: test-ns-2 phase: Deployed - apiVersion: apps/v1 kind: Deployment lastUpdateTime: \"2021-09-13T20:12:34Z\" name: frontend namespace: test-ns-2 phase: Deployed - apiVersion: v1 kind: Service lastUpdateTime: \"2021-09-13T20:12:34Z\" name: redis-master namespace: test-ns-2 phase: Deployed - apiVersion: apps/v1 kind: Deployment lastUpdateTime: \"2021-09-13T20:12:34Z\" name: redis-master namespace: test-ns-2 phase: Deployed - apiVersion: v1 kind: Service lastUpdateTime: \"2021-09-13T20:12:34Z\" name: redis-slave namespace: test-ns-2 phase: Deployed - apiVersion: apps/v1 kind: Deployment lastUpdateTime: \"2021-09-13T20:12:34Z\" name: redis-slave namespace: test-ns-2 phase: Deployed subscription: lastUpdateTime: \"2021-09-13T20:12:34Z\" phase: Deployed",
"apiVersion: apps.open-cluster-management.io/v1alpha1 kind: subscriptionReport metadata: labels: apps.open-cluster-management.io/cluster: \"true\" name: <your-managed-cluster-1> namespace: <your-managed-cluster-1> reportType: Cluster results: - result: deployed source: appsub-1-ns/appsub-1 // appsub 1 to <your-managed-cluster-1> timestamp: nanos: 0 seconds: 1634137362 - result: failed source: appsub-2-ns/appsub-2 // appsub 2 to <your-managed-cluster-1> timestamp: nanos: 0 seconds: 1634137362 - result: propagationFailed source: appsub-3-ns/appsub-3 // appsub 3 to <your-managed-cluster-1> timestamp: nanos: 0 seconds: 1634137362",
"apiVersion: apps.open-cluster-management.io/v1alpha1 kind: subscriptionReport metadata: labels: apps.open-cluster-management.io/hosting-subscription: <your-appsub-namespace>.<your-appsub-name> name: <your-appsub-name> namespace: <your-appsub-namespace> reportType: Application resources: - apiVersion: v1 kind: Service name: redis-master2 namespace: playback-ns-2 - apiVersion: apps/v1 kind: Deployment name: redis-master2 namespace: playback-ns-2 - apiVersion: v1 kind: Service name: redis-slave2 namespace: playback-ns-2 - apiVersion: apps/v1 kind: Deployment name: redis-slave2 namespace: playback-ns-2 - apiVersion: v1 kind: Service name: frontend2 namespace: playback-ns-2 - apiVersion: apps/v1 kind: Deployment name: frontend2 namespace: playback-ns-2 results: - result: deployed source: cluster-1 //cluster 1 status timestamp: nanos: 0 seconds: 0 - result: failed source: cluster-3 //cluster 2 status timestamp: nanos: 0 seconds: 0 - result: propagationFailed source: cluster-4 //cluster 3 status timestamp: nanos: 0 seconds: 0 summary: deployed: 8 failed: 1 inProgress: 0 propagationFailed: 1 clusters: 10",
"% oc get managedclusterview -n <failing-clusternamespace> \"<app-name>-<app name>\"",
"% getAppSubStatus.sh -c <your-managed-cluster> -s <your-appsub-namespace> -n <your-appsub-name>",
"% getLastUpdateTime.sh -c <your-managed-cluster> -s <your-appsub-namespace> -n <your-appsub-name>",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: annotations: apps.open-cluster-management.io/do-not-delete: 'true' apps.open-cluster-management.io/hosting-subscription: sub-ns/subscription-example apps.open-cluster-management.io/reconcile-option: merge pv.kubernetes.io/bind-completed: \"yes\"",
"apiVersion: v1 kind: Namespace metadata: name: hub-repo --- apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: helm namespace: hub-repo spec: pathname: [https://kubernetes-charts.storage.googleapis.com/] # URL references a valid chart URL. type: HelmRepo",
"apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: predev-ch namespace: ns-ch labels: app: nginx-app-details spec: type: HelmRepo pathname: https://kubernetes-charts.storage.googleapis.com/",
"apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: dev namespace: ch-obj spec: type: Object storage pathname: [http://sample-ip:#####/dev] # URL is appended with the valid bucket name, which matches the channel name. secretRef: name: miniosecret gates: annotations: dev-ready: true",
"https://s3.console.aws.amazon.com/s3/buckets/sample-bucket-1 s3://sample-bucket-1/ https://sample-bucket-1.s3.amazonaws.com/",
"apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: object-dev namespace: ch-object-dev spec: type: ObjectBucket pathname: https://s3.console.aws.amazon.com/s3/buckets/sample-bucket-1 secretRef: name: secret-dev --- apiVersion: v1 kind: Secret metadata: name: secret-dev namespace: ch-object-dev stringData: AccessKeyID: <your AWS bucket access key id> SecretAccessKey: <your AWS bucket secret access key> Region: <your AWS bucket region> type: Opaque",
"apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: towhichcluster namespace: obj-sub-ns spec: clusterSelector: {} --- apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: obj-sub namespace: obj-sub-ns spec: channel: ch-object-dev/object-dev placement: placementRef: kind: PlacementRule name: towhichcluster",
"annotations: apps.open-cluster-management.io/bucket-path: <subfolder-1>",
"apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: annotations: apps.open-cluster-management.io/bucket-path: subfolder1 name: obj-sub namespace: obj-sub-ns labels: name: obj-sub spec: channel: ch-object-dev/object-dev placement: placementRef: kind: PlacementRule name: towhichcluster",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: annotations: apps.open-cluster-management.io/do-not-delete: 'true' apps.open-cluster-management.io/hosting-subscription: sub-ns/subscription-example apps.open-cluster-management.io/reconcile-option: merge pv.kubernetes.io/bind-completed: \"yes\"",
"apiVersion: v1 kind: Secret metadata: name: toweraccess namespace: same-as-subscription type: Opaque stringData: token: ansible-tower-api-token host: https://ansible-tower-host-url",
"apply -f",
"apiVersion: tower.ansible.com/v1alpha1 kind: AnsibleJob metadata: name: demo-job-001 namespace: default spec: tower_auth_secret: toweraccess job_template_name: Demo Job Template extra_vars: cost: 6.88 ghosts: [\"inky\",\"pinky\",\"clyde\",\"sue\"] is_enable: false other_variable: foo pacman: mrs size: 8 targets_list: - aaa - bbb - ccc version: 1.23.45 job_tags: \"provision,install,configuration\" skip_tags: \"configuration,restart\"",
"apiVersion: tower.ansible.com/v1alpha1 kind: AnsibleJob metadata: name: demo-job-001 namespace: default spec: tower_auth_secret: toweraccess workflow_template_name: Demo Workflow Template extra_vars: cost: 6.88 ghosts: [\"inky\",\"pinky\",\"clyde\",\"sue\"] is_enable: false other_variable: foo pacman: mrs size: 8 targets_list: - aaa - bbb - ccc version: 1.23.45",
"apiVersion: `image.openshift.io/v1` kind: ImageStream metadata: name: default namespace: default spec: lookupPolicy: local: true tags: - name: 'latest' from: kind: DockerImage name: 'quay.io/repository/open-cluster-management/multicluster-operators-subscription:community-latest'",
"--- apiVersion: v1 kind: Namespace metadata: name: multins --- apiVersion: v1 kind: ConfigMap metadata: name: test-configmap-1 namespace: multins data: path: resource1 --- apiVersion: v1 kind: ConfigMap metadata: name: test-configmap-2 namespace: default data: path: resource2 --- apiVersion: v1 kind: ConfigMap metadata: name: test-configmap-3 data: path: resource3",
"apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: subscription-example namespace: subscription-ns annotations: apps.open-cluster-management.io/git-path: sample-resources apps.open-cluster-management.io/reconcile-option: merge apps.open-cluster-management.io/current-namespace-scoped: \"true\" spec: channel: channel-ns/somechannel placement: placementRef: name: dev-clusters",
"apiVersion: v1 kind: ConfigMap metadata: name: test-configmap-1 namespace: sub-ns data: name: user1 age: 19",
"apiVersion: v1 kind: ConfigMap metadata: name: test-configmap-1 namespace: sub-ns data: age: 20",
"apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: subscription-example namespace: sub-ns annotations: apps.open-cluster-management.io/git-path: sample-resources apps.open-cluster-management.io/reconcile-option: merge spec: channel: channel-ns/somechannel placement: placementRef: name: dev-clusters",
"apiVersion: v1 kind: ConfigMap metadata: name: test-configmap-1 namespace: sub-ns data: name: user1 age: 20",
"apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: subscription-example namespace: sub-ns annotations: apps.open-cluster-management.io/git-path: sample-resources apps.open-cluster-management.io/reconcile-option: mergeAndOwn spec: channel: channel-ns/somechannel placement: placementRef: name: dev-clusters",
"apiVersion: v1 kind: ConfigMap metadata: name: test-configmap-1 namespace: sub-ns annotations: apps.open-cluster-management.io/hosting-subscription: sub-ns/subscription-example data: name: user1 age: 20",
"apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: subscription-example namespace: sub-ns annotations: apps.open-cluster-management.io/git-path: sample-resources apps.open-cluster-management.io/reconcile-option: replace spec: channel: channel-ns/somechannel placement: placementRef: name: dev-clusters",
"apiVersion: v1 kind: ConfigMap metadata: name: test-configmap-1 namespace: sub-ns data: age: 20",
"apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: git-mongodb-subscription annotations: apps.open-cluster-management.io/git-path: stable/ibm-mongodb-dev apps.open-cluster-management.io/git-branch: <branch1>",
"apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: git-mongodb-subscription annotations: apps.open-cluster-management.io/git-path: stable/ibm-mongodb-dev apps.open-cluster-management.io/git-desired-commit: <full commit number> apps.open-cluster-management.io/git-clone-depth: 100",
"apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: git-mongodb-subscription annotations: apps.open-cluster-management.io/git-path: stable/ibm-mongodb-dev apps.open-cluster-management.io/git-tag: <v1.0> apps.open-cluster-management.io/git-clone-depth: 100",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: open-cluster-management:subscription-admin roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: open-cluster-management:subscription-admin",
"edit clusterrolebinding open-cluster-management:subscription-admin",
"subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: example-name - apiGroup: rbac.authorization.k8s.io kind: Group name: example-group-name - kind: ServiceAccount name: my-service-account namespace: my-service-account-namespace - apiGroup: rbac.authorization.k8s.io kind: User name: 'system:serviceaccount:my-service-account-namespace:my-service-account'",
"apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: annotations: apps.open-cluster-management.io/github-path: sub2 name: demo-subscription namespace: demo-ns spec: channel: demo-ns/somechannel allow: - apiVersion: policy.open-cluster-management.io/v1 kinds: - Policy - apiVersion: v1 kinds: - Deployment deny: - apiVersion: v1 kinds: - Service - ConfigMap placement: local: true",
"apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: annotations: apps.open-cluster-management.io/github-path: myapplication name: demo-subscription namespace: demo-ns spec: channel: demo-ns/somechannel deny: - apiVersion: v1 kinds: - Service - ConfigMap placement: placementRef: name: demo-placement kind: Placement",
"apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: git-channel namespace: sample annotations: apps.open-cluster-management.io/reconcile-rate: <value from the list> spec: type: GitHub pathname: <Git URL> --- apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: git-subscription annotations: apps.open-cluster-management.io/git-path: <application1> apps.open-cluster-management.io/git-branch: <branch1> spec: channel: sample/git-channel placement: local: true",
"apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: git-channel namespace: sample annotations: apps.open-cluster-management.io/reconcile-rate: high spec: type: GitHub pathname: <Git URL> --- apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: git-subscription annotations: apps.open-cluster-management.io/git-path: application1 apps.open-cluster-management.io/git-branch: branch1 apps.open-cluster-management.io/reconcile-rate: \"off\" spec: channel: sample/git-channel placement: local: true",
"apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: helm-channel namespace: sample annotations: apps.open-cluster-management.io/reconcile-rate: low spec: type: HelmRepo pathname: <Helm repo URL> --- apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: helm-subscription spec: channel: sample/helm-channel name: nginx-ingress packageOverrides: - packageName: nginx-ingress packageAlias: nginx-ingress-simple packageOverrides: - path: spec value: defaultBackend: replicaCount: 3 placement: local: true",
"apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: helm-channel namespace: sample annotations: apps.open-cluster-management.io/reconcile-rate: high spec: type: HelmRepo pathname: <Helm repo URL> --- apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: helm-subscription annotations: apps.open-cluster-management.io/reconcile-rate: \"off\" spec: channel: sample/helm-channel name: nginx-ingress packageOverrides: - packageName: nginx-ingress packageAlias: nginx-ingress-simple packageOverrides: - path: spec value: defaultBackend: replicaCount: 3 placement: local: true",
"annotate mch -n open-cluster-management multiclusterhub mch-pause=true --overwrite=true",
"edit deployment -n open-cluster-management multicluster-operators-hub-subscription",
"annotate mch -n open-cluster-management multiclusterhub mch-pause=false --overwrite=true",
"command: - /usr/local/bin/multicluster-operators-subscription - --sync-interval=60 - --retry-period=52",
"apiVersion: v1 kind: Secret metadata: name: my-git-secret namespace: channel-ns data: user: dXNlcgo= accessToken: cGFzc3dvcmQK",
"apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: sample-channel namespace: channel-ns spec: type: Git pathname: <Git HTTPS URL> secretRef: name: my-git-secret",
"x509: certificate is valid for localhost.com, not localhost",
"apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: labels: name: sample-channel namespace: sample spec: type: GitHub pathname: <Git HTTPS URL> insecureSkipVerify: true",
"apiVersion: v1 kind: ConfigMap metadata: name: git-ca namespace: channel-ns data: caCerts: | # Git server root CA -----BEGIN CERTIFICATE----- MIIF5DCCA8wCCQDInYMol7LSDTANBgkqhkiG9w0BAQsFADCBszELMAkGA1UEBhMC Q0ExCzAJBgNVBAgMAk9OMRAwDgYDVQQHDAdUb3JvbnRvMQ8wDQYDVQQKDAZSZWRI YXQxDDAKBgNVBAsMA0FDTTFFMEMGA1UEAww8Z29ncy1zdmMtZGVmYXVsdC5hcHBz LnJqdW5nLWh1YjEzLmRldjA2LnJlZC1jaGVzdGVyZmllbGQuY29tMR8wHQYJKoZI hvcNAQkBFhByb2tlakByZWRoYXQuY29tMB4XDTIwMTIwMzE4NTMxMloXDTIzMDky MzE4NTMxMlowgbMxCzAJBgNVBAYTAkNBMQswCQYDVQQIDAJPTjEQMA4GA1UEBwwH VG9yb250bzEPMA0GA1UECgwGUmVkSGF0MQwwCgYDVQQLDANBQ00xRTBDBgNVBAMM PGdvZ3Mtc3ZjLWRlZmF1bHQuYXBwcy5yanVuZy1odWIxMy5kZXYwNi5yZWQtY2hl c3RlcmZpZWxkLmNvbTEfMB0GCSqGSIb3DQEJARYQcm9rZWpAcmVkaGF0LmNvbTCC AiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAM3nPK4mOQzaDAo6S3ZJ0Ic3 U9p/NLodnoTIC+cn0q8qNCAjf13zbGB3bfN9Zxl8Q5fv+wYwHrUOReCp6U/InyQy 6OS3gj738F635inz1KdyhKtlWW2p9Ye9DUtx1IlfHkDVdXtynjHQbsFNIdRHcpQP upM5pwPC3BZXqvXChhlfAy2m4yu7vy0hO/oTzWIwNsoL5xt0Lw4mSyhlEip/t8lU xn2y8qhm7MiIUpXuwWhSYgCrEVqmTcB70Pc2YRZdSFolMN9Et70MjQN0TXjoktH8 PyASJIKIRd+48yROIbUn8rj4aYYBsJuoSCjJNwujZPbqseqUr42+v+Qp2bBj1Sjw +SEZfHTvSv8AqX0T6eo6njr578+DgYlwsS1A1zcAdzp8qmDGqvJDzwcnQVFmvaoM gGHCdJihfy3vDhxuZRDse0V4Pz6tl6iklM+tHrJL/bdL0NdfJXNCqn2nKrM51fpw diNXs4Zn3QSStC2x2hKnK+Q1rwCSEg/lBawgxGUslTboFH77a+Kwu4Oug9ibtm5z ISs/JY4Kiy4C2XJOltOR2XZYkdKaX4x3ctbrGaD8Bj+QHiSAxaaSXIX+VbzkHF2N aD5ijFUopjQEKFrYh3O93DB/URIQ+wHVa6+Kvu3uqE0cg6pQsLpbFVQ/I8xHvt9L kYy6z6V/nj9ZYKQbq/kPAgMBAAEwDQYJKoZIhvcNAQELBQADggIBAKZuc+lewYAv jaaSeRDRoToTb/yN0Xsi69UfK0aBdvhCa7/0rPHcv8hmUBH3YgkZ+CSA5ygajtL4 g2E8CwIO9ZjZ6l+pHCuqmNYoX1wdjaaDXlpwk8hGTSgy1LsOoYrC5ZysCi9Jilu9 PQVGs/vehQRqLV9uZBigG6oZqdUqEimaLHrOcEAHB5RVcnFurz0qNbT+UySjsD63 9yJdCeQbeKAR9SC4hG13EbM/RZh0lgFupkmGts7QYULzT+oA0cCJpPLQl6m6qGyE kh9aBB7FLykK1TeXVuANlNU4EMyJ/e+uhNkS9ubNJ3vuRuo+ECHsha058yi16JC9 NkZqP+df4Hp85sd+xhrgYieq7QGX2KOXAjqAWo9htoBhOyW3mm783A7WcOiBMQv0 2UGZxMsRjlP6UqB08LsV5ZBAefElR344sokJR1de/Sx2J9J/am7yOoqbtKpQotIA XSUkATuuQw4ctyZLDkUpzrDzgd2Bt+aawF6sD2YqycaGFwv2YD9t1YlD6F4Wh8Mc 20Qu5EGrkQTCWZ9pOHNSa7YQdmJzwbxJC4hqBpBRAJFI2fAIqFtyum6/8ZN9nZ9K FSEKdlu+xeb6Y6xYt0mJJWF6mCRi4i7IL74EU/VNXwFmfP6IadliUOST3w5t92cB M26t73UCExXMXTCQvnp0ki84PeR1kRk4 -----END CERTIFICATE----- # Git server intermediate CA 1 -----BEGIN CERTIFICATE----- MIIF5DCCA8wCCQDInYMol7LSDTANBgkqhkiG9w0BAQsFADCBszELMAkGA1UEBhMC Q0ExCzAJBgNVBAgMAk9OMRAwDgYDVQQHDAdUb3JvbnRvMQ8wDQYDVQQKDAZSZWRI YXQxDDAKBgNVBAsMA0FDTTFFMEMGA1UEAww8Z29ncy1zdmMtZGVmYXVsdC5hcHBz LnJqdW5nLWh1YjEzLmRldjA2LnJlZC1jaGVzdGVyZmllbGQuY29tMR8wHQYJKoZI hvcNAQkBFhByb2tlakByZWRoYXQuY29tMB4XDTIwMTIwMzE4NTMxMloXDTIzMDky MzE4NTMxMlowgbMxCzAJBgNVBAYTAkNBMQswCQYDVQQIDAJPTjEQMA4GA1UEBwwH VG9yb250bzEPMA0GA1UECgwGUmVkSGF0MQwwCgYDVQQLDANBQ00xRTBDBgNVBAMM PGdvZ3Mtc3ZjLWRlZmF1bHQuYXBwcy5yanVuZy1odWIxMy5kZXYwNi5yZWQtY2hl c3RlcmZpZWxkLmNvbTEfMB0GCSqGSIb3DQEJARYQcm9rZWpAcmVkaGF0LmNvbTCC AiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAM3nPK4mOQzaDAo6S3ZJ0Ic3 U9p/NLodnoTIC+cn0q8qNCAjf13zbGB3bfN9Zxl8Q5fv+wYwHrUOReCp6U/InyQy 6OS3gj738F635inz1KdyhKtlWW2p9Ye9DUtx1IlfHkDVdXtynjHQbsFNIdRHcpQP upM5pwPC3BZXqvXChhlfAy2m4yu7vy0hO/oTzWIwNsoL5xt0Lw4mSyhlEip/t8lU xn2y8qhm7MiIUpXuwWhSYgCrEVqmTcB70Pc2YRZdSFolMN9Et70MjQN0TXjoktH8 PyASJIKIRd+48yROIbUn8rj4aYYBsJuoSCjJNwujZPbqseqUr42+v+Qp2bBj1Sjw +SEZfHTvSv8AqX0T6eo6njr578+DgYlwsS1A1zcAdzp8qmDGqvJDzwcnQVFmvaoM gGHCdJihfy3vDhxuZRDse0V4Pz6tl6iklM+tHrJL/bdL0NdfJXNCqn2nKrM51fpw diNXs4Zn3QSStC2x2hKnK+Q1rwCSEg/lBawgxGUslTboFH77a+Kwu4Oug9ibtm5z ISs/JY4Kiy4C2XJOltOR2XZYkdKaX4x3ctbrGaD8Bj+QHiSAxaaSXIX+VbzkHF2N 
aD5ijFUopjQEKFrYh3O93DB/URIQ+wHVa6+Kvu3uqE0cg6pQsLpbFVQ/I8xHvt9L kYy6z6V/nj9ZYKQbq/kPAgMBAAEwDQYJKoZIhvcNAQELBQADggIBAKZuc+lewYAv jaaSeRDRoToTb/yN0Xsi69UfK0aBdvhCa7/0rPHcv8hmUBH3YgkZ+CSA5ygajtL4 g2E8CwIO9ZjZ6l+pHCuqmNYoX1wdjaaDXlpwk8hGTSgy1LsOoYrC5ZysCi9Jilu9 PQVGs/vehQRqLV9uZBigG6oZqdUqEimaLHrOcEAHB5RVcnFurz0qNbT+UySjsD63 9yJdCeQbeKAR9SC4hG13EbM/RZh0lgFupkmGts7QYULzT+oA0cCJpPLQl6m6qGyE kh9aBB7FLykK1TeXVuANlNU4EMyJ/e+uhNkS9ubNJ3vuRuo+ECHsha058yi16JC9 NkZqP+df4Hp85sd+xhrgYieq7QGX2KOXAjqAWo9htoBhOyW3mm783A7WcOiBMQv0 2UGZxMsRjlP6UqB08LsV5ZBAefElR344sokJR1de/Sx2J9J/am7yOoqbtKpQotIA XSUkATuuQw4ctyZLDkUpzrDzgd2Bt+aawF6sD2YqycaGFwv2YD9t1YlD6F4Wh8Mc 20Qu5EGrkQTCWZ9pOHNSa7YQdmJzwbxJC4hqBpBRAJFI2fAIqFtyum6/8ZN9nZ9K FSEKdlu+xeb6Y6xYt0mJJWF6mCRi4i7IL74EU/VNXwFmfP6IadliUOST3w5t92cB M26t73UCExXMXTCQvnp0ki84PeR1kRk4 -----END CERTIFICATE----- # Git server intermediate CA 2 -----BEGIN CERTIFICATE----- MIIF5DCCA8wCCQDInYMol7LSDTANBgkqhkiG9w0BAQsFADCBszELMAkGA1UEBhMC Q0ExCzAJBgNVBAgMAk9OMRAwDgYDVQQHDAdUb3JvbnRvMQ8wDQYDVQQKDAZSZWRI YXQxDDAKBgNVBAsMA0FDTTFFMEMGA1UEAww8Z29ncy1zdmMtZGVmYXVsdC5hcHBz LnJqdW5nLWh1YjEzLmRldjA2LnJlZC1jaGVzdGVyZmllbGQuY29tMR8wHQYJKoZI hvcNAQkBFhByb2tlakByZWRoYXQuY29tMB4XDTIwMTIwMzE4NTMxMloXDTIzMDky MzE4NTMxMlowgbMxCzAJBgNVBAYTAkNBMQswCQYDVQQIDAJPTjEQMA4GA1UEBwwH VG9yb250bzEPMA0GA1UECgwGUmVkSGF0MQwwCgYDVQQLDANBQ00xRTBDBgNVBAMM PGdvZ3Mtc3ZjLWRlZmF1bHQuYXBwcy5yanVuZy1odWIxMy5kZXYwNi5yZWQtY2hl c3RlcmZpZWxkLmNvbTEfMB0GCSqGSIb3DQEJARYQcm9rZWpAcmVkaGF0LmNvbTCC AiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAM3nPK4mOQzaDAo6S3ZJ0Ic3 U9p/NLodnoTIC+cn0q8qNCAjf13zbGB3bfN9Zxl8Q5fv+wYwHrUOReCp6U/InyQy 6OS3gj738F635inz1KdyhKtlWW2p9Ye9DUtx1IlfHkDVdXtynjHQbsFNIdRHcpQP upM5pwPC3BZXqvXChhlfAy2m4yu7vy0hO/oTzWIwNsoL5xt0Lw4mSyhlEip/t8lU xn2y8qhm7MiIUpXuwWhSYgCrEVqmTcB70Pc2YRZdSFolMN9Et70MjQN0TXjoktH8 PyASJIKIRd+48yROIbUn8rj4aYYBsJuoSCjJNwujZPbqseqUr42+v+Qp2bBj1Sjw +SEZfHTvSv8AqX0T6eo6njr578+DgYlwsS1A1zcAdzp8qmDGqvJDzwcnQVFmvaoM gGHCdJihfy3vDhxuZRDse0V4Pz6tl6iklM+tHrJL/bdL0NdfJXNCqn2nKrM51fpw diNXs4Zn3QSStC2x2hKnK+Q1rwCSEg/lBawgxGUslTboFH77a+Kwu4Oug9ibtm5z ISs/JY4Kiy4C2XJOltOR2XZYkdKaX4x3ctbrGaD8Bj+QHiSAxaaSXIX+VbzkHF2N aD5ijFUopjQEKFrYh3O93DB/URIQ+wHVa6+Kvu3uqE0cg6pQsLpbFVQ/I8xHvt9L kYy6z6V/nj9ZYKQbq/kPAgMBAAEwDQYJKoZIhvcNAQELBQADggIBAKZuc+lewYAv jaaSeRDRoToTb/yN0Xsi69UfK0aBdvhCa7/0rPHcv8hmUBH3YgkZ+CSA5ygajtL4 g2E8CwIO9ZjZ6l+pHCuqmNYoX1wdjaaDXlpwk8hGTSgy1LsOoYrC5ZysCi9Jilu9 PQVGs/vehQRqLV9uZBigG6oZqdUqEimaLHrOcEAHB5RVcnFurz0qNbT+UySjsD63 9yJdCeQbeKAR9SC4hG13EbM/RZh0lgFupkmGts7QYULzT+oA0cCJpPLQl6m6qGyE kh9aBB7FLykK1TeXVuANlNU4EMyJ/e+uhNkS9ubNJ3vuRuo+ECHsha058yi16JC9 NkZqP+df4Hp85sd+xhrgYieq7QGX2KOXAjqAWo9htoBhOyW3mm783A7WcOiBMQv0 2UGZxMsRjlP6UqB08LsV5ZBAefElR344sokJR1de/Sx2J9J/am7yOoqbtKpQotIA XSUkATuuQw4ctyZLDkUpzrDzgd2Bt+aawF6sD2YqycaGFwv2YD9t1YlD6F4Wh8Mc 20Qu5EGrkQTCWZ9pOHNSa7YQdmJzwbxJC4hqBpBRAJFI2fAIqFtyum6/8ZN9nZ9K FSEKdlu+xeb6Y6xYt0mJJWF6mCRi4i7IL74EU/VNXwFmfP6IadliUOST3w5t92cB M26t73UCExXMXTCQvnp0ki84PeR1kRk4 -----END CERTIFICATE-----",
"apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: my-channel namespace: channel-ns spec: configMapRef: name: git-ca pathname: <Git HTTPS URL> type: Git",
"apiVersion: v1 kind: Secret metadata: name: git-ssh-key namespace: channel-ns data: sshKey: LS0tLS1CRUdJTiBPUEVOU1NIIFBSSVZBVEUgS0VZLS0tLS0KYjNCbGJuTnphQzFyWlhrdGRqRUFBQUFBQ21GbGN6STFOaTFqZEhJQUFBQUdZbU55ZVhCMEFBQUFHQUFBQUJDK3YySHhWSIwCm8zejh1endzV3NWODMvSFVkOEtGeVBmWk5OeE5TQUgcFA3Yk1yR2tlRFFPd3J6MGIKOUlRM0tKVXQzWEE0Zmd6NVlrVFVhcTJsZWxxVk1HcXI2WHF2UVJ5Mkc0NkRlRVlYUGpabVZMcGVuaGtRYU5HYmpaMmZOdQpWUGpiOVhZRmd4bTNnYUpJU3BNeTFLWjQ5MzJvOFByaDZEdzRYVUF1a28wZGdBaDdndVpPaE53b0pVYnNmYlZRc0xMS1RrCnQwblZ1anRvd2NEVGx4TlpIUjcwbGVUSHdGQTYwekM0elpMNkRPc3RMYjV2LzZhMjFHRlMwVmVXQ3YvMlpMOE1sbjVUZWwKSytoUWtxRnJBL3BUc1ozVXNjSG1GUi9PV25FPQotLS0tLUVORCBPUEVOU1NIIFBSSVZBVEUgS0VZLS0tLS0K passphrase: cGFzc3cwcmQK type: Opaque",
"apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: my-channel namespace: channel-ns spec: secretRef: name: git-ssh-key pathname: <Git SSH URL> type: Git",
"apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: my-channel namespace: channel-ns spec: secretRef: name: git-ssh-key pathname: <Git SSH URL> type: Git insecureSkipVerify: true",
"apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: nginx namespace: ns-sub-1 spec: watchHelmNamespaceScopedResources: true channel: ns-ch/predev-ch name: nginx-ingress packageFilter: version: \"1.36.x\"",
"packageOverrides: - packageName: nginx-ingress packageOverrides: - path: spec value: my-override-values 1",
"packageOverrides: - packageName: nginx-ingress packageAlias: my-helm-release-name",
"apply -f filename.yaml",
"get application.app",
"apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: namespace: # Each channel needs a unique namespace, except Git channel. spec: sourceNamespaces: type: pathname: secretRef: name: gates: annotations: labels:",
"apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: predev-ch namespace: ns-ch labels: app: nginx-app-details spec: type: HelmRepo pathname: https://kubernetes-charts.storage.googleapis.com/",
"apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: dev namespace: ch-obj spec: type: ObjectBucket pathname: [http://9.28.236.243:xxxx/dev] # URL is appended with the valid bucket name, which matches the channel name. secretRef: name: miniosecret gates: annotations: dev-ready: true",
"apiVersion: v1 kind: Namespace metadata: name: hub-repo --- apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: Helm namespace: hub-repo spec: pathname: [https://9.21.107.150:8443/helm-repo/charts] # URL references a valid chart URL. insecureSkipVerify: true type: HelmRepo",
"apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: predev-ch namespace: ns-ch labels: app: nginx-app-details spec: type: HelmRepo pathname: https://kubernetes-charts.storage.googleapis.com/",
"apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: hive-cluster-gitrepo namespace: gitops-cluster-lifecycle spec: type: Git pathname: https://github.com/open-cluster-management/gitops-clusters.git secretRef: name: github-gitops-clusters --- apiVersion: v1 kind: Secret metadata: name: github-gitops-clusters namespace: gitops-cluster-lifecycle data: user: dXNlcgo= # Value of user and accessToken is Base 64 coded. accessToken: cGFzc3dvcmQ",
"apply -f filename.yaml",
"get application.app",
"apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: namespace: labels: spec: sourceNamespace: source: channel: name: packageFilter: version: labelSelector: matchLabels: package: component: annotations: packageOverrides: - packageName: packageAlias: - path: value: placement: local: clusters: name: clusterSelector: placementRef: name: kind: Placement overrides: clusterName: clusterOverrides: path: value:",
"apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: nginx namespace: ns-sub-1 labels: app: nginx-app-details spec: channel: ns-ch/predev-ch name: nginx-ingress packageFilter: version: \"1.36.x\" placement: placementRef: kind: Placement name: towhichcluster overrides: - clusterName: \"/\" clusterOverrides: - path: \"metadata.namespace\" value: default",
"apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: nginx namespace: ns-sub-1 labels: app: nginx-app-details spec: channel: ns-ch/predev-ch name: nginx-ingress",
"apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: nginx namespace: ns-sub-1 labels: app: nginx-app-details spec: channel: ns-ch/predev-ch secondaryChannel: ns-ch-2/predev-ch-2 name: nginx-ingress",
"apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: nginx namespace: ns-sub-1 labels: app: nginx-app-details spec: channel: ns-ch/predev-ch name: nginx-ingress packageFilter: version: \"1.36.x\" placement: placementRef: kind: Placement name: towhichcluster timewindow: windowtype: \"active\" location: \"America/Los_Angeles\" daysofweek: [\"Monday\", \"Wednesday\", \"Friday\"] hours: - start: \"10:20AM\" end: \"10:30AM\" - start: \"12:40PM\" end: \"1:40PM\"",
"apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: simple namespace: default spec: channel: ns-ch/predev-ch name: nginx-ingress packageOverrides: - packageName: nginx-ingress packageAlias: my-nginx-ingress-releaseName packageOverrides: - path: spec value: defaultBackend: replicaCount: 3 placement: local: false",
"apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: nginx namespace: ns-sub-1 labels: app: nginx-app-details spec: channel: ns-ch/predev-ch name: nginx-ingress packageFilter: version: \"1.36.x\" placement: clusters: - name: my-development-cluster-1 packageOverrides: - packageName: my-server-integration-prod packageOverrides: - path: spec value: persistence: enabled: false useDynamicProvisioning: false license: accept tls: hostname: my-mcm-cluster.icp sso: registrationImage: pullSecret: hub-repo-docker-secret",
"apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: sample-subscription namespace: default annotations: apps.open-cluster-management.io/git-path: sample_app_1/dir1 apps.open-cluster-management.io/git-branch: branch1 spec: channel: default/sample-channel placement: placementRef: kind: Placement name: dev-clusters",
"apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: example-subscription namespace: default spec: channel: some/channel packageOverrides: - packageName: kustomization packageOverrides: - value: | patchesStrategicMerge: - patch.yaml",
"create route passthrough --service=multicluster-operators-subscription -n open-cluster-management",
"apiVersion: v1 kind: Secret metadata: name: my-github-webhook-secret data: secret: BASE64_ENCODED_SECRET",
"annotate channel.apps.open-cluster-management.io <channel name> apps.open-cluster-management.io/webhook-enabled=\"true\"",
"annotate channel.apps.open-cluster-management.io <channel name> apps.open-cluster-management.io/webhook-secret=\"<the_secret_name>\"",
"apply -f filename.yaml",
"get application.app",
"apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: namespace: resourceVersion: labels: app: chart: release: heritage: selfLink: uid: spec: clusterSelector: matchLabels: datacenter: environment: clusterReplicas: clusterConditions: ResourceHint: type: order: Policies:",
"status: decisions: clusterName: clusterNamespace:",
"apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: gbapp-gbapp namespace: development labels: app: gbapp spec: clusterSelector: matchLabels: environment: Dev clusterReplicas: 1 status: decisions: - clusterName: local-cluster clusterNamespace: local-cluster",
"apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: towhichcluster namespace: ns-sub-1 labels: app: nginx-app-details spec: clusterReplicas: 1 clusterConditions: - type: ManagedClusterConditionAvailable status: \"True\" clusterSelector: matchExpressions: - key: environment operator: In values: - dev",
"apply -f filename.yaml",
"get application.app",
"apiVersion: app.k8s.io/v1beta1 kind: Application metadata: name: namespace: spec: selector: matchLabels: label_name: label_value",
"apiVersion: app.k8s.io/v1beta1 kind: Application metadata: name: my-application namespace: my-namespace spec: selector: matchLabels: my-label: my-label-value"
] | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.11/html-single/applications/index |
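Read together, the Channel, Subscription, PlacementRule, and Application samples above form one deployable set: the Subscription points at the Channel by <channel namespace>/<channel name> and at the PlacementRule through placementRef, and the Application groups the subscription by label. The sketch below recombines those samples into a single file; every name, namespace, and label in it is illustrative, not a default.

apiVersion: apps.open-cluster-management.io/v1
kind: Channel
metadata:
  name: sample-helm-channel
  namespace: sample-ch-ns
spec:
  type: HelmRepo
  pathname: https://kubernetes-charts.storage.googleapis.com/
---
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: sample-placement
  namespace: sample-app-ns
spec:
  clusterReplicas: 1
  clusterSelector:
    matchLabels:
      environment: dev
---
apiVersion: apps.open-cluster-management.io/v1
kind: Subscription
metadata:
  name: sample-subscription
  namespace: sample-app-ns
  labels:
    app: sample-app
spec:
  channel: sample-ch-ns/sample-helm-channel   # <channel namespace>/<channel name>
  name: nginx-ingress                         # package name in the Helm repository
  placement:
    placementRef:
      kind: PlacementRule
      name: sample-placement
---
apiVersion: app.k8s.io/v1beta1
kind: Application
metadata:
  name: sample-app
  namespace: sample-app-ns
spec:
  selector:
    matchLabels:
      app: sample-app

Applying these four resources on the hub cluster with the apply command shown earlier subscribes every cluster matched by the PlacementRule to the nginx-ingress chart from the Helm channel.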
Chapter 5. KubeletConfig [machineconfiguration.openshift.io/v1] | Chapter 5. KubeletConfig [machineconfiguration.openshift.io/v1] Description KubeletConfig describes a customized Kubelet configuration. Type object Required spec 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object KubeletConfigSpec defines the desired state of KubeletConfig status object KubeletConfigStatus defines the observed state of a KubeletConfig 5.1.1. .spec Description KubeletConfigSpec defines the desired state of KubeletConfig Type object Property Type Description autoSizingReserved boolean Automatically set optimal system reserved kubeletConfig `` The fields of the kubelet configuration are defined in kubernetes upstream. Please refer to the types defined in the version/commit of upstream Kubernetes that OpenShift uses. It's important to note that, since the fields of the kubelet configuration are directly fetched from upstream, the validation of those values is handled directly by the kubelet. Please refer to the relevant upstream Kubernetes version for the valid values of these fields. Invalid values of the kubelet configuration fields may render cluster nodes unusable. logLevel integer logLevel defines the log level of the Kubelet machineConfigPoolSelector object A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects. tlsSecurityProfile object tlsSecurityProfile specifies settings for TLS connections for ingresscontrollers. If unset, the default is based on the apiservers.config.openshift.io/cluster resource. Note that when using the Old, Intermediate, and Modern profile types, the effective profile configuration is subject to change between releases. For example, given a specification to use the Intermediate profile deployed on release X.Y.Z, an upgrade to release X.Y.Z+1 may cause a new profile configuration to be applied to the ingress controller, resulting in a rollout. Note that the minimum TLS version for ingress controllers is 1.1, and the maximum TLS version is 1.2. An implication of this restriction is that the Modern TLS profile type cannot be used because it requires TLS 1.3. 5.1.2. .spec.machineConfigPoolSelector Description A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. 
matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 5.1.3. .spec.machineConfigPoolSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 5.1.4. .spec.machineConfigPoolSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 5.1.5. .spec.tlsSecurityProfile Description tlsSecurityProfile specifies settings for TLS connections for ingresscontrollers. If unset, the default is based on the apiservers.config.openshift.io/cluster resource. Note that when using the Old, Intermediate, and Modern profile types, the effective profile configuration is subject to change between releases. For example, given a specification to use the Intermediate profile deployed on release X.Y.Z, an upgrade to release X.Y.Z+1 may cause a new profile configuration to be applied to the ingress controller, resulting in a rollout. Note that the minimum TLS version for ingress controllers is 1.1, and the maximum TLS version is 1.2. An implication of this restriction is that the Modern TLS profile type cannot be used because it requires TLS 1.3. Type object Property Type Description custom `` custom is a user-defined TLS security profile. Be extremely careful using a custom profile as invalid configurations can be catastrophic. An example custom profile looks like this: ciphers: - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: TLSv1.1 intermediate `` intermediate is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Intermediate_compatibility_.28recommended.29 and looks like this (yaml): ciphers: - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 - TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 - TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 minTLSVersion: TLSv1.2 modern `` modern is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Modern_compatibility and looks like this (yaml): ciphers: - TLS_AES_128_GCM_SHA256 - TLS_AES_256_GCM_SHA384 - TLS_CHACHA20_POLY1305_SHA256 minTLSVersion: TLSv1.3 NOTE: Currently unsupported. 
old `` old is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Old_backward_compatibility and looks like this (yaml): ciphers: - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 - TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 - TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 - TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256 - TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 - TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA - TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA - TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA - TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA - TLS_RSA_WITH_AES_128_GCM_SHA256 - TLS_RSA_WITH_AES_256_GCM_SHA384 - TLS_RSA_WITH_AES_128_CBC_SHA256 - TLS_RSA_WITH_AES_128_CBC_SHA - TLS_RSA_WITH_AES_256_CBC_SHA - TLS_RSA_WITH_3DES_EDE_CBC_SHA minTLSVersion: TLSv1.0 type string type is one of Old, Intermediate, Modern or Custom. Custom provides the ability to specify individual TLS security profile parameters. Old, Intermediate and Modern are TLS security profiles based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Recommended_configurations The profiles are intent based, so they may change over time as new ciphers are developed and existing ciphers are found to be insecure. Depending on precisely which ciphers are available to a process, the list may be reduced. Note that the Modern profile is currently not supported because it is not yet well adopted by common software libraries. 5.1.6. .status Description KubeletConfigStatus defines the observed state of a KubeletConfig Type object Property Type Description conditions array conditions represents the latest available observations of current state. conditions[] object KubeletConfigCondition defines the state of the KubeletConfig observedGeneration integer observedGeneration represents the generation observed by the controller. 5.1.7. .status.conditions Description conditions represents the latest available observations of current state. Type array 5.1.8. .status.conditions[] Description KubeletConfigCondition defines the state of the KubeletConfig Type object Property Type Description lastTransitionTime `` lastTransitionTime is the time of the last update to the current status object. message string message provides additional information about the current condition. This is only to be consumed by humans. reason string reason is the reason for the condition's last transition. Reasons are PascalCase status string status of the condition, one of True, False, Unknown. type string type specifies the state of the operator's reconciliation functionality. 5.2. API endpoints The following API endpoints are available: /apis/machineconfiguration.openshift.io/v1/kubeletconfigs DELETE : delete collection of KubeletConfig GET : list objects of kind KubeletConfig POST : create a KubeletConfig /apis/machineconfiguration.openshift.io/v1/kubeletconfigs/{name} DELETE : delete a KubeletConfig GET : read the specified KubeletConfig PATCH : partially update the specified KubeletConfig PUT : replace the specified KubeletConfig /apis/machineconfiguration.openshift.io/v1/kubeletconfigs/{name}/status GET : read status of the specified KubeletConfig PATCH : partially update status of the specified KubeletConfig PUT : replace status of the specified KubeletConfig 5.2.1. /apis/machineconfiguration.openshift.io/v1/kubeletconfigs Table 5.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. 
HTTP method DELETE Description delete collection of KubeletConfig Table 5.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 5.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind KubeletConfig Table 5.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 5.5. HTTP responses HTTP code Reponse body 200 - OK KubeletConfigList schema 401 - Unauthorized Empty HTTP method POST Description create a KubeletConfig Table 5.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.7. Body parameters Parameter Type Description body KubeletConfig schema Table 5.8. 
HTTP responses HTTP code Reponse body 200 - OK KubeletConfig schema 201 - Created KubeletConfig schema 202 - Accepted KubeletConfig schema 401 - Unauthorized Empty 5.2.2. /apis/machineconfiguration.openshift.io/v1/kubeletconfigs/{name} Table 5.9. Global path parameters Parameter Type Description name string name of the KubeletConfig Table 5.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a KubeletConfig Table 5.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 5.12. Body parameters Parameter Type Description body DeleteOptions schema Table 5.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified KubeletConfig Table 5.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 5.15. HTTP responses HTTP code Reponse body 200 - OK KubeletConfig schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified KubeletConfig Table 5.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.17. Body parameters Parameter Type Description body Patch schema Table 5.18. HTTP responses HTTP code Reponse body 200 - OK KubeletConfig schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified KubeletConfig Table 5.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.20. Body parameters Parameter Type Description body KubeletConfig schema Table 5.21. HTTP responses HTTP code Reponse body 200 - OK KubeletConfig schema 201 - Created KubeletConfig schema 401 - Unauthorized Empty 5.2.3. /apis/machineconfiguration.openshift.io/v1/kubeletconfigs/{name}/status Table 5.22. Global path parameters Parameter Type Description name string name of the KubeletConfig Table 5.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified KubeletConfig Table 5.24. 
Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 5.25. HTTP responses HTTP code Reponse body 200 - OK KubeletConfig schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified KubeletConfig Table 5.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.27. Body parameters Parameter Type Description body Patch schema Table 5.28. HTTP responses HTTP code Reponse body 200 - OK KubeletConfig schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified KubeletConfig Table 5.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. 
This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.30. Body parameters Parameter Type Description body KubeletConfig schema Table 5.31. HTTP responses HTTP code Reponse body 200 - OK KubeletConfig schema 201 - Created KubeletConfig schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/machine_apis/kubeletconfig-machineconfiguration-openshift-io-v1 |
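The endpoints in this chapter map onto the standard client verbs, so they are usually exercised through oc rather than raw HTTP. The following sketch is illustrative only; the object name set-max-pods and the maxPods value are assumptions, not values defined by this API reference.
# List objects of kind KubeletConfig (GET /apis/machineconfiguration.openshift.io/v1/kubeletconfigs)
oc get kubeletconfig
# Read the specified KubeletConfig (GET .../kubeletconfigs/{name})
oc get kubeletconfig set-max-pods -o yaml
# Partially update the specified KubeletConfig (PATCH .../kubeletconfigs/{name})
oc patch kubeletconfig set-max-pods --type merge -p '{"spec":{"kubeletConfig":{"maxPods":500}}}'
# Ask the server to evaluate the same change without persisting it (the dryRun query parameter)
oc patch kubeletconfig set-max-pods --type merge -p '{"spec":{"kubeletConfig":{"maxPods":500}}}' --dry-run=server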
Chapter 58. Security | Chapter 58. Security OpenSCAP rpmverifypackage does not work correctly The chdir and chroot system calls are called twice by the rpmverifypackage probe. Consequently, an error occurs when the probe is utilized during an OpenSCAP scan with custom Open Vulnerability and Assessment Language (OVAL) content. To work around this problem, do not use the rpmverifypackage_test OVAL test in your content, or use only the content from the scap-security-guide package, where rpmverifypackage_test is not used. (BZ# 1603347 ) dconf databases are not checked by OVAL OVAL (Open Vulnerability and Assessment Language) checks used in the SCAP Security Guide project are not able to read a dconf binary database, only the files used to generate the database. The database is not regenerated automatically; the administrator must run the dconf update command. As a consequence, changes to the database that are not made using files in the /etc/dconf/db/ directory cannot be detected by scanning. This may cause false negative results. To work around this problem, run dconf update periodically, for example, using the /etc/crontab configuration file. (BZ# 1631378 ) SCAP Workbench fails to generate results-based remediations from tailored profiles The following error occurs when trying to generate results-based remediation roles from a customized profile using the SCAP Workbench tool: To work around this problem, use the oscap command with the --tailoring-file option. (BZ# 1533108 ) OpenSCAP scanner results contain a lot of SELinux context error messages The OpenSCAP scanner logs its inability to get the SELinux context at the ERROR level even in situations where it is not a true error. As a result, OpenSCAP scanner results contain a lot of SELinux context error messages, and both the oscap command-line utility and the SCAP Workbench graphical utility outputs can be hard to read for that reason. (BZ# 1640522 ) oscap scans use an excessive amount of memory Result data of Open Vulnerability and Assessment Language (OVAL) probes are kept in memory for the whole duration of a scan, and the generation of reports is also a memory-intensive process. Consequently, when very large file systems are scanned, the oscap process can take all available memory and be killed by the operating system. To work around this problem, use tailoring to exclude rules that scan complete file systems and run them separately. Furthermore, do not use the --oval-results option. As a result, if you lower the amount of processed data, scanning of the system should no longer crash because of the excessive use of memory. (BZ#1548949) | [
"Error generating remediation role '.../remediation.sh': Exit code of 'oscap' was 1: [output truncated]"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.6_release_notes/known_issues_security |
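To make the workarounds in the chapter above concrete, the commands below are a rough sketch; the data stream path, tailoring file name, and profile ID are placeholders for your own content rather than values required by these release notes.
# Scan with a customized (tailored) profile by calling oscap directly with --tailoring-file:
oscap xccdf eval --tailoring-file ssg-rhel7-ds-tailoring.xml \
    --profile xccdf_org.ssgproject.content_profile_my_custom \
    --results results.xml /usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml
# Regenerate the dconf database periodically so that OVAL checks reflect current settings,
# for example with an entry in /etc/crontab:
0 3 * * * root /usr/bin/dconf update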
Chapter 59. JmxTransSpec schema reference | Chapter 59. JmxTransSpec schema reference The type JmxTransSpec has been deprecated. Used in: KafkaSpec Property Property type Description image string The image to use for the JmxTrans deployment. outputDefinitions JmxTransOutputDefinitionTemplate array Defines the output hosts that will be referenced later on. For more information on these properties, see the JmxTransOutputDefinitionTemplate schema reference. logLevel string Sets the logging level of the JmxTrans deployment. For more information, see JmxTrans Logging Level. kafkaQueries JmxTransQueryTemplate array Queries to send to the Kafka brokers to define what data should be read from each broker. For more information on these properties, see the JmxTransQueryTemplate schema reference. resources ResourceRequirements CPU and memory resources to reserve. template JmxTransTemplate Template for JmxTrans resources. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-JmxTransSpec-reference
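As a rough sketch of how these properties fit together, the fragment below shows a jmxTrans block inside a Kafka resource's spec; the writer class, MBean pattern, and output name are illustrative choices, not recommendations from this reference.
jmxTrans:
  logLevel: info
  outputDefinitions:
    - outputType: "com.googlecode.jmxtrans.model.output.StdOutWriter"
      name: "standardOut"
  kafkaQueries:
    - targetMBean: "kafka.server:type=BrokerTopicMetrics,name=*"
      attributes: ["Count"]
      outputs: ["standardOut"]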
13.20. Installation Complete | 13.20. Installation Complete Congratulations! Your Red Hat Enterprise Linux installation is now complete! Click the Reboot button to reboot your system and begin using Red Hat Enterprise Linux. Remember to remove any installation media if it is not ejected automatically upon reboot. After your computer's normal power-up sequence has completed, Red Hat Enterprise Linux loads and starts. By default, the start process is hidden behind a graphical screen that displays a progress bar. Eventually, a GUI login screen (or if the X Window System is not installed, a login: prompt) appears. If your system was installed with the X Window System during this installation process, the first time you start your Red Hat Enterprise Linux system, applications to set up your system are launched. These applications guide you through initial configuration of Red Hat Enterprise Linux and allow you to set your system time and date, register your machine with Red Hat Network, and more. See Chapter 30, Initial Setup for information about the configuration process. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/sect-installation-complete-ppc |
Chapter 5. EgressQoS [k8s.ovn.org/v1] | Chapter 5. EgressQoS [k8s.ovn.org/v1] Description EgressQoS is a CRD that allows the user to define a DSCP value for pods egress traffic on its namespace to specified CIDRs. Traffic from these pods will be checked against each EgressQoSRule in the namespace's EgressQoS, and if there is a match the traffic is marked with the relevant DSCP value. Type object 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object EgressQoSSpec defines the desired state of EgressQoS status object EgressQoSStatus defines the observed state of EgressQoS 5.1.1. .spec Description EgressQoSSpec defines the desired state of EgressQoS Type object Required egress Property Type Description egress array a collection of Egress QoS rule objects egress[] object 5.1.2. .spec.egress Description a collection of Egress QoS rule objects Type array 5.1.3. .spec.egress[] Description Type object Required dscp Property Type Description dscp integer DSCP marking value for matching pods' traffic. dstCIDR string DstCIDR specifies the destination's CIDR. Only traffic heading to this CIDR will be marked with the DSCP value. This field is optional, and in case it is not set the rule is applied to all egress traffic regardless of the destination. podSelector object PodSelector applies the QoS rule only to the pods in the namespace whose label matches this definition. This field is optional, and in case it is not set results in the rule being applied to all pods in the namespace. 5.1.4. .spec.egress[].podSelector Description PodSelector applies the QoS rule only to the pods in the namespace whose label matches this definition. This field is optional, and in case it is not set results in the rule being applied to all pods in the namespace. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 5.1.5. .spec.egress[].podSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 5.1.6. .spec.egress[].podSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 
Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 5.1.7. .status Description EgressQoSStatus defines the observed state of EgressQoS Type object 5.2. API endpoints The following API endpoints are available: /apis/k8s.ovn.org/v1/egressqoses GET : list objects of kind EgressQoS /apis/k8s.ovn.org/v1/namespaces/{namespace}/egressqoses DELETE : delete collection of EgressQoS GET : list objects of kind EgressQoS POST : create an EgressQoS /apis/k8s.ovn.org/v1/namespaces/{namespace}/egressqoses/{name} DELETE : delete an EgressQoS GET : read the specified EgressQoS PATCH : partially update the specified EgressQoS PUT : replace the specified EgressQoS /apis/k8s.ovn.org/v1/namespaces/{namespace}/egressqoses/{name}/status GET : read status of the specified EgressQoS PATCH : partially update status of the specified EgressQoS PUT : replace status of the specified EgressQoS 5.2.1. /apis/k8s.ovn.org/v1/egressqoses Table 5.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. 
Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind EgressQoS Table 5.2. HTTP responses HTTP code Reponse body 200 - OK EgressQoSList schema 401 - Unauthorized Empty 5.2.2. /apis/k8s.ovn.org/v1/namespaces/{namespace}/egressqoses Table 5.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 5.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of EgressQoS Table 5.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 5.6. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind EgressQoS Table 5.7. 
Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 5.8. HTTP responses HTTP code Reponse body 200 - OK EgressQoSList schema 401 - Unauthorized Empty HTTP method POST Description create an EgressQoS Table 5.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.10. Body parameters Parameter Type Description body EgressQoS schema Table 5.11. HTTP responses HTTP code Reponse body 200 - OK EgressQoS schema 201 - Created EgressQoS schema 202 - Accepted EgressQoS schema 401 - Unauthorized Empty 5.2.3. /apis/k8s.ovn.org/v1/namespaces/{namespace}/egressqoses/{name} Table 5.12. Global path parameters Parameter Type Description name string name of the EgressQoS namespace string object name and auth scope, such as for teams and projects Table 5.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an EgressQoS Table 5.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 5.15. Body parameters Parameter Type Description body DeleteOptions schema Table 5.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified EgressQoS Table 5.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 5.18. HTTP responses HTTP code Reponse body 200 - OK EgressQoS schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified EgressQoS Table 5.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. 
- Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.20. Body parameters Parameter Type Description body Patch schema Table 5.21. HTTP responses HTTP code Reponse body 200 - OK EgressQoS schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified EgressQoS Table 5.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.23. Body parameters Parameter Type Description body EgressQoS schema Table 5.24. HTTP responses HTTP code Reponse body 200 - OK EgressQoS schema 201 - Created EgressQoS schema 401 - Unauthorized Empty 5.2.4. /apis/k8s.ovn.org/v1/namespaces/{namespace}/egressqoses/{name}/status Table 5.25. Global path parameters Parameter Type Description name string name of the EgressQoS namespace string object name and auth scope, such as for teams and projects Table 5.26. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified EgressQoS Table 5.27. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 5.28. HTTP responses HTTP code Reponse body 200 - OK EgressQoS schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified EgressQoS Table 5.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.30. Body parameters Parameter Type Description body Patch schema Table 5.31. HTTP responses HTTP code Reponse body 200 - OK EgressQoS schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified EgressQoS Table 5.32. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.33. Body parameters Parameter Type Description body EgressQoS schema Table 5.34. HTTP responses HTTP code Reponse body 200 - OK EgressQoS schema 201 - Created EgressQoS schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/network_apis/egressqos-k8s-ovn-org-v1 |
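To tie the spec fields above together, the manifest below is a minimal sketch; the namespace, DSCP values, destination CIDR, and pod labels are assumptions chosen for illustration.
apiVersion: k8s.ovn.org/v1
kind: EgressQoS
metadata:
  name: default
  namespace: demo
spec:
  egress:
  - dscp: 46
    dstCIDR: 203.0.113.0/24
    podSelector:
      matchLabels:
        app: voip
  - dscp: 30
Saved as egressqos.yaml, it can be applied and inspected with the usual client commands, for example oc apply -f egressqos.yaml and oc get egressqos -n demo.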
1.4. Certificate Life Cycle | 1.4. Certificate Life Cycle Certificates are used in many applications, from encrypting email to accessing websites. There are two major stages in the life cycle of a certificate: the point when it is issued (issuance and enrollment) and the point when it is no longer valid (renewal or revocation). There are also ways to manage the certificate during its life cycle: publishing the certificate makes information about it available to other applications, and backing up the key pairs allows the certificate to be recovered if the keys are lost. 1.4.1. Certificate Issuance The process for issuing a certificate depends on the CA that issues it and the purpose for which it will be used. Issuing non-digital forms of identification varies in similar ways: the requirements to get a library card are different from the ones to get a driver's license. Similarly, different CAs have different procedures for issuing different kinds of certificates. Requirements for receiving a certificate can range from something as simple as an email address or a user name and password to notarized documents, a background check, and a personal interview. Depending on an organization's policies, the process of issuing certificates can range from being completely transparent for the user to requiring significant user participation and complex procedures. In general, processes for issuing certificates should be flexible, so organizations can tailor them to their changing needs. 1.4.2. Certificate Expiration and Renewal Like a driver's license, a certificate specifies a period of time during which it is valid. Attempts to use a certificate for authentication before or after its validity period will fail. Managing certificate expiration and renewal is an essential part of the certificate management strategy. For example, an administrator may wish to be notified automatically when a certificate is about to expire so that an appropriate renewal process can be completed without disrupting system operation. The renewal process may involve reusing the same public-private key pair or issuing a new one. Additionally, it may be necessary to revoke a certificate before it has expired, such as when an employee leaves a company or moves to a new job in a different unit within the company. Certificate revocation can be handled in several different ways: Verify if the certificate is present in the directory Servers can be configured so that the authentication process checks the directory for the presence of the certificate being presented. When an administrator revokes a certificate, the certificate can be automatically removed from the directory, and subsequent authentication attempts with that certificate will fail, even though the certificate remains valid in every other respect. Certificate revocation list (CRL) A list of revoked certificates, a CRL, can be published to the directory at regular intervals. The CRL can be checked as part of the authentication process. Real-time status checking The issuing CA can also be checked directly each time a certificate is presented for authentication. This procedure is sometimes called real-time status checking. Online Certificate Status Protocol The Online Certificate Status Protocol (OCSP) service can be configured to determine the status of certificates. For more information about renewing certificates, see Section 2.4.2, "Renewing Certificates".
For more information about revoking certificates, including CRLs and OCSP, see Section 2.4.4, "Revoking Certificates and Checking Status" . | null | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/cert-lifecycle |
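The expiration and revocation checks described above can be exercised from the command line. The following is a generic OpenSSL sketch rather than a Certificate System-specific procedure; the file names and the OCSP responder URL are placeholders.
# Display the validity period of a certificate to plan its renewal
openssl x509 -in server.crt -noout -dates
# Real-time status checking against an OCSP responder
openssl ocsp -issuer ca.crt -cert server.crt -url http://ocsp.example.com -resp_text
# Check a certificate against a published CRL
openssl verify -crl_check -CAfile ca.crt -CRLfile crl.pem server.crt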
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/managing_hybrid_and_multicloud_resources/making-open-source-more-inclusive |
Chapter 5. Configuring an active/passive Apache HTTP server in a Red Hat High Availability cluster | Chapter 5. Configuring an active/passive Apache HTTP server in a Red Hat High Availability cluster Configure an active/passive Apache HTTP server in a two-node Red Hat Enterprise Linux High Availability Add-On cluster with the following procedure. In this use case, clients access the Apache HTTP server through a floating IP address. The web server runs on one of two nodes in the cluster. If the node on which the web server is running becomes inoperative, the web server starts up again on the second node of the cluster with minimal service interruption. The following illustration shows a high-level overview of the cluster in which the cluster is a two-node Red Hat High Availability cluster which is configured with a network power switch and with shared storage. The cluster nodes are connected to a public network, for client access to the Apache HTTP server through a virtual IP. The Apache server runs on either Node 1 or Node 2, each of which has access to the storage on which the Apache data is kept. In this illustration, the web server is running on Node 1 while Node 2 is available to run the server if Node 1 becomes inoperative. Figure 5.1. Apache in a Red Hat High Availability Two-Node Cluster This use case requires that your system include the following components: A two-node Red Hat High Availability cluster with power fencing configured for each node. We recommend but do not require a private network. This procedure uses the cluster example provided in Creating a Red Hat High-Availability cluster with Pacemaker . A public virtual IP address, required for Apache. Shared storage for the nodes in the cluster, using iSCSI, Fibre Channel, or other shared network block device. The cluster is configured with an Apache resource group, which contains the cluster components that the web server requires: an LVM resource, a file system resource, an IP address resource, and a web server resource. This resource group can fail over from one node of the cluster to the other, allowing either node to run the web server. Before creating the resource group for this cluster, you will be performing the following procedures: Configure an XFS file system on the logical volume my_lv . Configure a web server. After performing these steps, you create the resource group and the resources it contains. 5.1. Configuring an LVM volume with an XFS file system in a Pacemaker cluster Create an LVM logical volume on storage that is shared between the nodes of the cluster with the following procedure. Note LVM volumes and the corresponding partitions and devices used by cluster nodes must be connected to the cluster nodes only. The following procedure creates an LVM logical volume and then creates an XFS file system on that volume for use in a Pacemaker cluster. In this example, the shared partition /dev/sdb1 is used to store the LVM physical volume from which the LVM logical volume will be created. Procedure On both nodes of the cluster, perform the following steps to set the value for the LVM system ID to the value of the uname identifier for the system. The LVM system ID will be used to ensure that only the cluster is capable of activating the volume group. Set the system_id_source configuration option in the /etc/lvm/lvm.conf configuration file to uname . Verify that the LVM system ID on the node matches the uname for the node. Create the LVM volume and create an XFS file system on that volume. 
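As a sketch of the system-ID steps just described, and of the shared-volume commands referenced in the next steps, the following assumes the node name z1.example.com and a 450 MB logical volume; adjust both to your environment.
# On every cluster node, set the source of the LVM system ID in /etc/lvm/lvm.conf:
#   system_id_source = "uname"
# Verify that the LVM system ID matches the node's uname:
lvm systemid
uname -n
# On one node only, create the shared volume group, logical volume, and XFS file system:
vgcreate --setautoactivation n my_vg /dev/sdb1
vgs -o+systemid
lvcreate -L450 -n my_lv my_vg
lvs
mkfs.xfs /dev/my_vg/my_lv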
Since the /dev/sdb1 partition is storage that is shared, you perform this part of the procedure on one node only. Note If your LVM volume group contains one or more physical volumes that reside on remote block storage, such as an iSCSI target, Red Hat recommends that you ensure that the service starts before Pacemaker starts. For information about configuring startup order for a remote physical volume used by a Pacemaker cluster, see Configuring startup order for resource dependencies not managed by Pacemaker . Create the volume group my_vg that consists of the physical volume /dev/sdb1 . Specify the --setautoactivation n flag to ensure that volume groups managed by Pacemaker in a cluster will not be automatically activated on startup. If you are using an existing volume group for the LVM volume you are creating, you can reset this flag with the vgchange --setautoactivation n command for the volume group. Verify that the new volume group has the system ID of the node on which you are running and from which you created the volume group. Create a logical volume using the volume group my_vg . You can use the lvs command to display the logical volume. Create an XFS file system on the logical volume my_lv . If the use of a devices file is enabled with the use_devicesfile = 1 parameter in the lvm.conf file, add the shared device to the devices file on the second node in the cluster. This feature is enabled by default. 5.2. Configuring an Apache HTTP Server Configure an Apache HTTP Server with the following procedure. Procedure Ensure that the Apache HTTP Server is installed on each node in the cluster. You also need the wget tool installed on the cluster to be able to check the status of the Apache HTTP Server. On each node, execute the following command. If you are running the firewalld daemon, on each node in the cluster enable the ports that are required by the Red Hat High Availability Add-On and enable the ports you will require for running httpd . This example enables the httpd ports for public access, but the specific ports to enable for httpd may vary for production use. In order for the Apache resource agent to get the status of Apache, on each node in the cluster create the following addition to the existing configuration to enable the status server URL. Create a web page for Apache to serve up. On one node in the cluster, ensure that the logical volume you created in Configuring an LVM volume with an XFS file system is activated, mount the file system that you created on that logical volume, create the file index.html on that file system, and then unmount the file system. 5.3. Creating the resources and resource groups Create the resources for your cluster with the following procedure. To ensure these resources all run on the same node, they are configured as part of the resource group apachegroup . The resources to create are as follows, listed in the order in which they will start. An LVM-activate resource named my_lvm that uses the LVM volume group you created in Configuring an LVM volume with an XFS file system . A Filesystem resource named my_fs , that uses the file system device /dev/my_vg/my_lv you created in Configuring an LVM volume with an XFS file system . An IPaddr2 resource, which is a floating IP address for the apachegroup resource group. The IP address must not be one already associated with a physical node. 
If the IPaddr2 resource's NIC device is not specified, the floating IP must reside on the same network as one of the node's statically assigned IP addresses; otherwise, the NIC device to which the floating IP address should be assigned cannot be properly detected. An apache resource named Website that uses the index.html file and the Apache configuration you defined in Configuring an Apache HTTP server . The following procedure creates the resource group apachegroup and the resources that the group contains. The resources will start in the order in which you add them to the group, and they will stop in the reverse order in which they are added to the group. Run this procedure from one node of the cluster only. Procedure The following command creates the LVM-activate resource my_lvm . Because the resource group apachegroup does not yet exist, this command creates the resource group. Note Do not configure more than one LVM-activate resource that uses the same LVM volume group in an active/passive HA configuration, as this could cause data corruption. Additionally, do not configure an LVM-activate resource as a clone resource in an active/passive HA configuration. When you create a resource, the resource is started automatically. You can use the following command to confirm that the resource was created and has started. You can manually stop and start an individual resource with the pcs resource disable and pcs resource enable commands. The following commands create the remaining resources for the configuration, adding them to the existing resource group apachegroup . After creating the resources and the resource group that contains them, you can check the status of the cluster. Note that all four resources are running on the same node. Note that if you have not configured a fencing device for your cluster, by default the resources do not start. Once the cluster is up and running, you can point a browser to the IP address you defined as the IPaddr2 resource to view the sample display, consisting of the simple word "Hello". If you find that the resources you configured are not running, you can run the pcs resource debug-start resource command to test the resource configuration. When you use the apache resource agent to manage Apache, it does not use systemd . Because of this, you must edit the logrotate script supplied with Apache so that it does not use systemctl to reload Apache. Remove the following line from the /etc/logrotate.d/httpd file on each node in the cluster. Replace the line you removed with the following three lines, specifying /var/run/httpd-website.pid as the PID file path, where website is the name of the Apache resource. In this example, the Apache resource name is Website . 5.4. Testing the resource configuration Test the resource configuration in a cluster with the following procedure. In the cluster status display shown in Creating the resources and resource groups , all of the resources are running on node z1.example.com . You can test whether the resource group fails over to node z2.example.com by using the following procedure to put the first node in standby mode, after which the node will no longer be able to host resources. Procedure The following command puts node z1.example.com in standby mode. After putting node z1 in standby mode, check the cluster status. Note that the resources should now all be running on z2 . The web site at the defined IP address should still display, without interruption. To remove z1 from standby mode, enter the following command. 
Note Removing a node from standby mode does not in itself cause the resources to fail back over to that node. This will depend on the resource-stickiness value for the resources. For information about the resource-stickiness meta attribute, see Configuring a resource to prefer its current node . | [
"Configuration option global/system_id_source. system_id_source = \"uname\"",
"lvm systemid system ID: z1.example.com uname -n z1.example.com",
"pvcreate /dev/sdb1 Physical volume \"/dev/sdb1\" successfully created",
"vgcreate --setautoactivation n my_vg /dev/sdb1 Volume group \"my_vg\" successfully created",
"vgs -o+systemid VG #PV #LV #SN Attr VSize VFree System ID my_vg 1 0 0 wz--n- <1.82t <1.82t z1.example.com",
"lvcreate -L450 -n my_lv my_vg Rounding up size to full physical extent 452.00 MiB Logical volume \"my_lv\" created",
"lvs LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert my_lv my_vg -wi-a---- 452.00m",
"mkfs.xfs /dev/my_vg/my_lv meta-data=/dev/my_vg/my_lv isize=512 agcount=4, agsize=28928 blks = sectsz=512 attr=2, projid32bit=1",
"lvmdevices --adddev /dev/sdb1",
"dnf install -y httpd wget",
"firewall-cmd --permanent --add-service=http firewall-cmd --permanent --zone=public --add-service=http firewall-cmd --reload",
"cat <<-END > /etc/httpd/conf.d/status.conf <Location /server-status> SetHandler server-status Require local </Location> END",
"lvchange -ay my_vg/my_lv mount /dev/my_vg/my_lv /var/www/ mkdir /var/www/html mkdir /var/www/cgi-bin mkdir /var/www/error restorecon -R /var/www cat <<-END >/var/www/html/index.html <html> <body>Hello</body> </html> END umount /var/www",
"pcs resource create my_lvm ocf:heartbeat:LVM-activate vgname=my_vg vg_access_mode=system_id --group apachegroup",
"pcs resource status Resource Group: apachegroup my_lvm (ocf::heartbeat:LVM-activate): Started",
"pcs resource create my_fs Filesystem device=\"/dev/my_vg/my_lv\" directory=\"/var/www\" fstype=\"xfs\" --group apachegroup pcs resource create VirtualIP IPaddr2 ip=198.51.100.3 cidr_netmask=24 --group apachegroup pcs resource create Website apache configfile=\"/etc/httpd/conf/httpd.conf\" statusurl=\"http://127.0.0.1/server-status\" --group apachegroup",
"pcs status Cluster name: my_cluster Last updated: Wed Jul 31 16:38:51 2013 Last change: Wed Jul 31 16:42:14 2013 via crm_attribute on z1.example.com Stack: corosync Current DC: z2.example.com (2) - partition with quorum Version: 1.1.10-5.el7-9abe687 2 Nodes configured 6 Resources configured Online: [ z1.example.com z2.example.com ] Full list of resources: myapc (stonith:fence_apc_snmp): Started z1.example.com Resource Group: apachegroup my_lvm (ocf::heartbeat:LVM-activate): Started z1.example.com my_fs (ocf::heartbeat:Filesystem): Started z1.example.com VirtualIP (ocf::heartbeat:IPaddr2): Started z1.example.com Website (ocf::heartbeat:apache): Started z1.example.com",
"Hello",
"/bin/systemctl reload httpd.service > /dev/null 2>/dev/null || true",
"/usr/bin/test -f /var/run/httpd-Website.pid >/dev/null 2>/dev/null && /usr/bin/ps -q USD(/usr/bin/cat /var/run/httpd-Website.pid) >/dev/null 2>/dev/null && /usr/sbin/httpd -f /etc/httpd/conf/httpd.conf -c \"PidFile /var/run/httpd-Website.pid\" -k graceful > /dev/null 2>/dev/null || true",
"pcs node standby z1.example.com",
"pcs status Cluster name: my_cluster Last updated: Wed Jul 31 17:16:17 2013 Last change: Wed Jul 31 17:18:34 2013 via crm_attribute on z1.example.com Stack: corosync Current DC: z2.example.com (2) - partition with quorum Version: 1.1.10-5.el7-9abe687 2 Nodes configured 6 Resources configured Node z1.example.com (1): standby Online: [ z2.example.com ] Full list of resources: myapc (stonith:fence_apc_snmp): Started z1.example.com Resource Group: apachegroup my_lvm (ocf::heartbeat:LVM-activate): Started z2.example.com my_fs (ocf::heartbeat:Filesystem): Started z2.example.com VirtualIP (ocf::heartbeat:IPaddr2): Started z2.example.com Website (ocf::heartbeat:apache): Started z2.example.com",
"pcs node unstandby z1.example.com"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_high_availability_clusters/assembly_configuring-active-passive-http-server-in-a-cluster-configuring-and-managing-high-availability-clusters |
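The following is a minimal failover check sketched from the testing procedure above, not part of the documented procedure itself. It assumes the example floating IP 198.51.100.3 assigned to the VirtualIP resource and the node names z1.example.com and z2.example.com used in the sample output; run the pcs commands on one cluster node and the curl checks from a client on the public network.
curl http://198.51.100.3            # expect the "Hello" page from the node that currently runs apachegroup
pcs node standby z1.example.com     # force the apachegroup resources over to z2.example.com
pcs status                          # confirm that all four resources are now Started on z2.example.com
curl http://198.51.100.3            # the page should still be served, now from z2.example.com
pcs node unstandby z1.example.com   # allow z1.example.com to host resources again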
3.6. Configuring Cluster Members | 3.6. Configuring Cluster Members Configuring cluster members consists of initially configuring nodes in a newly configured cluster, adding members, and deleting members. The following sections provide procedures for initial configuration of nodes, adding nodes, and deleting nodes: Section 3.6.1, "Initially Configuring Members" Section 3.6.2, "Adding a Member to a Running Cluster" Section 3.6.3, "Deleting a Member from a Cluster" 3.6.1. Initially Configuring Members Creating a cluster consists of selecting a set of nodes (or members) to be part of the cluster. Once you have completed the initial step of creating a cluster and creating fence devices, you need to configure cluster nodes. To initially configure cluster nodes after creating a new cluster, follow the steps in this section. The starting point of the procedure is at the cluster-specific page that you navigate to from Choose a cluster to administer displayed on the cluster tab. At the detailed menu for the cluster (below the clusters menu), click Nodes . Clicking Nodes causes the display of an Add a Node element and a Configure element with a list of the nodes already configured in the cluster. Click a link for a node at either the list in the center of the page or in the list in the detailed menu under the clusters menu. Clicking a link for a node causes a page to be displayed for that link showing how that node is configured. At the bottom of the page, under Main Fencing Method , click Add a fence device to this level . Select a fence device and provide parameters for the fence device (for example port number). Note You can choose from an existing fence device or create a new fence device. Click Update main fence properties and wait for the change to take effect. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_administration/s1-config-member-conga-ca |
5.5.2.2. Adding a Member to a Running DLM Cluster That Contains More Than Two Nodes | 5.5.2.2. Adding a Member to a Running DLM Cluster That Contains More Than Two Nodes To add a member to an existing DLM cluster that is currently in operation, and contains more than two nodes, follow these steps: Add the node and configure fencing for it as in Section 5.5.1, "Adding a Member to a New Cluster" . Click Send to Cluster to propagate the updated configuration to other running nodes in the cluster. Use the scp command to send the updated /etc/cluster/cluster.conf file from one of the existing cluster nodes to the new node. Start cluster services on the new node by running the following commands in this order: service ccsd start service cman start service fenced start service clvmd start , if CLVM has been used to create clustered volumes service gfs start , if you are using Red Hat GFS service rgmanager start , if the cluster is running high-availability services ( rgmanager ) Start system-config-cluster (refer to Section 5.2, "Starting the Cluster Configuration Tool " ). At the Cluster Configuration Tool tab, verify that the configuration is correct. At the Cluster Status Tool tab verify that the nodes and services are running as expected. Note Make sure to configure other parameters that may be affected by changes in this section. Refer to Section 5.1, "Configuration Tasks" . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_administration/s3-add-member-running-more-than-2nodes-CA |
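As a convenience, the service start order from the procedure above can be collected into a single sketch. The commands are run on the new node, in this order; the last three services are only started when CLVM, Red Hat GFS, or rgmanager are in use on this cluster.
service ccsd start
service cman start
service fenced start
service clvmd start       # only if CLVM has been used to create clustered volumes
service gfs start         # only if you are using Red Hat GFS
service rgmanager start   # only if the cluster is running high-availability services (rgmanager)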
Chapter 9. Planning your OVS-DPDK deployment | Chapter 9. Planning your OVS-DPDK deployment To optimize your Open vSwitch with Data Plane Development Kit (OVS-DPDK) deployment for NFV, you should understand how OVS-DPDK uses the Compute node hardware (CPU, NUMA nodes, memory, NICs) and the considerations for determining the individual OVS-DPDK parameters based on your Compute node. Important When using OVS-DPDK and the OVS native firewall (a stateful firewall based on conntrack), you can track only packets that use ICMPv4, ICMPv6, TCP, and UDP protocols. OVS marks all other types of network traffic as invalid. Important Red Hat does not support the use of OVS-DPDK for non-NFV workloads. If you need OVS-DPDK functionality for non-NFV workloads, contact your Technical Account Manager (TAM) or open a customer service request case to discuss a Support Exception and other options. To open a customer service request case, go to Create a case and choose Account > Customer Service Request . 9.1. OVS-DPDK with CPU partitioning and NUMA topology OVS-DPDK partitions the hardware resources for host, guests, and itself. The OVS-DPDK Poll Mode Drivers (PMDs) run DPDK active loops, which require dedicated CPU cores. Therefore you must allocate some CPUs, and huge pages, to OVS-DPDK. A sample partitioning includes 16 cores per NUMA node on dual-socket Compute nodes. The traffic requires additional NICs because you cannot share NICs between the host and OVS-DPDK. Figure 9.1. NUMA topology: OVS-DPDK with CPU partitioning Note You must reserve DPDK PMD threads on both NUMA nodes, even if a NUMA node does not have an associated DPDK NIC. For optimum OVS-DPDK performance, reserve a block of memory local to the NUMA node. Choose NICs associated with the same NUMA node that you use for memory and CPU pinning. Ensure that both bonded interfaces are from NICs on the same NUMA node. 9.2. OVS-DPDK parameters This section describes how OVS-DPDK uses parameters within the director network_environment.yaml heat templates to configure the CPU and memory for optimum performance. Use this information to evaluate the hardware support on your Compute nodes and how to partition the hardware to optimize your OVS-DPDK deployment. Note Always pair CPU sibling threads, or logical CPUs, together in the physical core when allocating CPU cores. For details on how to determine the CPU and NUMA nodes on your Compute nodes, see Discovering your NUMA node topology . Use this information to map CPU and other parameters to support the host, guest instance, and OVS-DPDK process needs. 9.2.1. CPU parameters OVS-DPDK uses the following parameters for CPU partitioning: OvsPmdCoreList Provides the CPU cores that are used for the DPDK poll mode drivers (PMD). Choose CPU cores that are associated with the local NUMA nodes of the DPDK interfaces. Use OvsPmdCoreList for the pmd-cpu-mask value in OVS. Use the following recommendations for OvsPmdCoreList : Pair the sibling threads together. Performance depends on the number of physical cores allocated for this PMD Core list. On the NUMA node which is associated with DPDK NIC, allocate the required cores. For NUMA nodes with a DPDK NIC, determine the number of physical cores required based on the performance requirement, and include all the sibling threads or logical CPUs for each physical core. For NUMA nodes without DPDK NICs, allocate the sibling threads or logical CPUs of any physical core except the first physical core of the NUMA node. 
Note You must reserve DPDK PMD threads on both NUMA nodes, even if a NUMA node does not have an associated DPDK NIC. NovaComputeCpuDedicatedSet A comma-separated list or range of physical host CPU numbers to which processes for pinned instance CPUs can be scheduled. For example, NovaComputeCpuDedicatedSet: [4-12,^8,15] reserves cores from 4-12 and 15, excluding 8. Exclude all cores from the OvsPmdCoreList . Include all remaining cores. Pair the sibling threads together. NovaComputeCpuSharedSet A comma-separated list or range of physical host CPU numbers used to determine the host CPUs for instance emulator threads. IsolCpusList A set of CPU cores isolated from the host processes. IsolCpusList is the isolated_cores value in the cpu-partitioning-variable.conf file for the tuned-profiles-cpu-partitioning component. Use the following recommendations for IsolCpusList : Match the list of cores in OvsPmdCoreList and NovaComputeCpuDedicatedSet . Pair the sibling threads together. DerivePciWhitelistEnabled To reserve virtual functions (VF) for VMs, use the NovaPCIPassthrough parameter to create a list of VFs passed through to Nova. VFs excluded from the list remain available for the host. For each VF in the list, populate the address parameter with a regular expression that resolves to the address value. The following is an example of the manual list creation process. If NIC partitioning is enabled in a device named eno2 , list the PCI addresses of the VFs with the following command: In this case, the VFs 0, 4, and 6 are used by eno2 for NIC Partitioning. Manually configure NovaPCIPassthrough to include VFs 1-3, 5, and 7, and consequently exclude VFs 0,4, and 6, as in the following example: 9.2.2. Memory parameters OVS-DPDK uses the following memory parameters: OvsDpdkMemoryChannels Maps memory channels in the CPU per NUMA node. OvsDpdkMemoryChannels is the other_config:dpdk-extra="-n <value>" value in OVS. Observe the following recommendations for OvsDpdkMemoryChannels : Use dmidecode -t memory or your hardware manual to determine the number of memory channels available. Use ls /sys/devices/system/node/node* -d to determine the number of NUMA nodes. Divide the number of memory channels available by the number of NUMA nodes. NovaReservedHostMemory Reserves memory in MB for tasks on the host. NovaReservedHostMemory is the reserved_host_memory_mb value for the Compute node in nova.conf . Observe the following recommendation for NovaReservedHostMemory : Use the static recommended value of 4096 MB. OvsDpdkSocketMemory Specifies the amount of memory in MB to pre-allocate from the hugepage pool, per NUMA node. OvsDpdkSocketMemory is the other_config:dpdk-socket-mem value in OVS. Observe the following recommendations for OvsDpdkSocketMemory : Provide as a comma-separated list. For a NUMA node without a DPDK NIC, use the static recommendation of 1024 MB (1GB) Calculate the OvsDpdkSocketMemory value from the MTU value of each NIC on the NUMA node. The following equation approximates the value for OvsDpdkSocketMemory : MEMORY_REQD_PER_MTU = (ROUNDUP_PER_MTU + 800) * (4096 * 64) Bytes 800 is the overhead value. 4096 * 64 is the number of packets in the mempool. Add the MEMORY_REQD_PER_MTU for each of the MTU values set on the NUMA node and add another 512 MB as buffer. Round the value up to a multiple of 1024. Sample Calculation - MTU 2000 and MTU 9000 DPDK NICs dpdk0 and dpdk1 are on the same NUMA node 0, and configured with MTUs 9000, and 2000 respectively. 
The sample calculation to derive the memory required is as follows: Round off the MTU values to the nearest multiple of 1024 bytes. Calculate the required memory for each MTU value based on these rounded byte values. Calculate the combined total memory required, in bytes. This calculation represents (Memory required for MTU of 9000) + (Memory required for MTU of 2000) + (512 MB buffer). Convert the total memory required into MB. Round this value up to the nearest 1024. Use this value to set OvsDpdkSocketMemory . OvsDpdkSocketMemory: "4096,1024" Sample Calculation - MTU 2000 DPDK NICs dpdk0 and dpdk1 are on the same NUMA node 0, and each are configured with MTUs of 2000. The sample calculation to derive the memory required is as follows: Round off the MTU values to the nearest multiple of 1024 bytes. Calculate the required memory for each MTU value based on these rounded byte values. Calculate the combined total memory required, in bytes. This calculation represents (Memory required for MTU of 2000) + (512 MB buffer). Convert the total memory required into MB. Round this value up to the nearest multiple of 1024. Use this value to set OvsDpdkSocketMemory . OvsDpdkSocketMemory: "2048,1024" 9.2.3. Networking parameters OvsDpdkDriverType Sets the driver type used by DPDK. Use the default value of vfio-pci . NeutronDatapathType Datapath type for OVS bridges. DPDK uses the default value of netdev . NeutronVhostuserSocketDir Sets the vhost-user socket directory for OVS. Use /var/lib/vhost_sockets for vhost client mode. 9.2.4. Other parameters NovaSchedulerEnabledFilters Provides an ordered list of filters that the Compute node uses to find a matching Compute node for a requested guest instance. VhostuserSocketGroup Sets the vhost-user socket directory group. The default value is qemu . Set VhostuserSocketGroup to hugetlbfs so that the ovs-vswitchd and qemu processes can access the shared huge pages and unix socket that configures the virtio-net device. This value is role-specific and should be applied to any role leveraging OVS-DPDK. Important To use the parameter VhostuserSocketGroup you must also set NeutronVhostuserSocketDir . For more information, see Section 9.2.3, "Networking parameters" . KernelArgs Provides multiple kernel arguments to /etc/default/grub for the Compute node at boot time. Add the following values based on your configuration: hugepagesz : Sets the size of the huge pages on a CPU. This value can vary depending on the CPU hardware. Set to 1G for OVS-DPDK deployments ( default_hugepagesz=1GB hugepagesz=1G ). Use this command to check for the pdpe1gb CPU flag that confirms your CPU supports 1G. hugepages count : Sets the number of huge pages available based on available host memory. Use most of your available memory, except NovaReservedHostMemory . You must also configure the huge pages count value within the flavor of your Compute nodes. iommu : For Intel CPUs, add "intel_iommu=on iommu=pt" isolcpus : Sets the CPU cores for tuning. This value matches IsolCpusList . For more information about CPU isolation, see the Red Hat Knowledgebase solution OpenStack CPU isolation guidance for RHEL 8 and RHEL 9 . DdpPackage Configures Dynamic Device Personalization (DDP), to apply a profile package to a device at deployment to change the packet processing pipeline of the device. 
Add the following lines to your network_environment.yaml template to include the DDP package: parameter_defaults: ComputeOvsDpdkSriovParameters: DdpPackage: "ddp-comms" OvsDpdkExtra Enables you to pass additional configuration parameters with the other_config:dpdk-extra parameter. It is used for environments that use NIC partitioning with NVIDIA Mellanox cards to avoid connectivity issues. For these use cases, set OvsDpdkExtra to -a 0000:00:00.0 which causes the allow list of PCI addresses to allow no addresses. Example 9.2.5. VM instance flavor specifications Before deploying VM instances in an NFV environment, create a flavor that utilizes CPU pinning, huge pages, and emulator thread pinning. hw:cpu_policy When this parameter is set to dedicated , the guest uses pinned CPUs. Instances created from a flavor with this parameter set have an effective overcommit ratio of 1:1. The default value is shared . hw:mem_page_size Set this parameter to a valid string of a specific value with standard suffix (For example, 4KB , 8MB , or 1GB ). Use 1GB to match the hugepagesz boot parameter. Calculate the number of huge pages available for the virtual machines by subtracting OvsDpdkSocketMemory from the boot parameter. The following values are also valid: small (default) - The smallest page size is used large - Only use large page sizes. (2MB or 1GB on x86 architectures) any - The compute driver can attempt to use large pages, but defaults to small if none available. hw:emulator_threads_policy Set the value of this parameter to share so that emulator threads are locked to CPUs that you've identified in the heat parameter, NovaComputeCpuSharedSet . If an emulator thread is running on a vCPU with the poll mode driver (PMD) or real-time processing, you can experience negative effects, such as packet loss. 9.3. Saving power in OVS-DPDK deployments A power save profile, cpu-partitioning-powersave , has been introduced in Red Hat Enterprise Linux 9 (RHEL 9), and is now available in Red Hat OpenStack Platform (RHOSP) 17.1.3. This TuneD profile is the base building block to save power in RHOSP 17.1 NFV environments. Prerequisites Access to the undercloud host and credentials for the stack user. The CPUs on which you want to achieve power savings are enabled to allow higher C-states. For more information, see the max_power_state option in the man page for tuned-profiles-cpu-partitioning(7) . Procedure Log in to the undercloud as the stack user. Source the stackrc file: Create an Ansible playbook YAML file, for example, /home/stack/cli-overcloud-tuned-maxpower-conf.yaml . Add the following configuration to your cli-overcloud-tuned-maxpower-conf.yaml file: cat <<EOF > /home/stack/cli-overcloud-tuned-maxpower-conf.yaml {% raw %} --- #/home/stack/cli-overcloud-tuned-maxpower-conf.yaml - name: Overcloud Node set tuned power state hosts: compute-0 compute-1 any_errors_fatal: true gather_facts: false pre_tasks: - name: Wait for provisioned nodes to boot wait_for_connection: timeout: 600 delay: 10 connection: local tasks: - name: Check the max power state for this system become: true block: - name: Get power states shell: "for s in /sys/devices/system/cpu/cpu2/cpuidle/*; do grep . 
USDs/{name,latency}; done" register: _list_of_power_states - name: Print available power states debug: msg: "{{ _list_of_power_states.stdout.split('\n') }}" - name: Check for active tuned power-save profile stat: path: "/etc/tuned/active_profile" register: _active_profile - name: Check the profile slurp: path: "/etc/tuned/active_profile" when: _active_profile.stat.exists register: _active_profile_name - name: Print states debug: var: (_active_profile_name.content|b64decode|string) - name: Check the max power state for this system block: - name: Check if the cstate config is present in the conf file lineinfile: dest: /etc/tuned/cpu-partitioning-powersave-variables.conf regexp: '^max_power_state' line: 'max_power_state=cstate.name:C6' register: _cstate_entry_check {% endraw %} EOF Add the power save profile to your roles data file. For more information, see 10.2. Generating roles and image files . Add the cli-overcloud-tuned-maxpower-conf.yaml playbook to your bare metal nodes definition file. For more information, see 10.5. Creating a bare metal nodes definition file . Ensure that you have set queue size in your NIC configuration template. For more information, see 10.6. Creating a NIC configuration template . Additional resources force_latency 9.4. Two NUMA node example OVS-DPDK deployment The Compute node in the following example includes two NUMA nodes: NUMA 0 has cores 0-7. The sibling thread pairs are (0,1), (2,3), (4,5), and (6,7) NUMA 1 has cores 8-15. The sibling thread pairs are (8,9), (10,11), (12,13), and (14,15). Each NUMA node connects to a physical NIC, namely NIC1 on NUMA 0, and NIC2 on NUMA 1. Figure 9.2. OVS-DPDK: two NUMA nodes example Note Reserve the first physical cores or both thread pairs on each NUMA node (0,1 and 8,9) for non-datapath DPDK processes. This example also assumes a 1500 MTU configuration, so the OvsDpdkSocketMemory is the same for all use cases: OvsDpdkSocketMemory: "1024,1024" NIC 1 for DPDK, with one physical core for PMD In this use case, you allocate one physical core on NUMA 0 for PMD. You must also allocate one physical core on NUMA 1, even though DPDK is not enabled on the NIC for that NUMA node. The remaining cores are allocated for guest instances. The resulting parameter settings are: OvsPmdCoreList: "2,3,10,11" NovaComputeCpuDedicatedSet: "4,5,6,7,12,13,14,15" NIC 1 for DPDK, with two physical cores for PMD In this use case, you allocate two physical cores on NUMA 0 for PMD. You must also allocate one physical core on NUMA 1, even though DPDK is not enabled on the NIC for that NUMA node. The remaining cores are allocated for guest instances. The resulting parameter settings are: OvsPmdCoreList: "2,3,4,5,10,11" NovaComputeCpuDedicatedSet: "6,7,12,13,14,15" NIC 2 for DPDK, with one physical core for PMD In this use case, you allocate one physical core on NUMA 1 for PMD. You must also allocate one physical core on NUMA 0, even though DPDK is not enabled on the NIC for that NUMA node. The remaining cores are allocated for guest instances. The resulting parameter settings are: OvsPmdCoreList: "2,3,10,11" NovaComputeCpuDedicatedSet: "4,5,6,7,12,13,14,15" NIC 2 for DPDK, with two physical cores for PMD In this use case, you allocate two physical cores on NUMA 1 for PMD. You must also allocate one physical core on NUMA 0, even though DPDK is not enabled on the NIC for that NUMA node. The remaining cores are allocated for guest instances. 
The resulting parameter settings are: OvsPmdCoreList: "2,3,10,11,12,13" NovaComputeCpuDedicatedSet: "4,5,6,7,14,15" NIC 1 and NIC2 for DPDK, with two physical cores for PMD In this use case, you allocate two physical cores on each NUMA node for PMD. The remaining cores are allocated for guest instances. The resulting parameter settings are: OvsPmdCoreList: "2,3,4,5,10,11,12,13" NovaComputeCpuDedicatedSet: "6,7,14,15" 9.5. Topology of an NFV OVS-DPDK deployment This example deployment shows an OVS-DPDK configuration and consists of two virtual network functions (VNFs) with two interfaces each: The management interface, represented by mgt . The data plane interface. In the OVS-DPDK deployment, the VNFs operate with inbuilt DPDK that supports the physical interface. OVS-DPDK enables bonding at the vSwitch level. For improved performance in your OVS-DPDK deployment, it is recommended that you separate kernel and OVS-DPDK NICs. To separate the management ( mgt ) network, connected to the Base provider network for the virtual machine, ensure you have additional NICs. The Compute node consists of two regular NICs for the Red Hat OpenStack Platform API management that can be reused by the Ceph API but cannot be shared with any OpenStack project. Figure 9.3. Compute node: NFV OVS-DPDK NFV OVS-DPDK topology The following image shows the topology for OVS-DPDK for NFV. It consists of Compute and Controller nodes with 1 or 10 Gbps NICs, and the director node. Figure 9.4. NFV topology: OVS-DPDK | [
"[tripleo-admin@compute-0 ~]USD ls -lh /sys/class/net/eno2/device/ | grep virtfn lrwxrwxrwx. 1 root root 0 Apr 16 09:58 virtfn0 -> ../0000:18:06.0 lrwxrwxrwx. 1 root root 0 Apr 16 09:58 virtfn1 -> ../0000:18:06.1 lrwxrwxrwx. 1 root root 0 Apr 16 09:58 virtfn2 -> ../0000:18:06.2 lrwxrwxrwx. 1 root root 0 Apr 16 09:58 virtfn3 -> ../0000:18:06.3 lrwxrwxrwx. 1 root root 0 Apr 16 09:58 virtfn4 -> ../0000:18:06.4 lrwxrwxrwx. 1 root root 0 Apr 16 09:58 virtfn5 -> ../0000:18:06.5 lrwxrwxrwx. 1 root root 0 Apr 16 09:58 virtfn6 -> ../0000:18:06.6 lrwxrwxrwx. 1 root root 0 Apr 16 09:58 virtfn7 -> ../0000:18:06.7",
"NovaPCIPassthrough: - physical_network: \"sriovnet2\" address: {\"domain\": \".*\", \"bus\": \"18\", \"slot\": \"06\", \"function\": \"[1-3]\"} - physical_network: \"sriovnet2\" address: {\"domain\": \".*\", \"bus\": \"18\", \"slot\": \"06\", \"function\": \"[5]\"} - physical_network: \"sriovnet2\" address: {\"domain\": \".*\", \"bus\": \"18\", \"slot\": \"06\", \"function\": \"[7]\"}",
"The MTU value of 9000 becomes 9216 bytes. The MTU value of 2000 becomes 2048 bytes.",
"Memory required for 9000 MTU = (9216 + 800) * (4096*64) = 2625634304 Memory required for 2000 MTU = (2048 + 800) * (4096*64) = 746586112",
"2625634304 + 746586112 + 536870912 = 3909091328 bytes.",
"3909091328 / (1024*1024) = 3728 MB.",
"3724 MB rounds up to 4096 MB.",
"OvsDpdkSocketMemory: \"4096,1024\"",
"The MTU value of 2000 becomes 2048 bytes.",
"Memory required for 2000 MTU = (2048 + 800) * (4096*64) = 746586112",
"746586112 + 536870912 = 1283457024 bytes.",
"1283457024 / (1024*1024) = 1224 MB.",
"1224 MB rounds up to 2048 MB.",
"OvsDpdkSocketMemory: \"2048,1024\"",
"lshw -class processor | grep pdpe1gb",
"parameter_defaults: ComputeOvsDpdkSriovParameters: DdpPackage: \"ddp-comms\"",
"parameter_defaults: ComputeOvsDpdkSriovParameters: KernelArgs: \"default_hugepagesz=1GB hugepagesz=1G hugepages=48 intel_iommu=on iommu=pt isolcpus=1-11,13-23\" IsolCpusList: \"1-11,13-23\" OvsDpdkSocketMemory: \"4096\" OvsDpdkMemoryChannels: \"4\" OvsDpdkExtra: \"-a 0000:00:00.0\" NovaReservedHostMemory: 4096 OvsPmdCoreList: \"1,13,2,14,3,15\" OvsDpdkCoreList: \"0,12\" NovaComputeCpuDedicatedSet: [ 4-11 , 16-23 ] NovaComputeCpuSharedSet: [ 0 , 12 ]",
"source ~/stackrc",
"cat <<EOF > /home/stack/cli-overcloud-tuned-maxpower-conf.yaml {% raw %} --- #/home/stack/cli-overcloud-tuned-maxpower-conf.yaml - name: Overcloud Node set tuned power state hosts: compute-0 compute-1 any_errors_fatal: true gather_facts: false pre_tasks: - name: Wait for provisioned nodes to boot wait_for_connection: timeout: 600 delay: 10 connection: local tasks: - name: Check the max power state for this system become: true block: - name: Get power states shell: \"for s in /sys/devices/system/cpu/cpu2/cpuidle/*; do grep . USDs/{name,latency}; done\" register: _list_of_power_states - name: Print available power states debug: msg: \"{{ _list_of_power_states.stdout.split('\\n') }}\" - name: Check for active tuned power-save profile stat: path: \"/etc/tuned/active_profile\" register: _active_profile - name: Check the profile slurp: path: \"/etc/tuned/active_profile\" when: _active_profile.stat.exists register: _active_profile_name - name: Print states debug: var: (_active_profile_name.content|b64decode|string) - name: Check the max power state for this system block: - name: Check if the cstate config is present in the conf file lineinfile: dest: /etc/tuned/cpu-partitioning-powersave-variables.conf regexp: '^max_power_state' line: 'max_power_state=cstate.name:C6' register: _cstate_entry_check {% endraw %} EOF",
"OvsDpdkSocketMemory: \"1024,1024\"",
"OvsPmdCoreList: \"2,3,10,11\" NovaComputeCpuDedicatedSet: \"4,5,6,7,12,13,14,15\"",
"OvsPmdCoreList: \"2,3,4,5,10,11\" NovaComputeCpuDedicatedSet: \"6,7,12,13,14,15\"",
"OvsPmdCoreList: \"2,3,10,11\" NovaComputeCpuDedicatedSet: \"4,5,6,7,12,13,14,15\"",
"OvsPmdCoreList: \"2,3,10,11,12,13\" NovaComputeCpuDedicatedSet: \"4,5,6,7,14,15\"",
"OvsPmdCoreList: \"2,3,4,5,10,11,12,13\" NovaComputeCpuDedicatedSet: \"6,7,14,15\""
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_network_functions_virtualization/plan-ovs-dpdk-deploy_rhosp-nfv |
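The OvsDpdkSocketMemory approximation in the memory parameters section can also be scripted. The following is a sketch only, assuming the same 800-byte overhead, 4096 * 64 packet mempool, and 512 MB buffer used in the equation above, and rounding the MTU up to the next multiple of 1024 bytes; the MTU values are the examples from the sample calculation and would be replaced with the MTUs of the DPDK NICs on the NUMA node. With MTUs 9000 and 2000 it yields 4096 MB, and with a single MTU of 2000 it yields 2048 MB, matching the sample results.
#!/bin/bash
# Sketch: approximate OvsDpdkSocketMemory for one NUMA node from the MTUs of its DPDK NICs.
mtus=(9000 2000)                                     # example MTUs from the sample calculation
total=$(( 512 * 1024 * 1024 ))                       # start with the 512 MB buffer
for mtu in "${mtus[@]}"; do
  rounded=$(( (mtu + 1023) / 1024 * 1024 ))          # round the MTU up to a multiple of 1024 bytes
  total=$(( total + (rounded + 800) * 4096 * 64 ))   # MEMORY_REQD_PER_MTU
done
mb=$(( total / 1024 / 1024 ))                        # convert bytes to MB
echo "OvsDpdkSocketMemory for this NUMA node: $(( (mb + 1023) / 1024 * 1024 ))"   # round up to a multiple of 1024 MB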
Operators | Operators OpenShift Container Platform 4.7 Working with Operators in OpenShift Container Platform Red Hat OpenShift Documentation Team | [
"etcd ├── manifests │ ├── etcdcluster.crd.yaml │ └── etcdoperator.clusterserviceversion.yaml │ └── secret.yaml │ └── configmap.yaml └── metadata └── annotations.yaml └── dependencies.yaml",
"annotations: operators.operatorframework.io.bundle.mediatype.v1: \"registry+v1\" 1 operators.operatorframework.io.bundle.manifests.v1: \"manifests/\" 2 operators.operatorframework.io.bundle.metadata.v1: \"metadata/\" 3 operators.operatorframework.io.bundle.package.v1: \"test-operator\" 4 operators.operatorframework.io.bundle.channels.v1: \"beta,stable\" 5 operators.operatorframework.io.bundle.channel.default.v1: \"stable\" 6",
"dependencies: - type: olm.package value: packageName: prometheus version: \">0.27.0\" - type: olm.gvk value: group: etcd.database.coreos.com kind: EtcdCluster version: v1beta2",
"etcd ├── 0.6.1 │ ├── etcdcluster.crd.yaml │ └── etcdoperator.clusterserviceversion.yaml ├── 0.9.0 │ ├── etcdbackup.crd.yaml │ ├── etcdcluster.crd.yaml │ ├── etcdoperator.v0.9.0.clusterserviceversion.yaml │ └── etcdrestore.crd.yaml ├── 0.9.2 │ ├── etcdbackup.crd.yaml │ ├── etcdcluster.crd.yaml │ ├── etcdoperator.v0.9.2.clusterserviceversion.yaml │ └── etcdrestore.crd.yaml └── etcd.package.yaml",
"packageName: etcd channels: - name: alpha currentCSV: etcdoperator.v0.9.2 - name: beta currentCSV: etcdoperator.v0.9.0 - name: stable currentCSV: etcdoperator.v0.9.2 defaultChannel: alpha",
"\\ufeffapiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: generation: 1 name: example-catalog 1 namespace: openshift-marketplace 2 spec: displayName: Example Catalog 3 image: quay.io/example-org/example-catalog:v1 4 priority: -400 5 publisher: Example Org sourceType: grpc 6 updateStrategy: registryPoll: 7 interval: 30m0s status: connectionState: address: example-catalog.openshift-marketplace.svc:50051 lastConnect: 2021-08-26T18:14:31Z lastObservedState: READY 8 latestImageRegistryPoll: 2021-08-26T18:46:25Z 9 registryService: 10 createdAt: 2021-08-26T16:16:37Z port: 50051 protocol: grpc serviceName: example-catalog serviceNamespace: openshift-marketplace",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: example-namespace spec: channel: stable name: example-operator source: example-catalog sourceNamespace: openshift-marketplace",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: example-namespace spec: channel: stable name: example-operator source: example-catalog sourceNamespace: openshift-marketplace",
"apiVersion: operators.coreos.com/v1alpha1 kind: InstallPlan metadata: name: install-abcde namespace: operators spec: approval: Automatic approved: true clusterServiceVersionNames: - my-operator.v1.0.1 generation: 1 status: catalogSources: [] conditions: - lastTransitionTime: '2021-01-01T20:17:27Z' lastUpdateTime: '2021-01-01T20:17:27Z' status: 'True' type: Installed phase: Complete plan: - resolving: my-operator.v1.0.1 resource: group: operators.coreos.com kind: ClusterServiceVersion manifest: >- name: my-operator.v1.0.1 sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1alpha1 status: Created - resolving: my-operator.v1.0.1 resource: group: apiextensions.k8s.io kind: CustomResourceDefinition manifest: >- name: webservers.web.servers.org sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1beta1 status: Created - resolving: my-operator.v1.0.1 resource: group: '' kind: ServiceAccount manifest: >- name: my-operator sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1 status: Created - resolving: my-operator.v1.0.1 resource: group: rbac.authorization.k8s.io kind: Role manifest: >- name: my-operator.v1.0.1-my-operator-6d7cbc6f57 sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1 status: Created - resolving: my-operator.v1.0.1 resource: group: rbac.authorization.k8s.io kind: RoleBinding manifest: >- name: my-operator.v1.0.1-my-operator-6d7cbc6f57 sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1 status: Created",
"packageName: example channels: - name: alpha currentCSV: example.v0.1.2 - name: beta currentCSV: example.v0.1.3 defaultChannel: alpha",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: etcdoperator.v0.9.2 namespace: placeholder annotations: spec: displayName: etcd description: Etcd Operator replaces: etcdoperator.v0.9.0 skips: - etcdoperator.v0.9.1",
"olm.skipRange: <semver_range>",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: elasticsearch-operator.v4.1.2 namespace: <namespace> annotations: olm.skipRange: '>=4.1.0 <4.1.2'",
"dependencies: - type: olm.package value: packageName: prometheus version: \">0.27.0\" - type: olm.gvk value: group: etcd.database.coreos.com kind: EtcdCluster version: v1beta2",
"apiVersion: \"operators.coreos.com/v1alpha1\" kind: \"CatalogSource\" metadata: name: \"my-operators\" namespace: \"operators\" spec: sourceType: grpc image: example.com/my/operator-index:v1 displayName: \"My Operators\" priority: 100",
"dependencies: - type: olm.package value: packageName: etcd version: \">3.1.0\" - type: olm.gvk value: group: etcd.database.coreos.com kind: EtcdCluster version: v1beta2",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-group namespace: my-namespace spec: targetNamespaces: - my-namespace",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-group namespace: my-namespace spec: selector: cool.io/prod: \"true\"",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-group namespace: my-namespace",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: annotations: olm.providedAPIs: PackageManifest.v1alpha1.packages.apps.redhat.com name: olm-operators namespace: local spec: selector: {} serviceAccount: metadata: creationTimestamp: null targetNamespaces: - local status: lastUpdated: 2019-02-19T16:18:28Z namespaces: - local",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-monitoring namespace: cluster-monitoring annotations: olm.providedAPIs: Alertmanager.v1.monitoring.coreos.com,Prometheus.v1.monitoring.coreos.com,PrometheusRule.v1.monitoring.coreos.com,ServiceMonitor.v1.monitoring.coreos.com spec: staticProvidedAPIs: true selector: matchLabels: something.cool.io/cluster-monitoring: \"true\"",
"apiVersion: operators.coreos.com/v1 kind: OperatorCondition metadata: name: my-operator namespace: operators status: conditions: - type: Upgradeable 1 status: \"False\" 2 reason: \"migration\" message: \"The Operator is performing a migration.\" lastTransitionTime: \"2020-08-24T23:15:55Z\"",
"apiVersion: config.openshift.io/v1 kind: OperatorHub metadata: name: cluster spec: disableAllDefaultSources: true 1 sources: [ 2 { name: \"community-operators\", disabled: false } ]",
"apiVersion: apiextensions.k8s.io/v1 1 kind: CustomResourceDefinition metadata: name: crontabs.stable.example.com 2 spec: group: stable.example.com 3 versions: name: v1 4 scope: Namespaced 5 names: plural: crontabs 6 singular: crontab 7 kind: CronTab 8 shortNames: - ct 9",
"oc create -f <file_name>.yaml",
"/apis/<spec:group>/<spec:version>/<scope>/*/<names-plural>/",
"/apis/stable.example.com/v1/namespaces/*/crontabs/",
"kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 1 metadata: name: aggregate-cron-tabs-admin-edit 2 labels: rbac.authorization.k8s.io/aggregate-to-admin: \"true\" 3 rbac.authorization.k8s.io/aggregate-to-edit: \"true\" 4 rules: - apiGroups: [\"stable.example.com\"] 5 resources: [\"crontabs\"] 6 verbs: [\"get\", \"list\", \"watch\", \"create\", \"update\", \"patch\", \"delete\", \"deletecollection\"] 7 --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: aggregate-cron-tabs-view 8 labels: # Add these permissions to the \"view\" default role. rbac.authorization.k8s.io/aggregate-to-view: \"true\" 9 rbac.authorization.k8s.io/aggregate-to-cluster-reader: \"true\" 10 rules: - apiGroups: [\"stable.example.com\"] 11 resources: [\"crontabs\"] 12 verbs: [\"get\", \"list\", \"watch\"] 13",
"oc create -f <file_name>.yaml",
"apiVersion: \"stable.example.com/v1\" 1 kind: CronTab 2 metadata: name: my-new-cron-object 3 finalizers: 4 - finalizer.stable.example.com spec: 5 cronSpec: \"* * * * /5\" image: my-awesome-cron-image",
"oc create -f <file_name>.yaml",
"oc get <kind>",
"oc get crontab",
"NAME KIND my-new-cron-object CronTab.v1.stable.example.com",
"oc get crontabs",
"oc get crontab",
"oc get ct",
"oc get <kind> -o yaml",
"oc get ct -o yaml",
"apiVersion: v1 items: - apiVersion: stable.example.com/v1 kind: CronTab metadata: clusterName: \"\" creationTimestamp: 2017-05-31T12:56:35Z deletionGracePeriodSeconds: null deletionTimestamp: null name: my-new-cron-object namespace: default resourceVersion: \"285\" selfLink: /apis/stable.example.com/v1/namespaces/default/crontabs/my-new-cron-object uid: 9423255b-4600-11e7-af6a-28d2447dc82b spec: cronSpec: '* * * * /5' 1 image: my-awesome-cron-image 2",
"apiVersion: \"stable.example.com/v1\" 1 kind: CronTab 2 metadata: name: my-new-cron-object 3 finalizers: 4 - finalizer.stable.example.com spec: 5 cronSpec: \"* * * * /5\" image: my-awesome-cron-image",
"oc create -f <file_name>.yaml",
"oc get <kind>",
"oc get crontab",
"NAME KIND my-new-cron-object CronTab.v1.stable.example.com",
"oc get crontabs",
"oc get crontab",
"oc get ct",
"oc get <kind> -o yaml",
"oc get ct -o yaml",
"apiVersion: v1 items: - apiVersion: stable.example.com/v1 kind: CronTab metadata: clusterName: \"\" creationTimestamp: 2017-05-31T12:56:35Z deletionGracePeriodSeconds: null deletionTimestamp: null name: my-new-cron-object namespace: default resourceVersion: \"285\" selfLink: /apis/stable.example.com/v1/namespaces/default/crontabs/my-new-cron-object uid: 9423255b-4600-11e7-af6a-28d2447dc82b spec: cronSpec: '* * * * /5' 1 image: my-awesome-cron-image 2",
"oc get csv",
"oc policy add-role-to-user edit <user> -n <target_project>",
"oc get packagemanifests -n openshift-marketplace",
"NAME CATALOG AGE 3scale-operator Red Hat Operators 91m advanced-cluster-management Red Hat Operators 91m amq7-cert-manager Red Hat Operators 91m couchbase-enterprise-certified Certified Operators 91m crunchy-postgres-operator Certified Operators 91m mongodb-enterprise Certified Operators 91m etcd Community Operators 91m jaeger Community Operators 91m kubefed Community Operators 91m",
"oc describe packagemanifests <operator_name> -n openshift-marketplace",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> spec: targetNamespaces: - <namespace>",
"oc apply -f operatorgroup.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: <subscription_name> namespace: openshift-operators 1 spec: channel: <channel_name> 2 name: <operator_name> 3 source: redhat-operators 4 sourceNamespace: openshift-marketplace 5 config: env: 6 - name: ARGS value: \"-v=10\" envFrom: 7 - secretRef: name: license-secret volumes: 8 - name: <volume_name> configMap: name: <configmap_name> volumeMounts: 9 - mountPath: <directory_name> name: <volume_name> tolerations: 10 - operator: \"Exists\" resources: 11 requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: \"128Mi\" cpu: \"500m\" nodeSelector: 12 foo: bar",
"oc apply -f sub.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: quay-operator namespace: quay spec: channel: quay-v3.4 installPlanApproval: Manual 1 name: quay-operator source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: quay-operator.v3.4.0 2",
"oc apply -f sub.yaml",
"oc get packagemanifests -n openshift-marketplace",
"NAME CATALOG AGE 3scale-operator Red Hat Operators 91m advanced-cluster-management Red Hat Operators 91m amq7-cert-manager Red Hat Operators 91m couchbase-enterprise-certified Certified Operators 91m crunchy-postgres-operator Certified Operators 91m mongodb-enterprise Certified Operators 91m etcd Community Operators 91m jaeger Community Operators 91m kubefed Community Operators 91m",
"oc describe packagemanifests <operator_name> -n openshift-marketplace",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> spec: targetNamespaces: - <namespace>",
"oc apply -f operatorgroup.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: <subscription_name> namespace: openshift-operators 1 spec: channel: <channel_name> 2 name: <operator_name> 3 source: redhat-operators 4 sourceNamespace: openshift-marketplace 5 config: env: 6 - name: ARGS value: \"-v=10\" envFrom: 7 - secretRef: name: license-secret volumes: 8 - name: <volume_name> configMap: name: <configmap_name> volumeMounts: 9 - mountPath: <directory_name> name: <volume_name> tolerations: 10 - operator: \"Exists\" resources: 11 requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: \"128Mi\" cpu: \"500m\" nodeSelector: 12 foo: bar",
"oc apply -f sub.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: quay-operator namespace: quay spec: channel: quay-v3.4 installPlanApproval: Manual 1 name: quay-operator source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: quay-operator.v3.4.0 2",
"oc apply -f sub.yaml",
"oc get subscription jaeger -n openshift-operators -o yaml | grep currentCSV",
"currentCSV: jaeger-operator.v1.8.2",
"oc delete subscription jaeger -n openshift-operators",
"subscription.operators.coreos.com \"jaeger\" deleted",
"oc delete clusterserviceversion jaeger-operator.v1.8.2 -n openshift-operators",
"clusterserviceversion.operators.coreos.com \"jaeger-operator.v1.8.2\" deleted",
"ImagePullBackOff for Back-off pulling image \"example.com/openshift4/ose-elasticsearch-operator-bundle@sha256:6d2587129c846ec28d384540322b40b05833e7e00b25cca584e004af9a1d292e\"",
"rpc error: code = Unknown desc = error pinging docker registry example.com: Get \"https://example.com/v2/\": dial tcp: lookup example.com on 10.0.0.1:53: no such host",
"oc get sub,csv -n <namespace>",
"NAME PACKAGE SOURCE CHANNEL subscription.operators.coreos.com/elasticsearch-operator elasticsearch-operator redhat-operators 5.0 NAME DISPLAY VERSION REPLACES PHASE clusterserviceversion.operators.coreos.com/elasticsearch-operator.5.0.0-65 OpenShift Elasticsearch Operator 5.0.0-65 Succeeded",
"oc delete subscription <subscription_name> -n <namespace>",
"oc delete csv <csv_name> -n <namespace>",
"oc get job,configmap -n openshift-marketplace",
"NAME COMPLETIONS DURATION AGE job.batch/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 1/1 26s 9m30s NAME DATA AGE configmap/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 3 9m30s",
"oc delete job <job_name> -n openshift-marketplace",
"oc delete configmap <configmap_name> -n openshift-marketplace",
"oc get sub,csv,installplan -n <namespace>",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: etcd-config-test namespace: openshift-operators spec: config: env: - name: HTTP_PROXY value: test_http - name: HTTPS_PROXY value: test_https - name: NO_PROXY value: test channel: clusterwide-alpha installPlanApproval: Automatic name: etcd source: community-operators sourceNamespace: openshift-marketplace startingCSV: etcdoperator.v0.9.4-clusterwide",
"oc get deployment -n openshift-operators etcd-operator -o yaml | grep -i \"PROXY\" -A 2",
"- name: HTTP_PROXY value: test_http - name: HTTPS_PROXY value: test_https - name: NO_PROXY value: test image: quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21088a98b93838e284a6086b13917f96b0d9c",
"apiVersion: v1 kind: ConfigMap metadata: name: trusted-ca 1 labels: config.openshift.io/inject-trusted-cabundle: \"true\" 2",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: my-operator spec: package: etcd channel: alpha config: 1 selector: matchLabels: <labels_for_pods> 2 volumes: 3 - name: trusted-ca configMap: name: trusted-ca items: - key: ca-bundle.crt 4 path: tls-ca-bundle.pem 5 volumeMounts: 6 - name: trusted-ca mountPath: /etc/pki/ca-trust/extracted/pem readOnly: true",
"oc get subs -n <operator_namespace>",
"oc describe sub <subscription_name> -n <operator_namespace>",
"Conditions: Last Transition Time: 2019-07-29T13:42:57Z Message: all available catalogsources are healthy Reason: AllCatalogSourcesHealthy Status: False Type: CatalogSourcesUnhealthy",
"oc get catalogsources -n openshift-marketplace",
"NAME DISPLAY TYPE PUBLISHER AGE certified-operators Certified Operators grpc Red Hat 55m community-operators Community Operators grpc Red Hat 55m example-catalog Example Catalog grpc Example Org 2m25s redhat-marketplace Red Hat Marketplace grpc Red Hat 55m redhat-operators Red Hat Operators grpc Red Hat 55m",
"oc describe catalogsource example-catalog -n openshift-marketplace",
"Name: example-catalog Namespace: openshift-marketplace Status: Connection State: Address: example-catalog.openshift-marketplace.svc:50051 Last Connect: 2021-09-09T17:07:35Z Last Observed State: TRANSIENT_FAILURE Registry Service: Created At: 2021-09-09T17:05:45Z Port: 50051 Protocol: grpc Service Name: example-catalog Service Namespace: openshift-marketplace",
"oc get pods -n openshift-marketplace",
"NAME READY STATUS RESTARTS AGE certified-operators-cv9nn 1/1 Running 0 36m community-operators-6v8lp 1/1 Running 0 36m marketplace-operator-86bfc75f9b-jkgbc 1/1 Running 0 42m example-catalog-bwt8z 0/1 ImagePullBackOff 0 3m55s redhat-marketplace-57p8c 1/1 Running 0 36m redhat-operators-smxx8 1/1 Running 0 36m",
"oc describe pod example-catalog-bwt8z -n openshift-marketplace",
"Name: example-catalog-bwt8z Namespace: openshift-marketplace Priority: 0 Node: ci-ln-jyryyg2-f76d1-ggdbq-worker-b-vsxjd/10.0.128.2 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 48s default-scheduler Successfully assigned openshift-marketplace/example-catalog-bwt8z to ci-ln-jyryyf2-f76d1-fgdbq-worker-b-vsxjd Normal AddedInterface 47s multus Add eth0 [10.131.0.40/23] from openshift-sdn Normal BackOff 20s (x2 over 46s) kubelet Back-off pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 20s (x2 over 46s) kubelet Error: ImagePullBackOff Normal Pulling 8s (x3 over 47s) kubelet Pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 8s (x3 over 47s) kubelet Failed to pull image \"quay.io/example-org/example-catalog:v1\": rpc error: code = Unknown desc = reading manifest v1 in quay.io/example-org/example-catalog: unauthorized: access to the requested resource is not authorized Warning Failed 8s (x3 over 47s) kubelet Error: ErrImagePull",
"oc edit operatorcondition <name>",
"apiVersion: operators.coreos.com/v1 kind: OperatorCondition metadata: name: my-operator namespace: operators spec: overrides: - type: Upgradeable 1 status: \"True\" reason: \"upgradeIsSafe\" message: \"This is a known issue with the Operator where it always reports that it cannot be upgraded.\" status: conditions: - type: Upgradeable status: \"False\" reason: \"migration\" message: \"The operator is performing a migration.\" lastTransitionTime: \"2020-08-24T23:15:55Z\"",
"cat <<EOF | oc create -f - apiVersion: v1 kind: Namespace metadata: name: scoped EOF",
"cat <<EOF | oc create -f - apiVersion: v1 kind: ServiceAccount metadata: name: scoped namespace: scoped EOF",
"cat <<EOF | oc create -f - apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: scoped namespace: scoped rules: - apiGroups: [\"*\"] resources: [\"*\"] verbs: [\"*\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: scoped-bindings namespace: scoped roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: scoped subjects: - kind: ServiceAccount name: scoped namespace: scoped EOF",
"cat <<EOF | oc create -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: scoped namespace: scoped spec: serviceAccountName: scoped targetNamespaces: - scoped EOF",
"cat <<EOF | oc create -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: etcd namespace: scoped spec: channel: singlenamespace-alpha name: etcd source: <catalog_source_name> 1 sourceNamespace: <catalog_source_namespace> 2 EOF",
"kind: Role rules: - apiGroups: [\"operators.coreos.com\"] resources: [\"subscriptions\", \"clusterserviceversions\"] verbs: [\"get\", \"create\", \"update\", \"patch\"] - apiGroups: [\"\"] resources: [\"services\", \"serviceaccounts\"] verbs: [\"get\", \"create\", \"update\", \"patch\"] - apiGroups: [\"rbac.authorization.k8s.io\"] resources: [\"roles\", \"rolebindings\"] verbs: [\"get\", \"create\", \"update\", \"patch\"] - apiGroups: [\"apps\"] 1 resources: [\"deployments\"] verbs: [\"list\", \"watch\", \"get\", \"create\", \"update\", \"patch\", \"delete\"] - apiGroups: [\"\"] 2 resources: [\"pods\"] verbs: [\"list\", \"watch\", \"get\", \"create\", \"update\", \"patch\", \"delete\"]",
"kind: ClusterRole 1 rules: - apiGroups: [\"\"] resources: [\"secrets\"] verbs: [\"get\"] --- kind: Role rules: - apiGroups: [\"\"] resources: [\"secrets\"] verbs: [\"create\", \"update\", \"patch\"]",
"apiVersion: operators.coreos.com/v1 kind: Subscription metadata: name: etcd namespace: scoped status: installPlanRef: apiVersion: operators.coreos.com/v1 kind: InstallPlan name: install-4plp8 namespace: scoped resourceVersion: \"117359\" uid: 2c1df80e-afea-11e9-bce3-5254009c9c23",
"apiVersion: operators.coreos.com/v1 kind: InstallPlan status: conditions: - lastTransitionTime: \"2019-07-26T21:13:10Z\" lastUpdateTime: \"2019-07-26T21:13:10Z\" message: 'error creating clusterrole etcdoperator.v0.9.4-clusterwide-dsfx4: clusterroles.rbac.authorization.k8s.io is forbidden: User \"system:serviceaccount:scoped:scoped\" cannot create resource \"clusterroles\" in API group \"rbac.authorization.k8s.io\" at the cluster scope' reason: InstallComponentFailed status: \"False\" type: Installed phase: Failed",
"opm index add --bundles <registry>/<namespace>/<bundle_image_name>:<tag> \\ 1 --tag <registry>/<namespace>/<index_image_name>:<tag> \\ 2 [--binary-image <registry_base_image>] 3",
"podman login <registry>",
"podman push <registry>/<namespace>/test-catalog:latest",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog namespace: openshift-marketplace 1 spec: sourceType: grpc image: <registry>:<port>/<namespace>/redhat-operator-index:v4.7 2 displayName: My Operator Catalog publisher: <publisher_name> 3 updateStrategy: registryPoll: 4 interval: 30m",
"oc apply -f catalogSource.yaml",
"oc get pods -n openshift-marketplace",
"NAME READY STATUS RESTARTS AGE my-operator-catalog-6njx6 1/1 Running 0 28s marketplace-operator-d9f549946-96sgr 1/1 Running 0 26h",
"oc get catalogsource -n openshift-marketplace",
"NAME DISPLAY TYPE PUBLISHER AGE my-operator-catalog My Operator Catalog grpc 5s",
"oc get packagemanifest -n openshift-marketplace",
"NAME CATALOG AGE jaeger-product My Operator Catalog 93s",
"opm index add --bundles <registry>/<namespace>/<new_bundle_image>@sha256:<digest> \\ 1 --from-index <registry>/<namespace>/<existing_index_image>:<existing_tag> \\ 2 --tag <registry>/<namespace>/<existing_index_image>:<updated_tag> \\ 3 --pull-tool podman 4",
"opm index add --bundles quay.io/ocs-dev/ocs-operator@sha256:c7f11097a628f092d8bad148406aa0e0951094a03445fd4bc0775431ef683a41 --from-index mirror.example.com/abc/abc-redhat-operator-index:4.7 --tag mirror.example.com/abc/abc-redhat-operator-index:4.7.1 --pull-tool podman",
"podman push <registry>/<namespace>/<existing_index_image>:<updated_tag>",
"oc get packagemanifests -n openshift-marketplace",
"podman login <target_registry>",
"podman run -p50051:50051 -it registry.redhat.io/redhat/redhat-operator-index:v4.7",
"Trying to pull registry.redhat.io/redhat/redhat-operator-index:v4.7 Getting image source signatures Copying blob ae8a0c23f5b1 done INFO[0000] serving registry database=/database/index.db port=50051",
"grpcurl -plaintext localhost:50051 api.Registry/ListPackages > packages.out",
"{ \"name\": \"advanced-cluster-management\" } { \"name\": \"jaeger-product\" } { { \"name\": \"quay-operator\" }",
"opm index prune -f registry.redhat.io/redhat/redhat-operator-index:v4.7 \\ 1 -p advanced-cluster-management,jaeger-product,quay-operator \\ 2 [-i registry.redhat.io/openshift4/ose-operator-registry:v4.7] \\ 3 -t <target_registry>:<port>/<namespace>/redhat-operator-index:v4.7 4",
"podman push <target_registry>:<port>/<namespace>/redhat-operator-index:v4.7",
"REG_CREDS=USD{XDG_RUNTIME_DIR}/containers/auth.json",
"AUTH_TOKEN=USD(curl -sH \"Content-Type: application/json\" -XPOST https://quay.io/cnr/api/v1/users/login -d ' { \"user\": { \"username\": \"'\"<quay_username>\"'\", \"password\": \"'\"<quay_password>\"'\" } }' | jq -r '.token')",
"podman login <registry_host_name>",
"podman login registry.redhat.io",
"oc adm catalog build --appregistry-org redhat-operators \\ 1 --from=registry.redhat.io/openshift4/ose-operator-registry:v4.7 \\ 2 --filter-by-os=\"linux/amd64\" \\ 3 --to=<registry_host_name>:<port>/olm/redhat-operators:v1 \\ 4 [-a USD{REG_CREDS}] \\ 5 [--insecure] \\ 6 [--auth-token \"USD{AUTH_TOKEN}\"] 7",
"INFO[0013] loading Bundles dir=/var/folders/st/9cskxqs53ll3wdn434vw4cd80000gn/T/300666084/manifests-829192605 Pushed sha256:f73d42950021f9240389f99ddc5b0c7f1b533c054ba344654ff1edaf6bf827e3 to example_registry:5000/olm/redhat-operators:v1",
"INFO[0014] directory dir=/var/folders/st/9cskxqs53ll3wdn434vw4cd80000gn/T/300666084/manifests-829192605 file=4.2 load=package W1114 19:42:37.876180 34665 builder.go:141] error building database: error loading package into db: fuse-camel-k-operator.v7.5.0 specifies replacement that couldn't be found Uploading ... 244.9kB/s",
"REG_CREDS=USD{XDG_RUNTIME_DIR}/containers/auth.json",
"oc adm catalog mirror <registry_host_name>:<port>/olm/redhat-operators:v1 \\ 1 <registry_host_name>:<port> \\ 2 [-a USD{REG_CREDS}] \\ 3 [--insecure] \\ 4 [--index-filter-by-os='<platform>/<arch>'] \\ 5 [--manifests-only] 6",
"using database path mapping: /:/tmp/190214037 wrote database to /tmp/190214037 using database at: /tmp/190214037/bundles.db 1",
"echo \"select * from related_image where operatorbundle_name like 'clusterlogging.4.3%';\" | sqlite3 -line /tmp/190214037/bundles.db 1",
"image = registry.redhat.io/openshift-logging/kibana6-rhel8@sha256:aa4a8b2a00836d0e28aa6497ad90a3c116f135f382d8211e3c55f34fb36dfe61 operatorbundle_name = clusterlogging.4.3.33-202008111029.p0 image = registry.redhat.io/openshift4/ose-oauth-proxy@sha256:6b4db07f6e6c962fc96473d86c44532c93b146bbefe311d0c348117bf759c506 operatorbundle_name = clusterlogging.4.3.33-202008111029.p0",
"registry.redhat.io/openshift-logging/kibana6-rhel8@sha256:aa4a8b2a00836d0e28aa6497ad90a3c116f135f382d8211e3c55f34fb36dfe61=<registry_host_name>:<port>/kibana6-rhel8:a767c8f0 registry.redhat.io/openshift4/ose-oauth-proxy@sha256:6b4db07f6e6c962fc96473d86c44532c93b146bbefe311d0c348117bf759c506=<registry_host_name>:<port>/openshift4-ose-oauth-proxy:3754ea2b",
"oc image mirror [-a USD{REG_CREDS}] --filter-by-os='.*' -f ./manifests-redhat-operators-<random_number>/mapping.txt",
"oc create -f ./manifests-redhat-operators-<random_number>/imageContentSourcePolicy.yaml",
"REG_CREDS=USD{XDG_RUNTIME_DIR}/containers/auth.json",
"AUTH_TOKEN=USD(curl -sH \"Content-Type: application/json\" -XPOST https://quay.io/cnr/api/v1/users/login -d ' { \"user\": { \"username\": \"'\"<quay_username>\"'\", \"password\": \"'\"<quay_password>\"'\" } }' | jq -r '.token')",
"podman login <registry_host_name>",
"podman login registry.redhat.io",
"oc adm catalog build --appregistry-org redhat-operators \\ 1 --from=registry.redhat.io/openshift4/ose-operator-registry:v4.7 \\ 2 --filter-by-os=\"linux/amd64\" \\ 3 --to=<registry_host_name>:<port>/olm/redhat-operators:v2 \\ 4 [-a USD{REG_CREDS}] \\ 5 [--insecure] \\ 6 [--auth-token \"USD{AUTH_TOKEN}\"] 7",
"INFO[0013] loading Bundles dir=/var/folders/st/9cskxqs53ll3wdn434vw4cd80000gn/T/300666084/manifests-829192605 Pushed sha256:f73d42950021f9240389f99ddc5b0c7f1b533c054ba344654ff1edaf6bf827e3 to example_registry:5000/olm/redhat-operators:v2",
"oc adm catalog mirror <registry_host_name>:<port>/olm/redhat-operators:v2 \\ 1 <registry_host_name>:<port> \\ 2 [-a USD{REG_CREDS}] \\ 3 [--insecure] \\ 4 [--index-filter-by-os='<platform>/<arch>'] 5",
"oc replace -f ./manifests-redhat-operators-<random_number>",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog namespace: openshift-marketplace spec: sourceType: grpc image: <registry_host_name>:<port>/olm/redhat-operators:v2 1 displayName: My Operator Catalog publisher: grpc",
"oc replace -f catalogsource.yaml",
"oc edit catalogsource <catalog_source_name> -n openshift-marketplace",
"podman pull <registry_host_name>:<port>/olm/redhat-operators:v1",
"podman run -p 50051:50051 -it <registry_host_name>:<port>/olm/redhat-operators:v1",
"grpcurl -plaintext localhost:50051 api.Registry/ListPackages",
"{ \"name\": \"3scale-operator\" } { \"name\": \"amq-broker\" } { \"name\": \"amq-online\" }",
"grpcurl -plaintext -d '{\"pkgName\":\"kiali-ossm\",\"channelName\":\"stable\"}' localhost:50051 api.Registry/GetBundleForChannel",
"{ \"csvName\": \"kiali-operator.v1.0.7\", \"packageName\": \"kiali-ossm\", \"channelName\": \"stable\",",
"podman inspect --format='{{index .RepoDigests 0}}' <registry_host_name>:<port>/olm/redhat-operators:v1",
"example_registry:5000/olm/redhat-operators@sha256:f73d42950021f9240389f99ddc5b0c7f1b533c054ba344654ff1edaf6bf827e3",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: custom-redhat-operators namespace: my-ns spec: sourceType: grpc image: example_registry:5000/olm/redhat-operators@sha256:f73d42950021f9240389f99ddc5b0c7f1b533c054ba344654ff1edaf6bf827e3 displayName: Red Hat Operators",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: servicemeshoperator namespace: my-ns spec: source: custom-redhat-operators sourceNamespace: my-ns name: servicemeshoperator channel: \"1.0\"",
"podman login <registry>:<port>",
"{ \"auths\": { \"registry.redhat.io\": { \"auth\": \"FrNHNydQXdzclNqdg==\" }, \"quay.io\": { \"auth\": \"Xd2lhdsbnRib21iMQ==\" } } }",
"{ \"auths\": { \"registry.redhat.io\": { \"auth\": \"FrNHNydQXdzclNqdg==\" } } }",
"{ \"auths\": { \"quay.io\": { \"auth\": \"Xd2lhdsbnRib21iMQ==\" } } }",
"oc create secret generic <secret_name> -n openshift-marketplace --from-file=.dockerconfigjson=<path/to/registry/credentials> --type=kubernetes.io/dockerconfigjson",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog namespace: openshift-marketplace spec: sourceType: grpc secrets: 1 - \"<secret_name_1>\" - \"<secret_name_2>\" image: <registry>:<port>/<namespace>/<image>:<tag> displayName: My Operator Catalog publisher: <publisher_name> updateStrategy: registryPoll: interval: 30m",
"oc extract secret/pull-secret -n openshift-config --confirm",
"cat .dockerconfigjson | jq --compact-output '.auths[\"<registry>:<port>/<namespace>/\"] |= . + {\"auth\":\"<token>\"}' \\ 1 > new_dockerconfigjson",
"oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=new_dockerconfigjson",
"oc create secret generic <secret_name> -n <tenant_namespace> --from-file=.dockerconfigjson=<path/to/registry/credentials> --type=kubernetes.io/dockerconfigjson",
"oc get sa -n <tenant_namespace> 1",
"NAME SECRETS AGE builder 2 6m1s default 2 6m1s deployer 2 6m1s etcd-operator 2 5m18s 1",
"oc secrets link <operator_sa> -n <tenant_namespace> <secret_name> --for=pull",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"podman login registry.redhat.io",
"podman login <target_registry>",
"podman run -p50051:50051 -it registry.redhat.io/redhat/redhat-operator-index:v4.7",
"Trying to pull registry.redhat.io/redhat/redhat-operator-index:v4.7 Getting image source signatures Copying blob ae8a0c23f5b1 done INFO[0000] serving registry database=/database/index.db port=50051",
"grpcurl -plaintext localhost:50051 api.Registry/ListPackages > packages.out",
"{ \"name\": \"advanced-cluster-management\" } { \"name\": \"jaeger-product\" } { { \"name\": \"quay-operator\" }",
"opm index prune -f registry.redhat.io/redhat/redhat-operator-index:v4.7 \\ 1 -p advanced-cluster-management,jaeger-product,quay-operator \\ 2 [-i registry.redhat.io/openshift4/ose-operator-registry:v4.7] \\ 3 -t <target_registry>:<port>/<namespace>/redhat-operator-index:v4.7 4",
"podman push <target_registry>:<port>/<namespace>/redhat-operator-index:v4.7",
"REG_CREDS=USD{XDG_RUNTIME_DIR}/containers/auth.json",
"podman login registry.redhat.io",
"podman login <mirror_registry>",
"oc adm catalog mirror <index_image> \\ 1 <mirror_registry>:<port>/<namespace> \\ 2 [-a USD{REG_CREDS}] \\ 3 [--insecure] \\ 4 [--index-filter-by-os='<platform>/<arch>'] \\ 5 [--manifests-only] 6",
"src image has index label for database path: /database/index.db using database path mapping: /database/index.db:/tmp/153048078 wrote database to /tmp/153048078 1 wrote mirroring manifests to manifests-redhat-operator-index-1614211642 2",
"oc adm catalog mirror <index_image> \\ 1 file:///local/index \\ 2 [-a USD{REG_CREDS}] [--insecure]",
"info: Mirroring completed in 5.93s (5.915MB/s) wrote mirroring manifests to manifests-my-index-1614985528 1 To upload local images to a registry, run: oc adm catalog mirror file://local/index/myrepo/my-index:v1 REGISTRY/REPOSITORY 2",
"podman login <mirror_registry>",
"oc adm catalog mirror file://local/index/<repo>/<index_image>:<tag> \\ 1 <mirror_registry>:<port>/<namespace> \\ 2 [-a USD{REG_CREDS}] [--insecure]",
"oc adm catalog mirror <mirror_registry>:<port>/<index_image> <mirror_registry>:<port>/<namespace> --manifests-only \\ 1 [-a USD{REG_CREDS}] [--insecure]",
"manifests-<index_image_name>-<random_number>",
"manifests-index/<namespace>/<index_image_name>-<random_number>",
"oc create -f <path/to/manifests/dir>/imageContentSourcePolicy.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog 1 namespace: openshift-marketplace 2 spec: sourceType: grpc image: <registry>:<port>/<namespace>/redhat-operator-index:v4.7 3 displayName: My Operator Catalog publisher: <publisher_name> 4 updateStrategy: registryPoll: 5 interval: 30m",
"oc apply -f catalogSource.yaml",
"oc get pods -n openshift-marketplace",
"NAME READY STATUS RESTARTS AGE my-operator-catalog-6njx6 1/1 Running 0 28s marketplace-operator-d9f549946-96sgr 1/1 Running 0 26h",
"oc get catalogsource -n openshift-marketplace",
"NAME DISPLAY TYPE PUBLISHER AGE my-operator-catalog My Operator Catalog grpc 5s",
"oc get packagemanifest -n openshift-marketplace",
"NAME CATALOG AGE jaeger-product My Operator Catalog 93s",
"opm index add --bundles <registry>/<namespace>/<new_bundle_image>@sha256:<digest> \\ 1 --from-index <registry>/<namespace>/<existing_index_image>:<existing_tag> \\ 2 --tag <registry>/<namespace>/<existing_index_image>:<updated_tag> \\ 3 --pull-tool podman 4",
"opm index add --bundles quay.io/ocs-dev/ocs-operator@sha256:c7f11097a628f092d8bad148406aa0e0951094a03445fd4bc0775431ef683a41 --from-index mirror.example.com/abc/abc-redhat-operator-index:4.7 --tag mirror.example.com/abc/abc-redhat-operator-index:4.7.1 --pull-tool podman",
"podman push <registry>/<namespace>/<existing_index_image>:<updated_tag>",
"oc replace -f ./manifests-redhat-operator-index-<random_number>/imageContentSourcePolicy.yaml",
"oc get packagemanifests -n openshift-marketplace",
"tar xvf operator-sdk-v1.3.0-ocp-linux-x86_64.tar.gz",
"chmod +x operator-sdk",
"echo USDPATH",
"sudo mv ./operator-sdk /usr/local/bin/operator-sdk",
"operator-sdk version",
"operator-sdk version: \"v1.3.0-ocp\",",
"mkdir memcached-operator",
"cd memcached-operator",
"operator-sdk init --domain=example.com --repo=github.com/example-inc/memcached-operator",
"runAsUser: 65532",
"runAsNonRoot: true",
"operator-sdk create api --resource=true --controller=true --group cache --version v1 --kind Memcached",
"make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make install",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc apply -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system",
"oc logs deployment.apps/memcached-operator-controller-manager -c manager -n memcached-operator-system",
"make undeploy",
"mkdir -p USDHOME/projects/memcached-operator",
"cd USDHOME/projects/memcached-operator",
"export GO111MODULE=on",
"operator-sdk init --domain=example.com --repo=github.com/example-inc/memcached-operator",
"runAsUser: 65532",
"runAsNonRoot: true",
"domain: example.com layout: go.kubebuilder.io/v3 projectName: memcached-operator repo: github.com/example-inc/memcached-operator version: 3-alpha plugins: manifests.sdk.operatorframework.io/v2: {} scorecard.sdk.operatorframework.io/v2: {}",
"mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: namespace})",
"mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: \"\"})",
"var namespaces []string 1 mgr, err := ctrl.NewManager(cfg, manager.Options{ 2 NewCache: cache.MultiNamespacedCacheBuilder(namespaces), })",
"operator-sdk edit --multigroup=true",
"domain: example.com layout: go.kubebuilder.io/v3 multigroup: true",
"operator-sdk create api --group=cache --version=v1 --kind=Memcached",
"Create Resource [y/n] y Create Controller [y/n] y",
"Writing scaffold for you to edit api/v1/memcached_types.go controllers/memcached_controller.go",
"// MemcachedSpec defines the desired state of Memcached type MemcachedSpec struct { // +kubebuilder:validation:Minimum=0 // Size is the size of the memcached deployment Size int32 `json:\"size\"` } // MemcachedStatus defines the observed state of Memcached type MemcachedStatus struct { // Nodes are the names of the memcached pods Nodes []string `json:\"nodes\"` }",
"make generate",
"make manifests",
"/* Copyright 2020. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ package controllers import ( appsv1 \"k8s.io/api/apps/v1\" corev1 \"k8s.io/api/core/v1\" \"k8s.io/apimachinery/pkg/api/errors\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" \"k8s.io/apimachinery/pkg/types\" \"reflect\" \"context\" \"github.com/go-logr/logr\" \"k8s.io/apimachinery/pkg/runtime\" ctrl \"sigs.k8s.io/controller-runtime\" \"sigs.k8s.io/controller-runtime/pkg/client\" cachev1alpha1 \"github.com/example/memcached-operator/api/v1alpha1\" ) // MemcachedReconciler reconciles a Memcached object type MemcachedReconciler struct { client.Client Log logr.Logger Scheme *runtime.Scheme } // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/status,verbs=get;update;patch // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/finalizers,verbs=update // +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=core,resources=pods,verbs=get;list; // Reconcile is part of the main kubernetes reconciliation loop which aims to // move the current state of the cluster closer to the desired state. // TODO(user): Modify the Reconcile function to compare the state specified by // the Memcached object against the actual cluster state, and then // perform operations to make the cluster state reflect the state specified by // the user. // // For more details, check Reconcile and its Result here: // - https://pkg.go.dev/sigs.k8s.io/[email protected]/pkg/reconcile func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { log := r.Log.WithValues(\"memcached\", req.NamespacedName) // Fetch the Memcached instance memcached := &cachev1alpha1.Memcached{} err := r.Get(ctx, req.NamespacedName, memcached) if err != nil { if errors.IsNotFound(err) { // Request object not found, could have been deleted after reconcile request. // Owned objects are automatically garbage collected. For additional cleanup logic use finalizers. // Return and don't requeue log.Info(\"Memcached resource not found. Ignoring since object must be deleted\") return ctrl.Result{}, nil } // Error reading the object - requeue the request. 
log.Error(err, \"Failed to get Memcached\") return ctrl.Result{}, err } // Check if the deployment already exists, if not create a new one found := &appsv1.Deployment{} err = r.Get(ctx, types.NamespacedName{Name: memcached.Name, Namespace: memcached.Namespace}, found) if err != nil && errors.IsNotFound(err) { // Define a new deployment dep := r.deploymentForMemcached(memcached) log.Info(\"Creating a new Deployment\", \"Deployment.Namespace\", dep.Namespace, \"Deployment.Name\", dep.Name) err = r.Create(ctx, dep) if err != nil { log.Error(err, \"Failed to create new Deployment\", \"Deployment.Namespace\", dep.Namespace, \"Deployment.Name\", dep.Name) return ctrl.Result{}, err } // Deployment created successfully - return and requeue return ctrl.Result{Requeue: true}, nil } else if err != nil { log.Error(err, \"Failed to get Deployment\") return ctrl.Result{}, err } // Ensure the deployment size is the same as the spec size := memcached.Spec.Size if *found.Spec.Replicas != size { found.Spec.Replicas = &size err = r.Update(ctx, found) if err != nil { log.Error(err, \"Failed to update Deployment\", \"Deployment.Namespace\", found.Namespace, \"Deployment.Name\", found.Name) return ctrl.Result{}, err } // Spec updated - return and requeue return ctrl.Result{Requeue: true}, nil } // Update the Memcached status with the pod names // List the pods for this memcached's deployment podList := &corev1.PodList{} listOpts := []client.ListOption{ client.InNamespace(memcached.Namespace), client.MatchingLabels(labelsForMemcached(memcached.Name)), } if err = r.List(ctx, podList, listOpts...); err != nil { log.Error(err, \"Failed to list pods\", \"Memcached.Namespace\", memcached.Namespace, \"Memcached.Name\", memcached.Name) return ctrl.Result{}, err } podNames := getPodNames(podList.Items) // Update status.Nodes if needed if !reflect.DeepEqual(podNames, memcached.Status.Nodes) { memcached.Status.Nodes = podNames err := r.Status().Update(ctx, memcached) if err != nil { log.Error(err, \"Failed to update Memcached status\") return ctrl.Result{}, err } } return ctrl.Result{}, nil } // deploymentForMemcached returns a memcached Deployment object func (r *MemcachedReconciler) deploymentForMemcached(m *cachev1alpha1.Memcached) *appsv1.Deployment { ls := labelsForMemcached(m.Name) replicas := m.Spec.Size dep := &appsv1.Deployment{ ObjectMeta: metav1.ObjectMeta{ Name: m.Name, Namespace: m.Namespace, }, Spec: appsv1.DeploymentSpec{ Replicas: &replicas, Selector: &metav1.LabelSelector{ MatchLabels: ls, }, Template: corev1.PodTemplateSpec{ ObjectMeta: metav1.ObjectMeta{ Labels: ls, }, Spec: corev1.PodSpec{ Containers: []corev1.Container{{ Image: \"memcached:1.4.36-alpine\", Name: \"memcached\", Command: []string{\"memcached\", \"-m=64\", \"-o\", \"modern\", \"-v\"}, Ports: []corev1.ContainerPort{{ ContainerPort: 11211, Name: \"memcached\", }}, }}, }, }, }, } // Set Memcached instance as the owner and controller ctrl.SetControllerReference(m, dep, r.Scheme) return dep } // labelsForMemcached returns the labels for selecting the resources // belonging to the given memcached CR name. func labelsForMemcached(name string) map[string]string { return map[string]string{\"app\": \"memcached\", \"memcached_cr\": name} } // getPodNames returns the pod names of the array of pods passed in func getPodNames(pods []corev1.Pod) []string { var podNames []string for _, pod := range pods { podNames = append(podNames, pod.Name) } return podNames } // SetupWithManager sets up the controller with the Manager. 
func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error { return ctrl.NewControllerManagedBy(mgr). For(&cachev1alpha1.Memcached{}). Owns(&appsv1.Deployment{}). Complete(r) }",
"import ( appsv1 \"k8s.io/api/apps/v1\" ) func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error { return ctrl.NewControllerManagedBy(mgr). For(&cachev1.Memcached{}). Owns(&appsv1.Deployment{}). Complete(r) }",
"func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error { return ctrl.NewControllerManagedBy(mgr). For(&cachev1.Memcached{}). Owns(&appsv1.Deployment{}). WithOptions(controller.Options{ MaxConcurrentReconciles: 2, }). Complete(r) }",
"import ( ctrl \"sigs.k8s.io/controller-runtime\" cachev1 \"github.com/example-inc/memcached-operator/api/v1\" ) func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { // Lookup the Memcached instance for this reconcile request memcached := &cachev1.Memcached{} err := r.Get(ctx, req.NamespacedName, memcached) }",
"// Reconcile successful - don't requeue return ctrl.Result{}, nil // Reconcile failed due to error - requeue return ctrl.Result{}, err // Requeue for any reason other than an error return ctrl.Result{Requeue: true}, nil",
"import \"time\" // Reconcile for any reason other than an error after 5 seconds return ctrl.Result{RequeueAfter: time.Second*5}, nil",
"// +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/status,verbs=get;update;patch // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/finalizers,verbs=update // +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=core,resources=pods,verbs=get;list; func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { }",
"make install run",
"2021-01-10T21:09:29.016-0700 INFO controller-runtime.metrics metrics server is starting to listen {\"addr\": \":8080\"} 2021-01-10T21:09:29.017-0700 INFO setup starting manager 2021-01-10T21:09:29.017-0700 INFO controller-runtime.manager starting metrics server {\"path\": \"/metrics\"} 2021-01-10T21:09:29.018-0700 INFO controller-runtime.manager.controller.memcached Starting EventSource {\"reconciler group\": \"cache.example.com\", \"reconciler kind\": \"Memcached\", \"source\": \"kind source: /, Kind=\"} 2021-01-10T21:09:29.218-0700 INFO controller-runtime.manager.controller.memcached Starting Controller {\"reconciler group\": \"cache.example.com\", \"reconciler kind\": \"Memcached\"} 2021-01-10T21:09:29.218-0700 INFO controller-runtime.manager.controller.memcached Starting workers {\"reconciler group\": \"cache.example.com\", \"reconciler kind\": \"Memcached\", \"worker count\": 1}",
"FROM gcr.io/distroless/static:nonroot",
"FROM registry.access.redhat.com/ubi8/ubi-minimal:latest",
"gcr.io/kubebuilder/kube-rbac-proxy:<tag>",
"registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.7",
"make docker-build IMG=<registry>/<user>/<image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc get deployment -n <project_name>-system",
"NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m",
"make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>",
"docker push <registry>/<user>/<bundle_image_name>:<tag>",
"operator-sdk olm status --olm-namespace=openshift-operator-lifecycle-manager",
"operator-sdk run bundle [-n <namespace>] \\ 1 <registry>/<user>/<bundle_image_name>:<tag>",
"oc project memcached-operator-system",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: size: 3",
"oc apply -f config/samples/cache_v1_memcached.yaml",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 8m memcached-sample 3/3 3 3 1m",
"oc get pods",
"NAME READY STATUS RESTARTS AGE memcached-sample-6fd7c98d8-7dqdr 1/1 Running 0 1m memcached-sample-6fd7c98d8-g5k7v 1/1 Running 0 1m memcached-sample-6fd7c98d8-m7vn7 1/1 Running 0 1m",
"oc get memcached/memcached-sample -o yaml",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: size: 3 status: nodes: - memcached-sample-6fd7c98d8-7dqdr - memcached-sample-6fd7c98d8-g5k7v - memcached-sample-6fd7c98d8-m7vn7",
"oc patch memcached memcached-sample -p '{\"spec\":{\"size\": 5}}' --type=merge",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 10m memcached-sample 5/5 5 5 3m",
"make undeploy",
"operator-sdk cleanup <project_name>",
"mkdir memcached-operator",
"cd memcached-operator",
"operator-sdk init --plugins=ansible --domain=example.com",
"operator-sdk create api --group cache --version v1 --kind Memcached --generate-role 1",
"make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make install",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc apply -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system",
"oc logs deployment.apps/memcached-operator-controller-manager -c manager -n memcached-operator-system",
"I0205 17:48:45.881666 7 leaderelection.go:253] successfully acquired lease memcached-operator-system/memcached-operator {\"level\":\"info\",\"ts\":1612547325.8819902,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: cache.example.com/v1, Kind=Memcached\"} {\"level\":\"info\",\"ts\":1612547325.98242,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612547325.9824686,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting workers\",\"worker count\":4} {\"level\":\"info\",\"ts\":1612547348.8311093,\"logger\":\"runner\",\"msg\":\"Ansible-runner exited successfully\",\"job\":\"4037200794235010051\",\"name\":\"memcached-sample\",\"namespace\":\"memcached-operator-system\"}",
"make undeploy",
"mkdir -p USDHOME/projects/memcached-operator",
"cd USDHOME/projects/memcached-operator",
"operator-sdk init --plugins=ansible --domain=example.com",
"domain: example.com layout: ansible.sdk.operatorframework.io/v1 projectName: memcached-operator version: 3-alpha",
"operator-sdk create api --group cache --version v1 --kind Memcached --generate-role 1",
"--- - name: start memcached community.kubernetes.k8s: definition: kind: Deployment apiVersion: apps/v1 metadata: name: '{{ ansible_operator_meta.name }}-memcached' namespace: '{{ ansible_operator_meta.namespace }}' spec: replicas: \"{{size}}\" selector: matchLabels: app: memcached template: metadata: labels: app: memcached spec: containers: - name: memcached command: - memcached - -m=64 - -o - modern - -v image: \"docker.io/memcached:1.4.36-alpine\" ports: - containerPort: 11211",
"--- defaults file for Memcached size: 1",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: size: 3",
"make install run",
"{\"level\":\"info\",\"ts\":1612589622.7888272,\"logger\":\"ansible-controller\",\"msg\":\"Watching resource\",\"Options.Group\":\"cache.example.com\",\"Options.Version\":\"v1\",\"Options.Kind\":\"Memcached\"} {\"level\":\"info\",\"ts\":1612589622.7897573,\"logger\":\"proxy\",\"msg\":\"Starting to serve\",\"Address\":\"127.0.0.1:8888\"} {\"level\":\"info\",\"ts\":1612589622.789971,\"logger\":\"controller-runtime.manager\",\"msg\":\"starting metrics server\",\"path\":\"/metrics\"} {\"level\":\"info\",\"ts\":1612589622.7899997,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: cache.example.com/v1, Kind=Memcached\"} {\"level\":\"info\",\"ts\":1612589622.8904517,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612589622.8905244,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting workers\",\"worker count\":8}",
"FROM quay.io/operator-framework/ansible-operator:v1.3.0",
"FROM registry.redhat.io/openshift4/ose-ansible-operator:v4.7",
"gcr.io/kubebuilder/kube-rbac-proxy:<tag>",
"registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.7",
"make docker-build IMG=<registry>/<user>/<image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc get deployment -n <project_name>-system",
"NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m",
"make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>",
"docker push <registry>/<user>/<bundle_image_name>:<tag>",
"operator-sdk olm status --olm-namespace=openshift-operator-lifecycle-manager",
"operator-sdk run bundle [-n <namespace>] \\ 1 <registry>/<user>/<bundle_image_name>:<tag>",
"oc project memcached-operator-system",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: size: 3",
"oc apply -f config/samples/cache_v1_memcached.yaml",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 8m memcached-sample 3/3 3 3 1m",
"oc get pods",
"NAME READY STATUS RESTARTS AGE memcached-sample-6fd7c98d8-7dqdr 1/1 Running 0 1m memcached-sample-6fd7c98d8-g5k7v 1/1 Running 0 1m memcached-sample-6fd7c98d8-m7vn7 1/1 Running 0 1m",
"oc get memcached/memcached-sample -o yaml",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: size: 3 status: nodes: - memcached-sample-6fd7c98d8-7dqdr - memcached-sample-6fd7c98d8-g5k7v - memcached-sample-6fd7c98d8-m7vn7",
"oc patch memcached memcached-sample -p '{\"spec\":{\"size\": 5}}' --type=merge",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 10m memcached-sample 5/5 5 5 3m",
"make undeploy",
"operator-sdk cleanup <project_name>",
"apiVersion: \"test1.example.com/v1alpha1\" kind: \"Test1\" metadata: name: \"example\" annotations: ansible.operator-sdk/reconcile-period: \"30s\"",
"- version: v1alpha1 1 group: test1.example.com kind: Test1 role: /opt/ansible/roles/Test1 - version: v1alpha1 2 group: test2.example.com kind: Test2 playbook: /opt/ansible/playbook.yml - version: v1alpha1 3 group: test3.example.com kind: Test3 playbook: /opt/ansible/test3.yml reconcilePeriod: 0 manageStatus: false",
"- version: v1alpha1 group: app.example.com kind: AppService playbook: /opt/ansible/playbook.yml maxRunnerArtifacts: 30 reconcilePeriod: 5s manageStatus: False watchDependentResources: False",
"apiVersion: \"app.example.com/v1alpha1\" kind: \"Database\" metadata: name: \"example\" spec: message: \"Hello world 2\" newParameter: \"newParam\"",
"{ \"meta\": { \"name\": \"<cr_name>\", \"namespace\": \"<cr_namespace>\", }, \"message\": \"Hello world 2\", \"new_parameter\": \"newParam\", \"_app_example_com_database\": { <full_crd> }, }",
"--- - debug: msg: \"name: {{ ansible_operator_meta.name }}, {{ ansible_operator_meta.namespace }}\"",
"sudo dnf install ansible",
"pip3 install openshift",
"ansible-galaxy collection install community.kubernetes",
"ansible-galaxy collection install -r requirements.yml",
"--- - name: set ConfigMap example-config to {{ state }} community.kubernetes.k8s: api_version: v1 kind: ConfigMap name: example-config namespace: default 1 state: \"{{ state }}\" ignore_errors: true 2",
"--- state: present",
"--- - hosts: localhost roles: - <kind>",
"ansible-playbook playbook.yml",
"[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' PLAY [localhost] ******************************************************************************** TASK [Gathering Facts] ******************************************************************************** ok: [localhost] TASK [memcached : set ConfigMap example-config to present] ******************************************************************************** changed: [localhost] PLAY RECAP ******************************************************************************** localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0",
"oc get configmaps",
"NAME DATA AGE example-config 0 2m1s",
"ansible-playbook playbook.yml --extra-vars state=absent",
"[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' PLAY [localhost] ******************************************************************************** TASK [Gathering Facts] ******************************************************************************** ok: [localhost] TASK [memcached : set ConfigMap example-config to absent] ******************************************************************************** changed: [localhost] PLAY RECAP ******************************************************************************** localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0",
"oc get configmaps",
"apiVersion: \"test1.example.com/v1alpha1\" kind: \"Test1\" metadata: name: \"example\" annotations: ansible.operator-sdk/reconcile-period: \"30s\"",
"make install",
"/usr/bin/kustomize build config/crd | kubectl apply -f - customresourcedefinition.apiextensions.k8s.io/memcacheds.cache.example.com created",
"make run",
"/home/user/memcached-operator/bin/ansible-operator run {\"level\":\"info\",\"ts\":1612739145.2871568,\"logger\":\"cmd\",\"msg\":\"Version\",\"Go Version\":\"go1.15.5\",\"GOOS\":\"linux\",\"GOARCH\":\"amd64\",\"ansible-operator\":\"v1.3.0\",\"commit\":\"1abf57985b43bf6a59dcd18147b3c574fa57d3f6\"} {\"level\":\"info\",\"ts\":1612739148.347306,\"logger\":\"controller-runtime.metrics\",\"msg\":\"metrics server is starting to listen\",\"addr\":\":8080\"} {\"level\":\"info\",\"ts\":1612739148.3488882,\"logger\":\"watches\",\"msg\":\"Environment variable not set; using default value\",\"envVar\":\"ANSIBLE_VERBOSITY_MEMCACHED_CACHE_EXAMPLE_COM\",\"default\":2} {\"level\":\"info\",\"ts\":1612739148.3490262,\"logger\":\"cmd\",\"msg\":\"Environment variable not set; using default value\",\"Namespace\":\"\",\"envVar\":\"ANSIBLE_DEBUG_LOGS\",\"ANSIBLE_DEBUG_LOGS\":false} {\"level\":\"info\",\"ts\":1612739148.3490646,\"logger\":\"ansible-controller\",\"msg\":\"Watching resource\",\"Options.Group\":\"cache.example.com\",\"Options.Version\":\"v1\",\"Options.Kind\":\"Memcached\"} {\"level\":\"info\",\"ts\":1612739148.350217,\"logger\":\"proxy\",\"msg\":\"Starting to serve\",\"Address\":\"127.0.0.1:8888\"} {\"level\":\"info\",\"ts\":1612739148.3506632,\"logger\":\"controller-runtime.manager\",\"msg\":\"starting metrics server\",\"path\":\"/metrics\"} {\"level\":\"info\",\"ts\":1612739148.350784,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: cache.example.com/v1, Kind=Memcached\"} {\"level\":\"info\",\"ts\":1612739148.5511978,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612739148.5512562,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting workers\",\"worker count\":8}",
"apiVersion: <group>.example.com/v1alpha1 kind: <kind> metadata: name: \"<kind>-sample\"",
"oc apply -f config/samples/<gvk>.yaml",
"oc get configmaps",
"NAME STATUS AGE example-config Active 3s",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: state: absent",
"oc apply -f config/samples/<gvk>.yaml",
"oc get configmap",
"make docker-build IMG=<registry>/<user>/<image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc get deployment -n <project_name>-system",
"NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m",
"oc logs deployment/<project_name>-controller-manager -c manager \\ 1 -n <namespace> 2",
"{\"level\":\"info\",\"ts\":1612732105.0579333,\"logger\":\"cmd\",\"msg\":\"Version\",\"Go Version\":\"go1.15.5\",\"GOOS\":\"linux\",\"GOARCH\":\"amd64\",\"ansible-operator\":\"v1.3.0\",\"commit\":\"1abf57985b43bf6a59dcd18147b3c574fa57d3f6\"} {\"level\":\"info\",\"ts\":1612732105.0587437,\"logger\":\"cmd\",\"msg\":\"WATCH_NAMESPACE environment variable not set. Watching all namespaces.\",\"Namespace\":\"\"} I0207 21:08:26.110949 7 request.go:645] Throttling request took 1.035521578s, request: GET:https://172.30.0.1:443/apis/flowcontrol.apiserver.k8s.io/v1alpha1?timeout=32s {\"level\":\"info\",\"ts\":1612732107.768025,\"logger\":\"controller-runtime.metrics\",\"msg\":\"metrics server is starting to listen\",\"addr\":\"127.0.0.1:8080\"} {\"level\":\"info\",\"ts\":1612732107.768796,\"logger\":\"watches\",\"msg\":\"Environment variable not set; using default value\",\"envVar\":\"ANSIBLE_VERBOSITY_MEMCACHED_CACHE_EXAMPLE_COM\",\"default\":2} {\"level\":\"info\",\"ts\":1612732107.7688773,\"logger\":\"cmd\",\"msg\":\"Environment variable not set; using default value\",\"Namespace\":\"\",\"envVar\":\"ANSIBLE_DEBUG_LOGS\",\"ANSIBLE_DEBUG_LOGS\":false} {\"level\":\"info\",\"ts\":1612732107.7688901,\"logger\":\"ansible-controller\",\"msg\":\"Watching resource\",\"Options.Group\":\"cache.example.com\",\"Options.Version\":\"v1\",\"Options.Kind\":\"Memcached\"} {\"level\":\"info\",\"ts\":1612732107.770032,\"logger\":\"proxy\",\"msg\":\"Starting to serve\",\"Address\":\"127.0.0.1:8888\"} I0207 21:08:27.770185 7 leaderelection.go:243] attempting to acquire leader lease memcached-operator-system/memcached-operator {\"level\":\"info\",\"ts\":1612732107.770202,\"logger\":\"controller-runtime.manager\",\"msg\":\"starting metrics server\",\"path\":\"/metrics\"} I0207 21:08:27.784854 7 leaderelection.go:253] successfully acquired lease memcached-operator-system/memcached-operator {\"level\":\"info\",\"ts\":1612732107.7850506,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: cache.example.com/v1, Kind=Memcached\"} {\"level\":\"info\",\"ts\":1612732107.8853772,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612732107.8854098,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting workers\",\"worker count\":4}",
"containers: - name: manager env: - name: ANSIBLE_DEBUG_LOGS value: \"True\"",
"apiVersion: \"cache.example.com/v1alpha1\" kind: \"Memcached\" metadata: name: \"example-memcached\" annotations: \"ansible.sdk.operatorframework.io/verbosity\": \"4\" spec: size: 4",
"status: conditions: - ansibleResult: changed: 3 completion: 2018-12-03T13:45:57.13329 failures: 1 ok: 6 skipped: 0 lastTransitionTime: 2018-12-03T13:45:57Z message: 'Status code was -1 and not [200]: Request failed: <urlopen error [Errno 113] No route to host>' reason: Failed status: \"True\" type: Failure - lastTransitionTime: 2018-12-03T13:46:13Z message: Running reconciliation reason: Running status: \"True\" type: Running",
"- version: v1 group: api.example.com kind: <kind> role: <role> manageStatus: false",
"- operator_sdk.util.k8s_status: api_version: app.example.com/v1 kind: <kind> name: \"{{ ansible_operator_meta.name }}\" namespace: \"{{ ansible_operator_meta.namespace }}\" status: test: data",
"collections: - operator_sdk.util",
"k8s_status: status: key1: value1",
"mkdir nginx-operator",
"cd nginx-operator",
"operator-sdk init --plugins=helm",
"operator-sdk create api --group demo --version v1 --kind Nginx",
"make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make install",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc adm policy add-scc-to-user anyuid system:serviceaccount:nginx-operator-system:nginx-sample",
"oc apply -f config/samples/demo_v1_nginx.yaml -n nginx-operator-system",
"oc logs deployment.apps/nginx-operator-controller-manager -c manager -n nginx-operator-system",
"make undeploy",
"mkdir -p USDHOME/projects/nginx-operator",
"cd USDHOME/projects/nginx-operator",
"operator-sdk init --plugins=helm --domain=example.com --group=demo --version=v1 --kind=Nginx",
"operator-sdk init --plugins helm --help",
"domain: example.com layout: helm.sdk.operatorframework.io/v1 projectName: helm-operator resources: - group: demo kind: Nginx version: v1 version: 3-alpha",
"Use the 'create api' subcommand to add watches to this file. - group: demo version: v1 kind: Nginx chart: helm-charts/nginx +kubebuilder:scaffold:watch",
"apiVersion: demo.example.com/v1 kind: Nginx metadata: name: nginx-sample spec: replicaCount: 2",
"apiVersion: demo.example.com/v1 kind: Nginx metadata: name: nginx-sample spec: replicaCount: 2 service: port: 8080",
"make install run",
"{\"level\":\"info\",\"ts\":1612652419.9289865,\"logger\":\"controller-runtime.metrics\",\"msg\":\"metrics server is starting to listen\",\"addr\":\":8080\"} {\"level\":\"info\",\"ts\":1612652419.9296563,\"logger\":\"helm.controller\",\"msg\":\"Watching resource\",\"apiVersion\":\"demo.example.com/v1\",\"kind\":\"Nginx\",\"namespace\":\"\",\"reconcilePeriod\":\"1m0s\"} {\"level\":\"info\",\"ts\":1612652419.929983,\"logger\":\"controller-runtime.manager\",\"msg\":\"starting metrics server\",\"path\":\"/metrics\"} {\"level\":\"info\",\"ts\":1612652419.930015,\"logger\":\"controller-runtime.manager.controller.nginx-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: demo.example.com/v1, Kind=Nginx\"} {\"level\":\"info\",\"ts\":1612652420.2307851,\"logger\":\"controller-runtime.manager.controller.nginx-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612652420.2309358,\"logger\":\"controller-runtime.manager.controller.nginx-controller\",\"msg\":\"Starting workers\",\"worker count\":8}",
"FROM quay.io/operator-framework/helm-operator:v1.3.0",
"FROM registry.redhat.io/openshift4/ose-helm-operator:v4.7",
"gcr.io/kubebuilder/kube-rbac-proxy:<tag>",
"registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.7",
"make docker-build IMG=<registry>/<user>/<image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc get deployment -n <project_name>-system",
"NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m",
"make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>",
"docker push <registry>/<user>/<bundle_image_name>:<tag>",
"operator-sdk olm status --olm-namespace=openshift-operator-lifecycle-manager",
"operator-sdk run bundle [-n <namespace>] \\ 1 <registry>/<user>/<bundle_image_name>:<tag>",
"oc project nginx-operator-system",
"apiVersion: demo.example.com/v1 kind: Nginx metadata: name: nginx-sample spec: replicaCount: 3",
"oc adm policy add-scc-to-user anyuid system:serviceaccount:nginx-operator-system:nginx-sample",
"oc apply -f config/samples/demo_v1_nginx.yaml",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE nginx-operator-controller-manager 1/1 1 1 8m nginx-sample 3/3 3 3 1m",
"oc get pods",
"NAME READY STATUS RESTARTS AGE nginx-sample-6fd7c98d8-7dqdr 1/1 Running 0 1m nginx-sample-6fd7c98d8-g5k7v 1/1 Running 0 1m nginx-sample-6fd7c98d8-m7vn7 1/1 Running 0 1m",
"oc get nginx/nginx-sample -o yaml",
"apiVersion: demo.example.com/v1 kind: Nginx metadata: name: nginx-sample spec: replicaCount: 3 status: nodes: - nginx-sample-6fd7c98d8-7dqdr - nginx-sample-6fd7c98d8-g5k7v - nginx-sample-6fd7c98d8-m7vn7",
"oc patch nginx nginx-sample -p '{\"spec\":{\"replicaCount\": 5}}' --type=merge",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE nginx-operator-controller-manager 1/1 1 1 10m nginx-sample 5/5 5 5 3m",
"make undeploy",
"operator-sdk cleanup <project_name>",
"apiVersion: apache.org/v1alpha1 kind: Tomcat metadata: name: example-app spec: replicaCount: 2",
"{{ .Values.replicaCount }}",
"oc get Tomcats --all-namespaces",
"operators.openshift.io/infrastructure-features: '[\"disconnected\", \"proxy-aware\"]'",
"operators.openshift.io/valid-subscription: '[\"OpenShift Container Platform\"]'",
"operators.openshift.io/valid-subscription: '[\"3Scale Commercial License\", \"Red Hat Managed Integration\"]'",
"operators.openshift.io/infrastructure-features: '[\"disconnected\", \"proxy-aware\"]' operators.openshift.io/valid-subscription: '[\"OpenShift Container Platform\"]'",
"spec: relatedImages: 1 - name: etcd-operator 2 image: quay.io/etcd-operator/operator@sha256:d134a9865524c29fcf75bbc4469013bc38d8a15cb5f41acfddb6b9e492f556e4 3 - name: etcd-image image: quay.io/etcd-operator/etcd@sha256:13348c15263bd8838ec1d5fc4550ede9860fcbb0f843e48cbccec07810eebb68",
"spec: install: spec: deployments: - name: etcd-operator-v3.1.1 spec: replicas: 1 selector: matchLabels: name: etcd-operator strategy: type: Recreate template: metadata: labels: name: etcd-operator spec: containers: - args: - /opt/etcd/bin/etcd_operator_run.sh env: - name: WATCH_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.annotations['olm.targetNamespaces'] - name: ETCD_OPERATOR_DEFAULT_ETCD_IMAGE 1 value: quay.io/etcd-operator/etcd@sha256:13348c15263bd8838ec1d5fc4550ede9860fcbb0f843e48cbccec07810eebb68 2 - name: ETCD_LOG_LEVEL value: INFO image: quay.io/etcd-operator/operator@sha256:d134a9865524c29fcf75bbc4469013bc38d8a15cb5f41acfddb6b9e492f556e4 3 imagePullPolicy: IfNotPresent livenessProbe: httpGet: path: /healthy port: 8080 initialDelaySeconds: 10 periodSeconds: 30 name: etcd-operator readinessProbe: httpGet: path: /ready port: 8080 initialDelaySeconds: 10 periodSeconds: 30 resources: {} serviceAccountName: etcd-operator strategy: deployment",
"metadata: annotations: operators.openshift.io/infrastructure-features: '[\"disconnected\"]'",
"labels: operatorframework.io/arch.<arch>: supported 1 operatorframework.io/os.<os>: supported 2",
"labels: operatorframework.io/os.linux: supported",
"labels: operatorframework.io/arch.amd64: supported",
"labels: operatorframework.io/arch.s390x: supported operatorframework.io/os.zos: supported operatorframework.io/os.linux: supported 1 operatorframework.io/arch.amd64: supported 2",
"metadata: annotations: operatorframework.io/suggested-namespace: <namespace> 1",
"module github.com/example-inc/memcached-operator go 1.15 require ( k8s.io/apimachinery v0.19.2 k8s.io/client-go v0.19.2 sigs.k8s.io/controller-runtime v0.7.0 operator-framework/operator-lib v0.3.0 )",
"import ( apiv1 \"github.com/operator-framework/api/pkg/operators/v1\" ) func NewUpgradeable(cl client.Client) (Condition, error) { return NewCondition(cl, \"apiv1.OperatorUpgradeable\") } cond, err := NewUpgradeable(cl);",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: webhook-operator.v0.0.1 spec: customresourcedefinitions: owned: - kind: WebhookTest name: webhooktests.webhook.operators.coreos.io 1 version: v1 install: spec: deployments: - name: webhook-operator-webhook strategy: deployment installModes: - supported: false type: OwnNamespace - supported: false type: SingleNamespace - supported: false type: MultiNamespace - supported: true type: AllNamespaces webhookdefinitions: - type: ValidatingAdmissionWebhook 2 admissionReviewVersions: - v1beta1 - v1 containerPort: 443 targetPort: 4343 deploymentName: webhook-operator-webhook failurePolicy: Fail generateName: vwebhooktest.kb.io rules: - apiGroups: - webhook.operators.coreos.io apiVersions: - v1 operations: - CREATE - UPDATE resources: - webhooktests sideEffects: None webhookPath: /validate-webhook-operators-coreos-io-v1-webhooktest - type: MutatingAdmissionWebhook 3 admissionReviewVersions: - v1beta1 - v1 containerPort: 443 targetPort: 4343 deploymentName: webhook-operator-webhook failurePolicy: Fail generateName: mwebhooktest.kb.io rules: - apiGroups: - webhook.operators.coreos.io apiVersions: - v1 operations: - CREATE - UPDATE resources: - webhooktests sideEffects: None webhookPath: /mutate-webhook-operators-coreos-io-v1-webhooktest - type: ConversionWebhook 4 admissionReviewVersions: - v1beta1 - v1 containerPort: 443 targetPort: 4343 deploymentName: webhook-operator-webhook generateName: cwebhooktest.kb.io sideEffects: None webhookPath: /convert conversionCRDs: - webhooktests.webhook.operators.coreos.io 5",
"- displayName: MongoDB Standalone group: mongodb.com kind: MongoDbStandalone name: mongodbstandalones.mongodb.com resources: - kind: Service name: '' version: v1 - kind: StatefulSet name: '' version: v1beta2 - kind: Pod name: '' version: v1 - kind: ConfigMap name: '' version: v1 specDescriptors: - description: Credentials for Ops Manager or Cloud Manager. displayName: Credentials path: credentials x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:selector:core:v1:Secret' - description: Project this deployment belongs to. displayName: Project path: project x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:selector:core:v1:ConfigMap' - description: MongoDB version to be installed. displayName: Version path: version x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:label' statusDescriptors: - description: The status of each of the pods for the MongoDB cluster. displayName: Pod Status path: pods x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:podStatuses' version: v1 description: >- MongoDB Deployment consisting of only one host. No replication of data.",
"required: - name: etcdclusters.etcd.database.coreos.com version: v1beta2 kind: EtcdCluster displayName: etcd Cluster description: Represents a cluster of etcd nodes.",
"versions: - name: v1alpha1 served: true storage: false - name: v1beta1 1 served: true storage: true",
"customresourcedefinitions: owned: - name: cluster.example.com version: v1beta1 1 kind: cluster displayName: Cluster",
"versions: - name: v1alpha1 served: false 1 storage: true",
"versions: - name: v1alpha1 served: false storage: false 1 - name: v1beta1 served: true storage: true 2",
"versions: - name: v1beta1 served: true storage: true",
"metadata: annotations: alm-examples: >- [{\"apiVersion\":\"etcd.database.coreos.com/v1beta2\",\"kind\":\"EtcdCluster\",\"metadata\":{\"name\":\"example\",\"namespace\":\"default\"},\"spec\":{\"size\":3,\"version\":\"3.2.13\"}},{\"apiVersion\":\"etcd.database.coreos.com/v1beta2\",\"kind\":\"EtcdRestore\",\"metadata\":{\"name\":\"example-etcd-cluster\"},\"spec\":{\"etcdCluster\":{\"name\":\"example-etcd-cluster\"},\"backupStorageType\":\"S3\",\"s3\":{\"path\":\"<full-s3-path>\",\"awsSecret\":\"<aws-secret>\"}}},{\"apiVersion\":\"etcd.database.coreos.com/v1beta2\",\"kind\":\"EtcdBackup\",\"metadata\":{\"name\":\"example-etcd-cluster-backup\"},\"spec\":{\"etcdEndpoints\":[\"<etcd-cluster-endpoints>\"],\"storageType\":\"S3\",\"s3\":{\"path\":\"<full-s3-path>\",\"awsSecret\":\"<aws-secret>\"}}}]",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: my-operator-v1.2.3 annotations: operators.operatorframework.io/internal-objects: '[\"my.internal.crd1.io\",\"my.internal.crd2.io\"]' 1",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: my-operator-v1.2.3 annotations: operatorframework.io/initialization-resource: |- { \"apiVersion\": \"ocs.openshift.io/v1\", \"kind\": \"StorageCluster\", \"metadata\": { \"name\": \"example-storagecluster\" }, \"spec\": { \"manageNodes\": false, \"monPVCTemplate\": { \"spec\": { \"accessModes\": [ \"ReadWriteOnce\" ], \"resources\": { \"requests\": { \"storage\": \"10Gi\" } }, \"storageClassName\": \"gp2\" } }, \"storageDeviceSets\": [ { \"count\": 3, \"dataPVCTemplate\": { \"spec\": { \"accessModes\": [ \"ReadWriteOnce\" ], \"resources\": { \"requests\": { \"storage\": \"1Ti\" } }, \"storageClassName\": \"gp2\", \"volumeMode\": \"Block\" } }, \"name\": \"example-deviceset\", \"placement\": {}, \"portable\": true, \"resources\": {} } ] } }",
"make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>",
"docker push <registry>/<user>/<bundle_image_name>:<tag>",
"operator-sdk olm status --olm-namespace=openshift-operator-lifecycle-manager",
"operator-sdk run bundle [-n <namespace>] \\ 1 <registry>/<user>/<bundle_image_name>:<tag>",
"operator-sdk run bundle <registry>/<user>/memcached-operator:v0.0.1",
"INFO[0009] Successfully created registry pod: quay-io-demo-memcached-operator-v0-0-1 INFO[0009] Created CatalogSource: memcached-operator-catalog INFO[0010] OperatorGroup \"operator-sdk-og\" created INFO[0010] Created Subscription: memcached-operator-v0-0-1-sub INFO[0013] Approved InstallPlan install-bqggr for the Subscription: memcached-operator-v0-0-1-sub INFO[0013] Waiting for ClusterServiceVersion \"my-project/memcached-operator.v0.0.1\" to reach 'Succeeded' phase INFO[0013] Waiting for ClusterServiceVersion \"my-project/memcached-operator.v0.0.1\" to appear INFO[0019] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.1\" phase: Succeeded",
"operator-sdk run bundle-upgrade <registry>/<user>/memcached-operator:v0.0.2",
"INFO[0002] Found existing subscription with name memcached-operator-v0-0-1-sub and namespace my-project INFO[0002] Found existing catalog source with name memcached-operator-catalog and namespace my-project INFO[0009] Successfully created registry pod: quay-io-demo-memcached-operator-v0-0-2 INFO[0009] Updated catalog source memcached-operator-catalog with address and annotations INFO[0010] Deleted previous registry pod with name \"quay-io-demo-memcached-operator-v0-0-1\" INFO[0041] Approved InstallPlan install-gvcjh for the Subscription: memcached-operator-v0-0-1-sub INFO[0042] Waiting for ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" to reach 'Succeeded' phase INFO[0042] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" phase: InstallReady INFO[0043] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" phase: Installing INFO[0044] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" phase: Succeeded INFO[0044] Successfully upgraded to \"memcached-operator.v0.0.2\"",
"operator-sdk cleanup memcached-operator",
"operator-sdk scorecard <bundle_dir_or_image> [flags]",
"operator-sdk scorecard -h",
"./bundle └── tests └── scorecard └── config.yaml",
"kind: Configuration apiversion: scorecard.operatorframework.io/v1alpha3 metadata: name: config stages: - parallel: true tests: - image: quay.io/operator-framework/scorecard-test:v1.3.0 entrypoint: - scorecard-test - basic-check-spec labels: suite: basic test: basic-check-spec-test - image: quay.io/operator-framework/scorecard-test:v1.3.0 entrypoint: - scorecard-test - olm-bundle-validation labels: suite: olm test: olm-bundle-validation-test",
"make bundle",
"operator-sdk scorecard <bundle_dir_or_image>",
"{ \"apiVersion\": \"scorecard.operatorframework.io/v1alpha3\", \"kind\": \"TestList\", \"items\": [ { \"kind\": \"Test\", \"apiVersion\": \"scorecard.operatorframework.io/v1alpha3\", \"spec\": { \"image\": \"quay.io/operator-framework/scorecard-test:v1.3.0\", \"entrypoint\": [ \"scorecard-test\", \"olm-bundle-validation\" ], \"labels\": { \"suite\": \"olm\", \"test\": \"olm-bundle-validation-test\" } }, \"status\": { \"results\": [ { \"name\": \"olm-bundle-validation\", \"log\": \"time=\\\"2020-06-10T19:02:49Z\\\" level=debug msg=\\\"Found manifests directory\\\" name=bundle-test\\ntime=\\\"2020-06-10T19:02:49Z\\\" level=debug msg=\\\"Found metadata directory\\\" name=bundle-test\\ntime=\\\"2020-06-10T19:02:49Z\\\" level=debug msg=\\\"Getting mediaType info from manifests directory\\\" name=bundle-test\\ntime=\\\"2020-06-10T19:02:49Z\\\" level=info msg=\\\"Found annotations file\\\" name=bundle-test\\ntime=\\\"2020-06-10T19:02:49Z\\\" level=info msg=\\\"Could not find optional dependencies file\\\" name=bundle-test\\n\", \"state\": \"pass\" } ] } } ] }",
"-------------------------------------------------------------------------------- Image: quay.io/operator-framework/scorecard-test:v1.3.0 Entrypoint: [scorecard-test olm-bundle-validation] Labels: \"suite\":\"olm\" \"test\":\"olm-bundle-validation-test\" Results: Name: olm-bundle-validation State: pass Log: time=\"2020-07-15T03:19:02Z\" level=debug msg=\"Found manifests directory\" name=bundle-test time=\"2020-07-15T03:19:02Z\" level=debug msg=\"Found metadata directory\" name=bundle-test time=\"2020-07-15T03:19:02Z\" level=debug msg=\"Getting mediaType info from manifests directory\" name=bundle-test time=\"2020-07-15T03:19:02Z\" level=info msg=\"Found annotations file\" name=bundle-test time=\"2020-07-15T03:19:02Z\" level=info msg=\"Could not find optional dependencies file\" name=bundle-test",
"operator-sdk scorecard <bundle_dir_or_image> -o text --selector=test=basic-check-spec-test",
"operator-sdk scorecard <bundle_dir_or_image> -o text --selector=suite=olm",
"operator-sdk scorecard <bundle_dir_or_image> -o text --selector='test in (basic-check-spec-test,olm-bundle-validation-test)'",
"apiVersion: scorecard.operatorframework.io/v1alpha3 kind: Configuration metadata: name: config stages: - parallel: true 1 tests: - entrypoint: - scorecard-test - basic-check-spec image: quay.io/operator-framework/scorecard-test:v1.3.0 labels: suite: basic test: basic-check-spec-test - entrypoint: - scorecard-test - olm-bundle-validation image: quay.io/operator-framework/scorecard-test:v1.3.0 labels: suite: olm test: olm-bundle-validation-test",
"// Copyright 2020 The Operator-SDK Authors // // Licensed under the Apache License, Version 2.0 (the \"License\"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an \"AS IS\" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package main import ( \"encoding/json\" \"fmt\" \"log\" \"os\" scapiv1alpha3 \"github.com/operator-framework/api/pkg/apis/scorecard/v1alpha3\" apimanifests \"github.com/operator-framework/api/pkg/manifests\" ) // This is the custom scorecard test example binary // As with the Redhat scorecard test image, the bundle that is under // test is expected to be mounted so that tests can inspect the // bundle contents as part of their test implementations. // The actual test is to be run is named and that name is passed // as an argument to this binary. This argument mechanism allows // this binary to run various tests all from within a single // test image. const PodBundleRoot = \"/bundle\" func main() { entrypoint := os.Args[1:] if len(entrypoint) == 0 { log.Fatal(\"Test name argument is required\") } // Read the pod's untar'd bundle from a well-known path. cfg, err := apimanifests.GetBundleFromDir(PodBundleRoot) if err != nil { log.Fatal(err.Error()) } var result scapiv1alpha3.TestStatus // Names of the custom tests which would be passed in the // `operator-sdk` command. switch entrypoint[0] { case CustomTest1Name: result = CustomTest1(cfg) case CustomTest2Name: result = CustomTest2(cfg) default: result = printValidTests() } // Convert scapiv1alpha3.TestResult to json. prettyJSON, err := json.MarshalIndent(result, \"\", \" \") if err != nil { log.Fatal(\"Failed to generate json\", err) } fmt.Printf(\"%s\\n\", string(prettyJSON)) } // printValidTests will print out full list of test names to give a hint to the end user on what the valid tests are. func printValidTests() scapiv1alpha3.TestStatus { result := scapiv1alpha3.TestResult{} result.State = scapiv1alpha3.FailState result.Errors = make([]string, 0) result.Suggestions = make([]string, 0) str := fmt.Sprintf(\"Valid tests for this image include: %s %s\", CustomTest1Name, CustomTest2Name) result.Errors = append(result.Errors, str) return scapiv1alpha3.TestStatus{ Results: []scapiv1alpha3.TestResult{result}, } } const ( CustomTest1Name = \"customtest1\" CustomTest2Name = \"customtest2\" ) // Define any operator specific custom tests here. // CustomTest1 and CustomTest2 are example test functions. Relevant operator specific // test logic is to be implemented in similarly. 
func CustomTest1(bundle *apimanifests.Bundle) scapiv1alpha3.TestStatus { r := scapiv1alpha3.TestResult{} r.Name = CustomTest1Name r.State = scapiv1alpha3.PassState r.Errors = make([]string, 0) r.Suggestions = make([]string, 0) almExamples := bundle.CSV.GetAnnotations()[\"alm-examples\"] if almExamples == \"\" { fmt.Println(\"no alm-examples in the bundle CSV\") } return wrapResult(r) } func CustomTest2(bundle *apimanifests.Bundle) scapiv1alpha3.TestStatus { r := scapiv1alpha3.TestResult{} r.Name = CustomTest2Name r.State = scapiv1alpha3.PassState r.Errors = make([]string, 0) r.Suggestions = make([]string, 0) almExamples := bundle.CSV.GetAnnotations()[\"alm-examples\"] if almExamples == \"\" { fmt.Println(\"no alm-examples in the bundle CSV\") } return wrapResult(r) } func wrapResult(r scapiv1alpha3.TestResult) scapiv1alpha3.TestStatus { return scapiv1alpha3.TestStatus{ Results: []scapiv1alpha3.TestResult{r}, } }",
"func ExposeMetricsPort(ctx context.Context, port int32) (*v1.Service, error)",
"import( \"github.com/operator-framework/operator-sdk/pkg/metrics\" \"machine.openshift.io/controller-runtime/pkg/manager\" ) var ( // Change the below variables to serve metrics on a different host or port. metricsHost = \"0.0.0.0\" 1 metricsPort int32 = 8383 2 ) func main() { // Pass metrics address to controller-runtime manager mgr, err := manager.New(cfg, manager.Options{ Namespace: namespace, MetricsBindAddress: fmt.Sprintf(\"%s:%d\", metricsHost, metricsPort), }) // Create Service object to expose the metrics port. _, err = metrics.ExposeMetricsPort(ctx, metricsPort) if err != nil { // handle error log.Info(err.Error()) } }",
"var metricsPort int32 = 8383",
"import( \"k8s.io/api/core/v1\" \"github.com/operator-framework/operator-sdk/pkg/metrics\" \"machine.openshift.io/controller-runtime/pkg/client/config\" ) func main() { // Populate below with the Service(s) for which you want to create ServiceMonitors. services := []*v1.Service{} // Create one ServiceMonitor per application per namespace. // Change the below value to name of the Namespace you want the ServiceMonitor to be created in. ns := \"default\" // restConfig is used for talking to the Kubernetes apiserver restConfig := config.GetConfig() // Pass the Service(s) to the helper function, which in turn returns the array of ServiceMonitor objects. serviceMonitors, err := metrics.CreateServiceMonitors(restConfig, ns, services) if err != nil { // Handle errors here. } }",
"import ( \"github.com/operator-framework/operator-sdk/pkg/leader\" ) func main() { err = leader.Become(context.TODO(), \"memcached-operator-lock\") if err != nil { log.Error(err, \"Failed to retry for leader lock\") os.Exit(1) } }",
"import ( \"sigs.k8s.io/controller-runtime/pkg/manager\" ) func main() { opts := manager.Options{ LeaderElection: true, LeaderElectionID: \"memcached-operator-lock\" } mgr, err := manager.New(cfg, opts) }",
"operator-sdk <command> [<subcommand>] [<argument>] [<flags>]",
"operator-sdk completion bash",
"bash completion for operator-sdk -*- shell-script -*- ex: ts=4 sw=4 et filetype=sh",
"oc get clusteroperator authentication -o yaml",
"oc -n openshift-monitoring edit cm cluster-monitoring-config",
"oc get clusteroperator openshift-controller-manager -o yaml",
"oc get crd openshiftcontrollermanagers.operator.openshift.io -o yaml",
"oc edit etcd cluster",
"oc get clusteringresses.ingress.openshift.io -n openshift-ingress-operator default -o yaml",
"oc get deployment -n openshift-ingress",
"oc get network/cluster -o jsonpath='{.status.clusterNetwork[*]}'",
"map[cidr:10.128.0.0/14 hostPrefix:23]",
"oc edit kubeapiserver"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html-single/operators/index |
7.4. Resources | 7.4. Resources
7.4.1. Resources
Resources are data sources in a RESTful web service. Each resource type contains a set of common parameters that the REST API abstracts to form a resource representation, usually in XML or JSON. Users can view a resource representation, then edit the parameters and send the representation back to the resource's URL within the API, which modifies the resource. Users can also delete individual resources through REST.
A RESTful web service also groups resources into collections. Users can view a representation of all resources in a collection. Users can also send resource representations to a specific collection to create a new resource within that collection.
7.4.2. Retrieving a Resource
Obtain the state of a resource with a GET request on a URI obtained from a collection listing. Include an Accept HTTP header to define the MIME type for the response format.
You can obtain additional information from some resources by using the All-Content: true header. The RESTful Service Description Language describes which links support this header.
7.4.3. Updating a Resource
Modify resource properties with a PUT request containing an updated description from a GET request for the resource URI. Details on modifiable properties are found in the individual resource type documentation.
A PUT request requires a Content-Type header, which informs the API of the representation MIME type in the body content. Include an Accept HTTP header to define the MIME type for the response format.
The update does not include immutable resource properties that an API user has attempted to modify. If an attempt is made to modify a strictly immutable resource property, the API reports a conflict with an error message representation in the response body. Properties omitted from the representation are ignored and not changed.
7.4.4. Deleting a Resource
Delete a resource with a DELETE request sent to its URI. Include an Accept HTTP header to define the MIME type for the response format.
Some cases require optional body content in the DELETE request to specify additional properties. A DELETE request with optional body content requires a Content-Type header to inform the API of the representation MIME type in the body content. If a DELETE request contains no body content, omit the Content-Type header.
7.4.5. Sub-Collection Relationships
A sub-collection relationship defines a hierarchical link between a resource and a sub-collection. The sub-collection exists or has some meaning in the context of a parent resource. For example, a virtual machine contains network interfaces, so the API maps the relationship between the virtual machine resource and the network interfaces sub-collection.
Sub-collections are used to model the following relationship types:
Where one parent resource can contain several child resources and vice versa. For example, a virtual machine can contain several disks, and some disks are shared among multiple virtual machines.
Where mapped resources are dependent on a parent resource. Without the parent resource, the dependent resource cannot exist. For example, the link between a virtual machine and its snapshots.
Where mapped resources exist independently from parent resources but data is still associated with the relationship. For example, the link between a cluster and a network.
The API defines a relationship between a resource and a sub-collection using the link rel= attribute. The API user then queries the sub-collection.
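The retrieval, update, and delete patterns in sections 7.4.2 to 7.4.4 can be exercised with any HTTP client. The following curl sketch walks through a GET, a PUT, and a DELETE against a single resource; the engine hostname, credentials, collection name (vms), and resource ID are illustrative placeholders rather than values taken from this guide.
# Hypothetical values -- replace with your engine host, credentials,
# collection, and resource ID.
API="https://engine.example.com/ovirt-engine/api"
AUTH="admin@internal:password"

# Retrieve the current representation of a resource (section 7.4.2).
curl -k -u "$AUTH" -H "Accept: application/xml" "$API/vms/123"

# Edit the representation and send it back to the same URI (section 7.4.3).
# Only the properties present in the body are changed; omitted properties are left as-is.
curl -k -u "$AUTH" -X PUT \
  -H "Accept: application/xml" -H "Content-Type: application/xml" \
  -d '<vm><description>Updated through the REST API</description></vm>' \
  "$API/vms/123"

# Delete the resource; there is no body content, so the Content-Type header is omitted (section 7.4.4).
curl -k -u "$AUTH" -X DELETE -H "Accept: application/xml" "$API/vms/123"
The -k flag skips certificate verification and is only appropriate for test environments.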
7.4.6. XML Element Relationships
XML element links act as an alternative to sub-collections for expressing relationships between resources. XML element links are simply elements with an "href" attribute that points to the linked element. XML element links are used to model simple 1:N mappings between resources without a dependency and without data associated with the relationship, for example, the relationship between a host and a cluster.
Examples of such relationships include:
Backlinks from a resource in a sub-collection to a parent resource; or
Links between resources with an arbitrary relationship.
Example 7.7. Backlinking from a sub-collection resource to a resource using an XML element
7.4.7. Actions
Most resources include a list of action links to provide functions not achieved through the standard HTTP methods. The API invokes an action with a POST request to the supplied URI. The body of the POST requires an action representation encapsulating common and task-specific parameters.
Table 7.6. Common action parameters
async: true if the server responds immediately with 202 Accepted and an action representation containing a href link to be polled for completion.
grace_period: a grace period in milliseconds, which must expire before the action is initiated.
Individual actions and their parameters are documented in the individual resource type's documentation. Some parameters are mandatory for specific actions, and their absence is indicated with a fault response. An action also requires a Content-Type: application/xml header because the POST request requires an XML representation in the body content.
When the action is initiated asynchronously, the immediate 202 Accepted response provides a link to monitor the status of the task. A subsequent GET on the action URI provides an indication of the status of the asynchronous task.
Table 7.7. Action statuses
pending: Task has not yet started.
in_progress: Task is in operation.
complete: Task completed successfully.
failed: Task failed. The returned action representation contains a fault describing the failure.
Once the task has completed, the action is retained for an indeterminate period. Once this period has expired, subsequent GETs are 301 Moved Permanently redirected back to the target resource.
An action representation also includes links identified by the rel attribute:
Table 7.8. Action relationships
parent: A link back to the resource of this action.
replay: A link back to the original action URI. POSTing to this URI causes the action to be re-initiated.
7.4.8. Permissions
Each resource contains a permissions sub-collection. Each permission contains a user, an assigned role, and the specified resource. For example:
A resource acquires a new permission when an API user sends a POST request with a permission representation and a Content-Type: application/xml header to the resource's permissions sub-collection. Each new permission requires a role and a user.
7.4.9. Handling Errors
Some errors require further explanation beyond a standard HTTP status code. For example, the API reports an unsuccessful resource state update or action with a fault representation in the response entity body. The fault contains reason and detail strings. Clients must accommodate failed requests by extracting the fault or the expected resource representation, depending on the response status code. Such cases are clearly indicated in the individual resource documentation. | [
"GET /ovirt-engine/api/ [collection] / [resource_id] HTTP/1.1 Accept: [MIME type]",
"GET /ovirt-engine/api/ [collection] / [resource_id] HTTP/1.1 Accept: [MIME type] All-Content: true",
"PUT /ovirt-engine/api/ collection / resource_id HTTP/1.1 Accept: [MIME type] Content-Type: [MIME type] [body]",
"DELETE /ovirt-engine/api/ [collection] / [resource_id] HTTP/1.1 Accept: [MIME type]",
"GET /ovirt-engine/api/collection/resource_id HTTP/1.1 Accept: application/xml HTTP/1.1 200 OK Content-Type: application/xml <resource id=\"resource_id\" href=\"/ovirt-engine/api/collection/resource_id\"> <link rel=\"subcollection\" href=\"/ovirt-engine/api/collection/resource_id/subcollection\"/> </resource>",
"GET /ovirt-engine/api/collection/resource_id/subcollection HTTP/1.1 Accept: application/xml HTTP/1.1 200 OK Content-Type: application/xml <subcollection> <subresource id=\"subresource_id\" href=\"/ovirt-engine/api/collection/resource_id/subcollection/subresource_id\"> </subresource> </subcollection>",
"GET /ovirt-engine/api/collection/resource_id/subcollection/subresource_id HTTP/1.1 HTTP/1.1 200 OK Content-Type: application/xml <subcollection> <subresource id=\"subresource_id\" href=\"/ovirt-engine/api/collection/resource_id/subcollection/subresource_id\"> <resource id=\"resource_id\" href=\"/ovirt-engine/api/collection/resource_id\"/> </subresource> </subcollection>",
"<resource> <actions> <link rel=\"start\" href=\"/ovirt-engine/api/collection/resource_id/start\"/> <link rel=\"stop\" href=\"/ovirt-engine/api/collection/resource_id/stop\"/> </actions> </resource>",
"POST /ovirt-engine/api/collection/resource_id/action HTTP/1.1 Content-Type: application/xml Accept: application/xml <action> <async>true</async> </action> HTTP/1.1 202 Accepted Content-Type: application/xml <action id=\"action_id\" href=\"/ovirt-engine/api/collection/resource_id/action/action_id\"> <async>true</async> </action>",
"GET /ovirt-engine/api/collection/resource_id/action/action_id HTTP/1.1 Accept: application/xml HTTP/1.1 200 OK Content-Type: application/xml <action id=\"action_id\" href=\"/ovirt-engine/api/collection/resource_id/action/action_id\"> <status> <state>pending</state> </status> <link rel=\"parent\" /ovirt-engine/api/collection/resource_id\"/> <link rel=\"replay\" href=\"/ovirt-engine/api/collection/resource_id/action\"/> </action>",
"GET /ovirt-engine/api/collection/resource_id/permissions HTTP/1.1 Accept: application/xml HTTP/1.1 200 OK Content-Type: application/xml <permissions> <permission id=\"permission-id\" href=\"/ovirt-engine/api/collection/resource_id/permissions/permission_id\"> <role id=\"role_id\" href=\"/ovirt-engine/api/roles/role_id\"/> <user id=\"user_id\" href=\"/ovirt-engine/api/users/user_id\"/> <resource id=\"resource_id\" href=\"/ovirt-engine/api/collection/resource_id\"/> </permission> </permissions>",
"POST /ovirt-engine/api/collection/resource_id/permissions HTTP/1.1 Content-Type: application/xml Accept: application/xml <permission> <role id=\"role_id\"/> <user id=\"user_id\"/> </permission> HTTP/1.1 201 Created Content-Type: application/xml <permission id=\"permission_id\" href=\"/ovirt-engine/api/resources/resource_id/permissions/permission_id\"> <role id=\"role_id\" href=\"/ovirt-engine/api/roles/role_id\"/> <user id=\"user_id\" href=\"/ovirt-engine/api/users/user_id\"/> <resource id=\"resource_id\" href=\"/ovirt-engine/api/collection/resource_id\"/> </permission>",
"PUT /ovirt-engine/api/collection/resource_id HTTP/1.1 Accept: application/xml Content-Type: application/xml <resource> <id>id-update-test</id> </resource> HTTP/1.1 409 Conflict Content-Type: application/xml <fault> <reason>Broken immutability constraint</reason> <detail>Attempt to set immutable field: id</detail> </fault>"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/sect-resources |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.